The Fragmented Orchestra is a musical composition and installation conceived by Jane Grant, John Matthias and Nick Ryan, modeled after the human brain's neural network. In essence, it is a huge interactive network of sound. Here is a brief synopsis of this enormous brain-based installation:
'Soundboxes' were set up in 24 public locations across the UK, including stadiums, schools, music studios, a church and an observatory. These soundboxes were audio transceivers of sorts: they acted as both microphones and speakers. Participants (the general public) could go to any of these soundboxes, make a sound, and hear how the sound they contributed, as well as ambient sounds from the area, interacted with and changed the overall composition. For example, the soundbox at the Kielder Observatory in Northumberland might pick up sounds of wind, rain and stargazers, whilst the soundbox at Blueprint Studios in Manchester would pick up band performances as they happened at the studio.
Fragments of sound from each soundbox, 30–500 ms in length, were received at the central 'cortex' at the Foundation for Art and Creative Technology (FACT) in Liverpool. At FACT, the 24 sources were fed into a computer. Software developed by the project's creators manipulated the many fragments in a fashion modeled on a spiking neural network, a model of the brain's activity. In the computer, the 24 sources interacted and caused each other to evolve, resulting in 24 new streams of sound. The result was output to a room at FACT with 24 separate speakers (each corresponding to one of the locations) and could be listened to as a whole. It was also possible to visit the project's website and create your own composition using the 24 sound sources.
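To make the 'cortex' idea a little more concrete, here is a minimal sketch of a spiking neural network of the leaky integrate-and-fire kind, with 24 units, one per soundbox. This is purely my own speculation about the general shape of such a system, not the project's actual software: the constants, the random coupling weights and the random 'audio energy' drive are all invented for illustration. In the real installation, a unit firing would presumably trigger playback of one of the 30–500 ms fragments on the matching speaker.

```python
import random

random.seed(1)

N = 24          # one unit per soundbox location
THRESHOLD = 1.0 # membrane potential at which a unit fires (spikes)
LEAK = 0.9      # per-step decay of membrane potential (the "leaky" part)

# Hypothetical all-to-all coupling weights between the 24 channels.
W = [[random.uniform(0.0, 0.3) if i != j else 0.0 for j in range(N)]
     for i in range(N)]

potential = [0.0] * N
spike_log = []  # (time_step, unit) pairs; each spike would cue a sound fragment

for t in range(200):
    # Stand-in for incoming audio energy from each soundbox at this time step.
    drive = [random.uniform(0.0, 0.25) for _ in range(N)]
    fired = [potential[i] >= THRESHOLD for i in range(N)]
    for i in range(N):
        if fired[i]:
            spike_log.append((t, i))  # here: trigger a 30-500 ms fragment on channel i
            potential[i] = 0.0        # reset the unit after it fires
    for i in range(N):
        # Leak, then integrate external drive plus input from units that just fired.
        synaptic = sum(W[j][i] for j in range(N) if fired[j])
        potential[i] = potential[i] * LEAK + drive[i] + synaptic

print(len(spike_log), "spikes across", N, "channels")
```

The appeal of this structure for the piece is that the channels genuinely influence one another: a burst of activity at one soundbox pushes the coupled units toward their thresholds, so sound made in one city can ripple through the whole 24-channel composition.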
This project gives me so many ideas! I imagine that the software developed for this system is quite basic when it comes to actually modeling the brain. As our understanding of the human brain becomes clearer, the development of artificial intelligence systems such as the Fragmented Orchestra's cortex will become more sophisticated. Imagine being able to sing a musical idea to a computer and have it develop that idea much like a human would. Perhaps a computer could be used to create a sort of sonata this way: sing a couple of themes, let the computer work with and develop those ideas, edit them as needed, let the computer develop them further, and so on.