I am now starting to think about the ideas around a solution for performing and connecting to cultural communities.
The latest thoughts are that this could be an integrated or holistic design that provides a solution for tactile creative expression, HCI (human-computer interface), and network or community connection. Below is a text-based flow diagram of my first thoughts on how such an instrument could operate. There are three modules that interconnect, and with any of the modules from 1 to 3, the instrument could be used as part of a creative solution.
Tactile
Pick-up based electro/acoustic interfaces and surfaces
- Mono/duo chord - double bridged with pickup(s)
- Ridged/comb surface to run a finger pick across
- Contact mic(s) buttons/pressure switches/keys
Sensors
- Slider potentiometer
- Spring/flex sensor
- Pressure sensor
- Air flow/breath sensor
- Light/distance sensor (short range) - virtual theremin
- Camera
- Movement sensor
Data Interface
The different properties of the instrumental interface can be utilised in unique ways. The acoustic properties can be retained and amplified as they are: the mono/duo chord, for example, has the potential to output its acoustic properties directly, as does the ridged/comb surface or any other touch-based interface using some sort of pickup. Here the instrument becomes a combination of acoustic and digital outputs - the acoustic properties output and amplified directly, and the digital data streamed and processed to be converted into sound output. The data interface is where the output from the instrument, and the data that is created, is streamed, analysed, and converted into results that can drive sonic solutions, which are then output as sound.
Many solutions exist and are constantly being developed; my experience to date has been with Arduino circuit boards and sensors, connected to a computer for the software interface. In the future I will be looking into options such as the Raspberry Pi to create standalone instruments, along with programming directly onto the Arduino or a similar type of circuit board.
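As a rough sketch of what this serial connection to the computer might look like in code (in Python with the pyserial library, rather than the patching environments I describe below; the port name, baud rate, and 0-1023 value range are placeholder assumptions, not fixed parts of the design):

```python
import serial  # pyserial: install with `pip install pyserial`

# Placeholder port name and baud rate - these depend on the actual Arduino setup.
PORT = "/dev/ttyUSB0"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as board:
    while True:
        # The Arduino sketch is assumed to print one integer reading per line.
        line = board.readline().decode("ascii", errors="ignore").strip()
        if not line.isdigit():
            continue  # skip empty or partial lines
        raw = int(line)            # e.g. a 10-bit analogue reading, 0-1023
        control = raw / 1023.0     # normalise to 0.0-1.0 for mapping to sound
        print(control)
```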
HCI Software Processing
Again, many solutions exist, and I need to do a lot more exploration in this field. To date my experience has been with Pure Data, MaxMSP, and Processing, as well as the Arduino programming software. I have started to look into Sonic Pi and SuperCollider. I will be looking to further develop my understanding and knowledge of more code-based applications, to compare whether they offer more stability or flexibility as a solution, especially with regard to developing a standalone instrument that does not need a separate computer to interface with.
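One simple way a script like the serial-reading sketch above could hand its values to Pure Data, MaxMSP, or SuperCollider is OSC (Open Sound Control) over UDP. The sketch below uses the python-osc library; the port number and the /sensor/value address are placeholders I have invented for illustration and would need to match whatever the receiving patch expects.

```python
from pythonosc.udp_client import SimpleUDPClient  # install with `pip install python-osc`

# Placeholder destination: a Pd/Max/SuperCollider patch listening on this port.
client = SimpleUDPClient("127.0.0.1", 5005)

def send_control(value: float) -> None:
    """Forward a normalised 0.0-1.0 control value to the sound patch."""
    client.send_message("/sensor/value", value)  # address is a placeholder

send_control(0.42)  # e.g. a value produced by the serial-reading sketch above
```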
Network Performance Integration
This is a constantly developing area for synchronised (or rather, because of latency, non-synchronised) creative performance. It is an exciting area for connecting communities that would otherwise face large financial and/or physical obstacles to connecting, collaborating, and sharing their creative practices. There is also a movement towards using this technology to reduce the need to travel, because of the impact travel has on the environment. Of course, most recently it has become essential due to the isolation brought about by the COVID-19 pandemic.
The goal or proposal for this integrated instrument would be a system that connects directly in some way, and connects efficiently so as to reduce latency as much as possible. So far I have been experimenting with streaming data rather than audio through the system, and this is one possible solution to explore further. I will go into more detail on this in another post.
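To give a rough sense of why streaming data rather than audio is attractive (these are back-of-envelope assumptions, not measurements): a handful of sensor values sent as OSC messages at, say, 100 messages per second amounts to only a few kilobytes per second, whereas a single channel of uncompressed 16-bit, 48 kHz audio is roughly 48,000 samples x 2 bytes, about 96 kB per second, before any network overhead. The data stream is far lighter, which leaves more headroom for keeping the connection responsive.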
Streaming performance(s) solutions and options
Currently I have been exploring two different approaches to a creative practice using a network performance solution. The first is a User to Remote system. Here the user processes the remote performer's input - audio is monitored one way, via a round trip back to the User from the Remote. The audio is heard individually at each end, and the interaction does not rely on responding to audio latency caused by network inconsistencies.
The second is something I will term a User to User system - collaboration with audio and/or data streamed and received from both users synchronously, albeit with latency.
Just for this post, and just for now, there is a basic breakdown below of how I envisage, or wish to experiment further with, this solution within the research project:
- Data/audio streaming software - to stream data from the instrument into the network system
- Performance network connection - this would include the interface for the performance network, such as Source-Connect Now or Jamulus. There are many systems being developed, and hopefully I will be able to reference more of these later.
- Data connection between performers. Currently I am using a separate VPN that can connect two or more computers directly so they can share data through port sharing. Ideally this would be something integrated within the network performance connection (a sketch of this data connection follows the list below).
- Audio streaming - this could either be one-way streaming from the remote performer back to the user, when the user/remote system is being used, or it would need to be streamed as usual with the current network performance connections or solutions, where audio is shared from both or all performers.
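To make the data connection between performers more concrete, here is a hedged sketch in Python (again with the python-osc library) of the one-way flow. It is not the MaxMSP patches I am actually using, and the VPN IP address, port, and OSC address are placeholders: the user side streams control data to the remote performer's machine, and the remote side receives it to drive the processing of the performer's live input.

```python
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

REMOTE_IP = "10.8.0.2"  # placeholder VPN address of the remote performer's machine
PORT = 9000             # placeholder port shared through the VPN

# --- User side: stream control data one way to the remote patch ---
client = SimpleUDPClient(REMOTE_IP, PORT)

def user_send(value: float) -> None:
    """Send a control value (e.g. from the instrument's sensors) to the remote patch."""
    client.send_message("/control/process", value)  # address is a placeholder

# --- Remote side: receive the data and map it to a processing parameter ---
def on_control(address: str, value: float) -> None:
    # In the real system this would drive the processing of the performer's
    # live input; here the received value is simply printed.
    print(f"{address} -> {value}")

def run_remote_receiver() -> None:
    dispatcher = Dispatcher()
    dispatcher.map("/control/process", on_control)
    BlockingOSCUDPServer(("0.0.0.0", PORT), dispatcher).serve_forever()
```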
Diagram of user and remote Network Performance
This diagram shows how the user patch and the remote performer patch connect and stream data and audio between each other. The data streams from the user patch to the remote patch, and the audio subsequently streams from the remote patch back to the user patch. The performer inputs sound to the remote patch, where it is processed within their own system. The resulting processed sound, along with the unprocessed sound, is streamed back to the user. So the audio is only streamed one way, and the data is only streamed one way.
NETWORK send and RECEIVE MaxMSP Patches