
Collective minds/bodies



Project statement: The outcome of any generative AI (one that uses a large language model) is a diorama of language itself.


In this project, I want to explore how we communicate and what language is. I think of language as any system we agree upon to communicate our intentions to one another. For example, we agree that the five letters “water” contain the meaning of the clear liquid we must consume in order to live. Similarly, we agree on certain movements of the body, like waving a hand, to communicate a friendly greeting.

I came to think about this topic while using ChatGPT. So-called “large language models” are actually betting machines that predict what comes next given the words that came before. LLMs trained with the backpropagation algorithm assign weights to different words and use a feedback loop to make a better prediction each time we train them. Thus, the outcome of any generative AI is the most probable output, assembled piece by piece. The output is always a safe bet, because it merely reflects what is weighted most heavily in all the available data, and it flattens out the quirks of real creativity and unpredictability (which is the nature of creativity). But is the dataset unbiased? Are the nodes that carry less weight left unrepresented? And doesn’t this pattern of outputting something a system has already agreed upon millions of times sound a lot like the language system itself?

We, as individual humans, each have a distinctive experience of the world, but when we express it through a representational system like language, we cannot express the characteristics that the system fails to capture. When we speak a language, we concede to a widely agreed-upon system that flattens out the quirks of real creativity and originality. That is why some people turn to writing poems, which do not necessarily convey everything through language alone but also create an environment for others to have their own experiences.
AI is a diorama of the language world we inhabit, and through this project I want to create an environment that allows the audience to reflect on how we communicate and what language is.
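The “betting machine” idea above can be illustrated with a toy model (a deliberately tiny sketch, nothing like a real LLM): count which word most often follows each word in a small corpus, then always output the most heavily weighted continuation. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A bigram "betting machine": given the previous word, always bet on the
# most frequent next word seen in training -- the "safe bet" described above.
corpus = "we drink water . we drink tea . we drink water . water is life .".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the single most frequent continuation, flattening out all
    the less-weighted alternatives."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("drink"))  # "water" outweighs "tea", so "water" always wins
```

However the audience phrases it, the less probable word (“tea”) is never produced: the quirky alternative is exactly what gets flattened away.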



In this project, the audience will pass around a ball with sensors inside, and the sensor data will be the input to a Max patch that generates music (analogous to an LLM trained on available data). I will dance to the music, with specific moves mapped to specific sounds (analogous to the backpropagation algorithm that transforms the data in certain ways and gives out the most probable outcome). It is a visual, movement-based representation of how unpredictability and creativity are stripped away in the process of AI.


(image made with DALL·E using “blade runner style of collective minds and bodies for music AI”)



Performance documentation:



Ball ideation:



(image made with DALL·E using “cyborg holding a ball with physical computing modules in it and sensors on the surface for performance in live music venue in blade runner 2049 style”)

Process blog 1:

Trying to send Arduino sensor data to the computer wirelessly.



Process Blog 2: 

Got the Max patch working. It reads the accelerometer data (X, Y, Z) from the Arduino Nano 33 BLE over a wired serial connection.

Arduino code

Max patch
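On the computer side, the wired transmission amounts to reading lines of serial text and splitting them into three numbers before they reach the patch. A minimal sketch of that parsing step, assuming the Arduino prints comma-separated readings (the format is an assumption, not the project’s exact code):

```python
def parse_accel_line(line):
    """Parse one line of serial output like '0.01,-0.98,0.12' into
    (x, y, z) floats. The comma-separated format is an assumption --
    it depends on how the Arduino sketch prints the values."""
    x, y, z = (float(v) for v in line.strip().split(","))
    return x, y, z

print(parse_accel_line("0.01,-0.98,0.12"))
```

With pyserial, the same function could be applied to each `readline()` from the board’s port before forwarding the values onward.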






Process Blog 3: 

Succeeded in wirelessly transmitting the data from the Arduino Nano 33 BLE to the Max patch.



Arduino code 


Max patch
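One common way to send X/Y/Z over BLE is to pack the three floats into a single 12-byte characteristic value and unpack them on the receiving end. This byte-packing sketch shows the idea; the little-endian three-float layout is an assumption, not necessarily the project’s exact protocol:

```python
import struct

def pack_accel(x, y, z):
    """Pack three 32-bit floats into 12 bytes, little-endian -- a common
    layout for fitting X/Y/Z into one BLE characteristic value."""
    return struct.pack("<fff", x, y, z)

def unpack_accel(payload):
    """Recover the (x, y, z) tuple from the 12-byte payload."""
    return struct.unpack("<fff", payload)

print(unpack_accel(pack_accel(0.5, -1.25, 0.125)))  # (0.5, -1.25, 0.125)
```

Keeping the payload small matters here, since BLE characteristics have tight size limits compared to serial or Wi-Fi.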



Process Blog 4:

I found out that the Bluetooth range of the Arduino Nano 33 BLE is only about 1 meter (3 feet) in my setup, so I am considering using Wi-Fi transmission, radio, or a module with a longer Bluetooth range on top of the Nano 33 BLE board. I am currently trying the Arduino Nano 33 IoT, which is a Wi-Fi board, but haven’t gotten it working yet. In the meantime, I was at the Trona Pinnacles in California shooting a dance film about aliens, and after I flew back I also performed at a gallery opening (https://www.eventbrite.com/e/inter-rituals-between-materiality-and-performance-opening-night-tickets-588703476947).


Process Blog 5: 

I routed the MIDI output into a plug-in synthesizer, and I’m currently adjusting the synthesizer to achieve a specific style: that of the Blade Runner 2049 soundtrack. This involves experimenting with different settings, such as the waveform, filters, and modulation, to replicate the soundtrack’s distinctive sound. The process requires careful attention to detail and a good ear for sound design, since small adjustments can make a big difference in the final result. Once the desired sound is achieved, it can be used to create music or sound effects that fit the mood and style of the project.
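Before a sensor reading can drive a synthesizer parameter over MIDI, it has to be scaled into the 0–127 range that MIDI control-change messages use. A minimal scaling sketch, where the ±2 g accelerometer range is an assumption about the sensor, not a measured value from the project:

```python
def to_midi_cc(value, lo=-2.0, hi=2.0):
    """Scale a sensor reading in [lo, hi] to a MIDI CC value 0-127,
    clamping out-of-range readings. The +/-2 g default range is an
    assumption about the accelerometer output."""
    clamped = max(lo, min(hi, value))
    return round((clamped - lo) / (hi - lo) * 127)

print(to_midi_cc(0.0))  # mid-range reading maps near the middle: 64
```

The same mapping could feed any of the filter or modulation parameters mentioned above, one CC number per parameter.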



Process Blog 6: 

Successfully switched to the Arduino Nano 33 IoT board, which significantly expands the range for sending and receiving data. The board is now connected to my iPhone hotspot, so it can communicate with other devices anywhere within Wi-Fi range, far beyond the Bluetooth range of the Nano 33 BLE.

I also successfully connected the Arduino data to the Max patch, which lets me create any desired output. While still working on the sound output, I am also researching extending the output channels to data visualization and DMX lighting.
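A typical way to get Wi-Fi sensor data into Max is to send OSC packets over UDP to a `[udpreceive]` object. This sketch hand-builds a minimal OSC message with three float arguments; the `/accel` address and port 7400 are assumptions for illustration, not the project’s actual settings:

```python
import socket
import struct

def osc_message(address, *floats):
    """Build a minimal OSC packet with float arguments, the format that
    Max's [udpreceive] object understands."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte multiples.
        return b + b"\x00" * (4 - len(b) % 4)
    addr = pad(address.encode())
    tags = pad(("," + "f" * len(floats)).encode())
    args = b"".join(struct.pack(">f", f) for f in floats)  # big-endian floats
    return addr + tags + args

# Send one accelerometer reading to a patch listening on localhost:7400.
msg = osc_message("/accel", 0.5, -1.25, 0.125)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 7400))
sock.close()
```

On the Arduino side, the equivalent would be a WiFiNINA UDP send from the Nano 33 IoT; the packet layout stays the same either way.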

This week’s achievement represents a significant milestone in the development of the project, as it demonstrates the successful integration of hardware and software components.






Process Blog 7:


I want to create a more intentional appearance for the ball, so I decided to either laser-cut or 3D-print a ball structure. I found a pattern on Thingiverse (https://www.thingiverse.com/thing:2026199), but because of the varying thickness of the material, fabricating this icosahedron was a long process, with lots of errors and iterations along the way. For example, I found that some acrylic boards are thicker on one end than the other. But I am happy with the outcome.




Process Blog 8: 

Another new version of the Max patch, this time with beats. The faster the angular acceleration, the denser the beats.
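The acceleration-to-density mapping can be sketched as a simple inverse scaling: a larger angular acceleration shrinks the gap between beats. The specific ranges (80–800 ms, a cap of 10 on the acceleration magnitude) are illustrative assumptions, not the patch’s actual numbers:

```python
def beat_interval_ms(angular_accel, min_ms=80, max_ms=800, max_accel=10.0):
    """Map angular acceleration magnitude to the gap between beats:
    faster motion -> shorter interval -> denser beats. The numeric
    ranges are illustrative assumptions."""
    a = max(0.0, min(max_accel, abs(angular_accel)))
    return max_ms - (max_ms - min_ms) * a / max_accel

print(beat_interval_ms(0.0))   # ball at rest: sparse beats, 800 ms apart
print(beat_interval_ms(10.0))  # fast spin: dense beats, 80 ms apart
```

In Max this corresponds to driving a `[metro]` interval from the scaled sensor value, so the beat clock speeds up as the ball is thrown harder.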



Process Blog 9:

The showcase! Many unexpected technical difficulties came up before the showcase, including the instability of the Wi-Fi board and the fragility of the icosahedron, so I improvised a lot to fix everything on site. Here is the audience interacting with the ball: