Talking to water
One of the things I like about my job at the Master Digital Design is the odd requests I get from students from time to time.
This time a group of students came to me asking if I could guide them on how they could talk to water.
But how does one talk to water?
And what does it even mean to talk to water?
The concept and execution of the project were done by Hieu Nguyen, Bo Németh, Ekta Gadekar and Viktoriya Marchenko. This article will go into some of the technical details of the project for which I provided guidance.
I will keep it straightforward and easy to follow, leaving out the jargon until we go down the rabbit hole at the end.
For the non-technical aspects, I would like to refer you to the students themselves.

What we say to water can impact its crystals: positive words create intricate structures and negative words lead to collapse (Dr. Masaru Emoto).
AuraMotions is an art project that translates what we say into different colours and patterns in water.
It applies sentiment analysis technology to detect emotions from what we say. Then, through MQTT, data is sent to TouchDesigner to create captivating effects on water.
How does one talk to water?
Well, that is an interesting question.
Like most of my answers, this one started with “I don’t know, but let’s figure it out together”.
Luckily I am quite comfortable with the art of the bodge.
Making prototypes work just enough to convey a concept. No need for perfection, fault-proof code or future-proof solutions.
So, what is the concept?
Put simply, the concept is to use sentiment analysis to detect the emotions of the words we say.
These emotions are then used as the input for generative art in the AuraMotions installation.
The game plan
In order to talk to water, the problem was broken down into:
- Speech to text — convert a human voice into the words being spoken
- Sentiment analysis — determine the emotions of those words
- Stream results — connect with AuraMotions
1. Speech-to-text
We cannot imagine our lives without ChatGPT anymore. But did you know ChatGPT has a forgotten little brother at OpenAI, overlooked even more since the release of Sora?
Whisper is “an open-sourced neural net that approaches human level robustness and accuracy on English speech recognition”.
Some cool folks even built a Python wrapper around the open-sourced model for easy, free and local use.
And just like that we have our speech-to-text working.
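With the openai-whisper package installed and ffmpeg available on the system, a minimal sketch of this step could look like the following — the audio file name is just an example:

import whisper

# Load one of the pre-trained models; "base" is small enough to run locally
model = whisper.load_model("base")

# Transcribe a recording; whisper uses ffmpeg under the hood to decode it
result = model.transcribe("recording.wav")
print(result["text"])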
2. Sentiment analysis
Now that we have the text, we need to determine the emotion of the words. Are they positive, negative, neutral or …?
A tool I had been wanting to play around with for a while was Hugging Face.
Hugging Face allows you and me, as mere mortals, to use very sophisticated open-sourced machine learning models.
In our case we will use a text-classification model to determine the emotions of the words.
Like magic 🪄, reasonably accurate sentiment analysis with a few lines of code.
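A minimal sketch of that step, using the transformers library — the specific emotion model below is an example pick, not necessarily the one used in the installation:

from transformers import pipeline

# Any text-classification model that outputs emotions will do;
# this one is a commonly used example
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

print(classifier("I love the way you ripple"))
# e.g. [{'label': 'joy', 'score': 0.98}]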
3. Stream results
In architecture rooms this would be a hot topic: multiple sessions to discuss the up- and downsides of different streaming protocols, calculating throughput needs, determining latency requirements and writing up reliability specifications.
But we are bodging things over here; we just need to send some data from one tool to another.
At the university we have set up an MQTT broker to do just that.
Even though UDP messaging would probably have been a better fit for the job, we used MQTT as it was already there, configured, and known to work.
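Publishing the detected emotion is then a one-liner with the paho-mqtt package — the broker address and topic name below are placeholders, not the university's actual setup:

import json
import paho.mqtt.publish as publish

emotion = {"label": "joy", "score": 0.98}

publish.single(
    "auramotions/emotion",          # example topic TouchDesigner listens to
    payload=json.dumps(emotion),
    hostname="broker.example.org",  # placeholder for the university broker
)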
And that's it.
We can now talk to water.
From here on out the students could use the emotions sent over MQTT to create any (generative) visual representation they need.
Magic with 30 lines of code
By standing on the shoulders of giants, we can bodge together our wildest imaginations.
Thank you random strangers on the internet ❤️.
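Glued together, the whole pipeline could look roughly like the sketch below. The microphone capture via sounddevice is my own assumption, and the model names, broker and topic are placeholders as before:

import json
import tempfile

import paho.mqtt.publish as publish
import sounddevice as sd
import whisper
from scipy.io.wavfile import write
from transformers import pipeline

SAMPLE_RATE = 16_000

model = whisper.load_model("base")
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

while True:
    # Record a few seconds of audio from the default microphone
    audio = sd.rec(int(5 * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()

    # Whisper works on files, so write the recording to a temporary wav
    with tempfile.NamedTemporaryFile(suffix=".wav") as wav:
        write(wav.name, SAMPLE_RATE, audio)
        text = model.transcribe(wav.name)["text"]

    if not text.strip():
        continue

    # Classify the emotion and stream it to TouchDesigner over MQTT
    emotion = classifier(text)[0]
    publish.single(
        "auramotions/emotion",
        payload=json.dumps(emotion),
        hostname="broker.example.org",
    )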
Experience it yourself
Before we go down the rabbit hole and I lose you: you can experience the project yourself, or use it as the basis for your next bodge project!
Down the rabbit hole
Awesome, you made it this far.
That means you are either really bored or a super nerd; either way, let's go!
Works on my machine
While the code presented above worked on my machine, and probably works on your machine with some technical knowledge, the bodged solution is not without its flaws.
The project runs fine when all tools and dependencies are available, but it broke down when the students tried to run it on their own machines.
Either (the correct version of) Python was not installed, or dependencies, like ffmpeg, were not available on the students' machines.

Docker to the rescue!
Mismatching versions and missing dependencies are a common and solved problem in software development.
We make a Dockerfile and ship the project that way.
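A rough sketch of what such a Dockerfile could look like — the package list and the script name are my assumptions based on the tools mentioned above:

FROM python:3.11-slim

# ffmpeg is the system dependency whisper needs to decode audio
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .

# hypothetical dependency list for the speech-to-text and sentiment steps
RUN pip install --no-cache-dir openai-whisper transformers torch paho-mqtt

CMD ["python", "main.py"]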
Voilà, packaging the whole project neatly in a Docker container will solve all our problems, right?
Right?!

The final bodge
While Docker solved the problem of dependencies, it introduced a new one.
Audio was not being captured by the container, at least not on macOS machines.
We don't have the luxury of running Docker with --device /dev/snd as you would on a Linux machine.
After some googling I found a tool called PulseAudio, which can “[…] transfer audio to a different machine […]”.
This could be to a machine on the other side of the room, building, city, world or to a docker container running on the same machine.
To make installing PulseAudio as easy as possible for the students, I wrote a small bash script.
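The script itself might do something roughly like the following — the exact PulseAudio module options are an assumption on my part:

#!/usr/bin/env bash
set -e

# Install PulseAudio via Homebrew if it is not there yet
brew list pulseaudio >/dev/null 2>&1 || brew install pulseaudio

# Start a PulseAudio daemon that accepts audio over TCP, so the docker
# container can reach it through the host's IP address
pulseaudio \
  --load="module-native-protocol-tcp auth-anonymous=1" \
  --exit-idle-time=-1 \
  --daemon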
So finally, the students (and you) can run the project with two simple commands:
./install-pulseaudio-for-mac.sh
docker run --net=host --privileged -e PULSE_SERVER=_HOST_IP_ xiduzo/whisper-sentiment-analysis:latest