Notes About the Process

In this 4th installment of the "synesthesias", we close the circle on an exploration begun back in 2016: the search for an artistic process that could sustainably express an experience of synesthesia intermingled with daydreaming. Like all the previous ones, this one too can be watched either as a flat video or as an immersive VR experience. The soundtrack is also available on all the major music streaming services.

On the Soundtrack

The Book of Tea is an actual book, written by Kakuzō Okakura and published in 1906. More than a century later, Clive Catterall recorded an audiobook version of it, which I later found on LibriVox, published under a Creative Commons license. The philosophical messages of the book struck a chord with me. Meanwhile, Clive Catterall's incredibly soothing voice, reminiscent of the many recordings by Alan Watts, just seemed to deserve a psytrance drop at the end of every sentence. I couldn't resist, so I noted down the many timestamps in the audiobook I felt would be nice to include, then narrowed the selection down to a few that captured the experience of art that is so dear to me. Eventually I strung them together into the arrangement I hope you will enjoy listening to as much as I enjoyed making it.

Another note I would like to include is a huge thank-you to Lost Memory (Fred Torres) for his amazing work mastering this track. We crossed paths as he was performing right after my visual DJ set at Secret Psychedelica Aquarius 2023, and I'm grateful we stayed in touch since then and eventually got to work together on this.

On the Visual Experience

To explain the motivations behind this video, I think a recap is in order:

  • Synesthesia #1 was produced exclusively using Blender and Python scripting. One set of scripts was responsible for ingesting MIDI and WAV files and producing animation envelopes in response. Another set of scripts would read these animation envelopes and use them to procedurally populate a Blender scene frame by frame. Animation envelopes were inspected and previewed via matplotlib (a rough sketch of this idea follows the list).
  • Synesthesia #2: this one was produced largely through the same process as its predecessor. There was, however, one major improvement: the first set of scripts was plugged into a GUI built in Unreal Engine, which let me build and preview the animation envelopes through a visual flow (and also synced them with the soundtrack). The scene was still built procedurally in Blender via Python scripts reading said envelopes.
  • Synesthesia #3: the third installment involved a major departure from the original process. In this instance, seamless loops were made and rendered in Blender completely independently of any song. This allowed me to begin exploring intermediate levels of detail in the representation of sounds while reusing visual material: lower levels of detail can more easily visualize music tracks for which MIDI data or stems are not available, and the reusable visual material can bend to visualize different tracks very rapidly. I ended up building a second GUI responsible for reading animation envelopes (produced the same way as in Synesthesia #2) and using them to blend multiple layers of multiple animated loops, as well as to apply effects (sketched after the list). Together, the two GUIs effectively form a procedural video editor, which I then used both to visualize First Flight (with a higher level of "synesthetic detail") and entire DJ sets (with a lower level of "synesthetic detail") that I went on to perform on Twitch, YouTube, and at IRL events.
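
Since the actual scripts were never shared, here is a minimal sketch of the idea behind that first set of scripts, assuming mido for the MIDI parsing: every note-on event spikes a per-frame value that then decays, and the result is previewed with matplotlib. The file name, frame rate, and decay factor are all placeholders, not values from the real pipeline.

```python
# A minimal sketch, not the original scripts: turn MIDI note-on events into a
# per-frame "animation envelope" that spikes on each note and decays over time.
import mido
import numpy as np
import matplotlib.pyplot as plt

FPS = 30       # hypothetical render frame rate
DECAY = 0.92   # hypothetical per-frame decay factor

mid = mido.MidiFile("track.mid")  # placeholder file name

# Collect (time_in_seconds, normalized_velocity) for every note-on event.
# Iterating a MidiFile yields messages whose .time is a delta in seconds.
events = []
t = 0.0
for msg in mid:
    t += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        events.append((t, msg.velocity / 127.0))

# Rasterize into one value per video frame, with exponential decay after hits.
n_frames = int(t * FPS) + 1
envelope = np.zeros(n_frames)
for when, strength in events:
    frame = int(when * FPS)
    envelope[frame] = max(envelope[frame], strength)
for i in range(1, n_frames):
    envelope[i] = max(envelope[i], envelope[i - 1] * DECAY)

# Inspect and preview the envelope via matplotlib, as described above.
plt.plot(np.arange(n_frames) / FPS, envelope)
plt.xlabel("time (s)")
plt.ylabel("envelope")
plt.show()

# The second set of scripts would then read these values inside Blender, e.g.:
#
#   import bpy
#   obj = bpy.data.objects["Pulse"]            # hypothetical object name
#   for frame, v in enumerate(envelope):
#       obj.scale = (1 + v, 1 + v, 1 + v)
#       obj.keyframe_insert(data_path="scale", frame=frame)
```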

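And a similarly rough sketch of the blending step from Synesthesia #3, as referenced in the list above: given two seamless loops decoded into frame arrays, an envelope cross-fades between them frame by frame. The array shapes and the helper name blend_layers are my own assumptions, not the actual editor's internals.

```python
# A minimal sketch of envelope-driven layer blending, not the actual GUI.
import numpy as np

def blend_layers(loop_a, loop_b, envelope):
    """Per-frame linear blend of two loops, weighted by an envelope in [0, 1].

    loop_a, loop_b: float arrays of shape (frames, height, width, 3) in [0, 1]
    envelope:       float array of shape (frames,)
    """
    w = envelope[:, None, None, None]   # broadcast one weight over each frame
    return (1.0 - w) * loop_a + w * loop_b

# Toy usage: fade a 4-frame black loop into a white one with a rising envelope.
frames, h, w = 4, 2, 2
loop_a = np.zeros((frames, h, w, 3))
loop_b = np.ones((frames, h, w, 3))
envelope = np.linspace(0.0, 1.0, frames)
out = blend_layers(loop_a, loop_b, envelope)
print(out[:, 0, 0, 0])   # one channel rises 0, 1/3, 2/3, 1 across the frames
```
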
While I was satisfied with the resulting video for First Flight, I felt it was missing a very important element present in the previous two videos. It did have the fly-through landscapes with elements moving to the music, but it completely lacked any visual representation that actually resembled the effects of synesthesia. For this reason I set out to make the 4th video, in an attempt to explore ways of bringing those elements into this new, less visually flexible process.

I started introducing abstract-looking elements into the loops I was making, or even building scenes around this idea, like hologram-projecting rides through a futuristic Egyptian landscape. These holograms would be grounded in the environments they were rendered in (where their light can be seen bouncing off the statues and lighting up the walls), but they could also be lifted out of their context and used as abstract overlays in any other loop. They could then inject representations of synesthesia throughout any music video, regardless of the 3D environment being used.
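
To make the overlay idea concrete, here is a rough illustration of the compositing principle, not the actual pipeline: an emissive hologram layer rendered over black can simply be added on top of any background loop, since black pixels contribute nothing. The helper additive_overlay and its gain parameter are hypothetical.

```python
# A rough sketch of dropping an emissive "hologram" layer into any environment.
import numpy as np

def additive_overlay(background, hologram, gain=1.0):
    """Additively blend an emissive layer over a background frame.

    Both arrays: shape (height, width, 3), floats in [0, 1]. Black pixels in
    the hologram layer leave the background untouched, so the same overlay
    can be reused over any environment.
    """
    return np.clip(background + gain * hologram, 0.0, 1.0)

# Toy usage: one bright cyan "hologram" pixel over a mid-grey background.
bg = np.full((2, 2, 3), 0.5)
holo = np.zeros((2, 2, 3))
holo[0, 0] = (0.2, 0.8, 1.0)
print(additive_overlay(bg, holo)[0, 0])   # [0.7 1.  1. ]  (clipped at 1.0)
```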

This cross-pollination between different environments of course brings some potential issues. The composition of the image might not always work (not all the environments have the horizon centered at the same height, or the same camera tilt, etc.), and the VR version of the video might have to contend with conflicting stereoscopy (stereo clipping), or simply a break in immersion caused by the higher resolution of the medium highlighting the disembodied nature of the holograms in environments where they don't belong (after all, somebody once described VR as "headphones for video").

Did all these elements eventually coalesce like Lego bricks into a new synesthetic dream? Or did they clash into a cacophony of lights? I'll let you decide; I enjoyed the journey either way.

- Synwrks