Arrival
"Arrival" is a real-time interactive installation based on face tracked. It visually simulates the communication and connection between metacognition and the universe during meditation, through tracking face of the audience to help the audience feel less stressed, be more focused and well-being.
Produced by: Yundan Qiu
Introduction
The initial objective of this project was to resist the anxiety and loneliness of the pandemic period and transform mental pressure into a positive attitude. Zen-related activities such as meditation and yoga are effective ways of helping people stay sane. For beginners, however, meditation can feel lonely, silent and daunting, and their inner perception and attention are easily interrupted by ego consciousness. This project visualizes a metaphysical connection with universal elements to help audiences understand the core concepts and stages of meditation, making it easier to practise.
Concept and background research
Eastern philosophy (Hinduism and Buddhism) holds that the "ego" is merely a construct of self-consciousness, void and unnecessary. Meditation is an effective way to dissolve the ego and connect with the cosmic dimension. Through training and practice, it is possible to gain a deeper understanding of being and to release the self, relaxing the nervous system and relieving mental pressure.
I separate the process of meditation into two stages and present them as two interactive modes. The first mode represents the beginning stage, in which the audience perceives more and more universal elements through meditation. The second mode represents the later stage, in which the audience has moved beyond ego into mindfulness and all movements are in sync with the universe.
Technical
This installation is based on a webcam and openFrameworks.
The webcam is the input for tracking facial behaviour. Two openFrameworks programs, built with five addons, realize the visual and audio functions and communicate through ofxOsc. The OSC sending program records face-tracking data and writes it into the message buffer; the receiving program converts the incoming data into different groups of variables, triggering and outputting the visual and audio effects respectively. ofxOpenCv and ofxCvHaarFinder detect and track facial movements. ofxMaxim synthesizes audio in real time: the background music is influenced by the acceleration of the particles, and a second sound is triggered whenever two or more particles come close enough together. ofxGui monitors the values of the particle-movement factors and provides a button for switching interaction modes. ofLight is used for dynamic lighting in different hues to enrich the visuals.
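To illustrate the OSC bridge, here is a minimal sketch of how the sender might pack the tracked face rectangle into a message and how the receiver might unpack it into variables. The /face address, the port number and the variable names are my assumptions, not the project's actual code:

```cpp
#include "ofxOsc.h"

// Sender side: pack the tracked face rectangle into an OSC message.
ofxOscSender sender;

void setupSender() {
    sender.setup("localhost", 12345); // hypothetical host and port
}

void sendFace(const ofRectangle& face, bool found) {
    ofxOscMessage m;
    m.setAddress("/face");
    m.addIntArg(found ? 1 : 0);
    m.addFloatArg(face.x);
    m.addFloatArg(face.y);
    m.addFloatArg(face.width);
    m.addFloatArg(face.height);
    sender.sendMessage(m, false);
}

// Receiver side: unpack the message into variables that drive the visuals.
ofxOscReceiver receiver;
bool faceFound = false;
ofRectangle faceRect;

void setupReceiver() {
    receiver.setup(12345);
}

void updateReceiver() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(m);
        if (m.getAddress() == "/face") {
            faceFound = m.getArgAsInt32(0) == 1;
            faceRect.set(m.getArgAsFloat(1), m.getArgAsFloat(2),
                         m.getArgAsFloat(3), m.getArgAsFloat(4));
        }
    }
}
```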
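The synthesis itself is not documented, so the following is only a rough sketch, assuming ofxMaxim's maxiOsc and the legacy audioOut callback: a background drone whose pitch drifts with the particles' average acceleration, plus a second tone gated on whenever a proximity trigger fires. All constants are illustrative:

```cpp
#include "ofMain.h"
#include "ofxMaxim.h"

maxiOsc bgm, chime;
float avgAcceleration = 0;   // updated from the particle system each frame
float chimeGain = 0;         // decays after each proximity trigger

// Called when two or more particles come within the proximity threshold.
void triggerChime() { chimeGain = 1.0f; }

void audioOut(float* output, int bufferSize, int nChannels) {
    for (int i = 0; i < bufferSize; i++) {
        // Base drone between ~110 Hz and ~220 Hz, bent by acceleration.
        float drone = bgm.sinewave(110 + 110 * ofClamp(avgAcceleration, 0, 1));
        // Higher tone that rings out after each proximity trigger.
        float bell = chime.sinewave(880) * chimeGain;
        chimeGain *= 0.9995f; // simple exponential decay
        float sample = 0.6f * drone + 0.4f * bell;
        output[i * nChannels]     = sample;
        output[i * nChannels + 1] = sample;
    }
}
```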
In the first mode, the audience's face has to stay focused and still (as in meditation), so I check the optical flow in the face area to see whether the audience is moving. As long as the face remains still, an increasing number of universal particles can be perceived, symbolizing a deepening connection with the universe. If the face moves or disappears, the growth of the perceived universe stops. In the second mode, the universal elements follow the audience's face movements (controlled by four factors: acceleration, separation, alignment and cohesion), representing that through meditation the audience has moved beyond ego and their behaviour is in sync with the universe. Both modes are sketched below.
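A minimal sketch of the first mode's stillness check. The write-up lists ofxOpencv and ofxCvHaarFinder; the optical-flow step here assumes ofxCv's FlowFarneback, in the style of the Workshops in Creative Coding examples, and the threshold and particle cap are illustrative:

```cpp
#include "ofxOpenCv.h"
#include "ofxCv.h"

ofVideoGrabber cam;
ofxCvColorImage colorImg;
ofxCvGrayscaleImage grayImg;
ofxCvHaarFinder finder;
ofxCv::FlowFarneback flow;
int perceivedParticles = 0;

void setup() {
    cam.setup(640, 480);
    colorImg.allocate(640, 480);
    grayImg.allocate(640, 480);
    // Standard OpenCV cascade, assumed to sit in bin/data.
    finder.setup("haarcascade_frontalface_default.xml");
}

void update() {
    cam.update();
    if (!cam.isFrameNew()) return;

    colorImg.setFromPixels(cam.getPixels());
    grayImg = colorImg;                 // convert to grayscale for the Haar finder
    finder.findHaarObjects(grayImg);
    flow.calcOpticalFlow(cam);

    if (finder.blobs.empty()) return;   // face gone: the universe stops growing

    ofRectangle face = finder.blobs[0].boundingRect;
    ofVec2f avg = flow.getAverageFlowInRegion(face);
    if (avg.length() < 0.5f) {          // illustrative stillness threshold
        perceivedParticles = MIN(perceivedParticles + 1, 5000);
    }
}
```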
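And a compact sketch of how the second mode's four factors might be weighted and combined, using standard boids-style steering with an extra force toward the tracked face centre. The class layout and weights are illustrative; in practice the weights would presumably be the movement-factor values exposed through ofxGui:

```cpp
#include "ofMain.h"

struct Particle {
    ofVec2f pos, vel, acc;

    // Combine separation, alignment and cohesion with an acceleration
    // toward the audience's face position.
    void applySteering(const std::vector<Particle>& flock, ofVec2f faceCenter) {
        ofVec2f sep, ali, coh;
        int neighbours = 0;
        for (const auto& other : flock) {
            float d = pos.distance(other.pos);
            if (d > 0 && d < 60) {                   // neighbourhood radius
                sep += (pos - other.pos) / (d * d);  // push away, nearer = stronger
                ali += other.vel;                    // match neighbours' velocity
                coh += other.pos;                    // move toward the local centre
                neighbours++;
            }
        }
        if (neighbours > 0) {
            ali /= neighbours;
            coh = coh / neighbours - pos;
        }
        ofVec2f toFace = faceCenter - pos;           // follow the audience's face

        acc = 1.5f * sep + 1.0f * ali.getNormalized()
            + 0.8f * coh.getNormalized() + 0.5f * toFace.getNormalized();
    }

    void update() {
        vel += acc;
        vel.limit(4.0f);  // max speed
        pos += vel;
    }
};
```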
Self evaluation
I am generally satisfied with the project. It applies many of the interaction techniques learned in the second semester and combines them with visual approaches from the Computational Form and Process module. It realizes all the functions of the two interactive modes, runs robustly, and communicates the theme to audiences clearly enough. My only dissatisfaction is the way the interactive modes are switched; a smoother approach than manual selection could be explored.
Future development
One further development of this project could be exploring more diverse visualizations of the universe and richer interactions with the audience's behaviour, for example triggering more controllable changes in the universal particles (position, velocity, size, etc.) from the facial changes tracked in the second mode.
My other idea is to add brainwaves as a data input, since brainwaves are a field closely related to metaphysics and universal matters. In the first stage, more varied visuals could be generated from changes in the audience's brainwaves as they meditate. Brainwave data could also serve as the medium for switching between the two meditation stages.
References
Andy Lomas. “particleLab7,” coding reference from Computational Form and Process, Week 10. https://learn.gold.ac.uk/course/view.php?id=12881#section-8
Foreman, C. (2016, March 20). The Five Ego Traps to Avoid in Meditation. UPLIFT. https://upliftconnect.com/five-ego-traps/
Goldhill, O. (2018, June 17). People’s egos get bigger after meditation and yoga, says a new study. Quartz. https://qz.com/1307380/yoga-and-meditation-boost-your-ego-say-psychology-researchers/
Lewis Lepton. “openFrameworks tutorial series - episode 055 – ofLight.” https://www.youtube.com/watch?v=Amfr-MY96W8
Theo Papatheodorou. “OSC Send,” coding reference from Workshops in Creative Coding, Week 15, OSC Message. https://learn.gold.ac.uk/course/view.php?id=12859&section=16
Theo Papatheodorou. “OSC Receive,” coding reference from Workshops in Creative Coding, Week 15, OSC Message. https://learn.gold.ac.uk/course/view.php?id=12859&section=16
Theo Papatheodorou. “opticalFlow,” coding reference from Workshops in Creative Coding, Week 13, Computer Vision (part 2). https://learn.gold.ac.uk/course/view.php?id=12859&section=16
Theo Papatheodorou. “faceDetector,” coding reference from Workshops in Creative Coding, Week 13, Computer Vision (part 2). https://learn.gold.ac.uk/course/view.php?id=12859&section=16
Theo Papatheodorou. “AudioVisual,” coding reference from Workshops in Creative Coding, Week 18, Audiovisual Programming (part 2). https://learn.gold.ac.uk/course/view.php?id=12859&section=16