Jeff in a Jar - Prototype 1.1
Ernie Lafky
Concept
Jeff in a Jar imagines a dystopian future world where the space colonisation dreams of Jeff Bezos have come true. His acolytes worshipped him like a demi-god and had his brain preserved and kept alive in a vat. Though the inhabitants of the space colony have long since perished, Jeff’s brain lives on, still thinking he’s in control of a vast galactic empire. The arc of this piece is meant to start with Jeff in absolute control, then deteriorate toward chaos. In the spirit of The Twilight Zone, Jeff re-lives this nightmare over and over. He is ever hopeful that he’s back in control, only to slip back down into hallucinatory madness. This particular iteration is a prototype for the first half of the piece, when Jeff is still in control.
Visual Aesthetic
Science fiction from the 1960s and 1970s is the primary inspiration for the aesthetics. I wanted the user interface to blend Minority Report with Star Trek. Jeff Bezos has been a huge Star Trek fan since childhood, and he even appeared as an alien in the film Star Trek Beyond, so I thought it appropriate that Jeff’s ultimate dream would be controlling the world through a futuristic, Star Trek-style design. In the recent films by J. J. Abrams, for example, the UI tends to use a lot of blue wireframes. The gestural control of the machine was inspired by the UI in Minority Report.
Interaction Design
This section of the piece is meant to represent Jeff at work controlling... something. We’re not exactly sure what, but it’s meant to look technical, futuristic, and difficult to use. At the same time, I needed some intelligible feedback from the UI to help me perform. So I tried to strike a balance between usability and mysterious abstraction.
Although all of the objects behave like virtual trackballs, the functionality can be broken down into two categories: buttons and sliders. For the buttons, the user needs to roll the object past a predefined threshold in order to trigger an event. With the centre box-like object, for example, a swipe to the right triggers a single sound to play. In order to provide visual feedback when the threshold has been crossed, I had one of the planes flash to a light colour.
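Roughly, the trigger logic for one of these button-style objects might look like the sketch below. This is an illustrative openFrameworks reconstruction rather than the actual project code: the threshold value, the sensitivity, and the OSC address used to ask the audio engine to play the sound are all assumptions.

    // Illustrative sketch of a trackball-style button in openFrameworks.
    // Rolling the object past a rotation threshold fires a one-shot trigger,
    // here forwarded to the audio as a hypothetical OSC message.
    #include "ofMain.h"
    #include "ofxOsc.h"   // requires the ofxOsc addon

    class ButtonTrackball {
    public:
        float rotation  = 0.0f;   // accumulated rotation, driven by camera motion
        float threshold = 45.0f;  // degrees the object must be rolled (assumed)
        bool  triggered = false;  // latched until the zero point resets

        // Returns true exactly once when the threshold is crossed.
        bool update(float deltaRotation) {
            rotation += deltaRotation;
            if (!triggered && rotation > threshold) {
                triggered = true;
                return true;
            }
            return false;
        }
    };

    class ofApp : public ofBaseApp {
    public:
        ButtonTrackball box;      // the centre box-like object
        ofxOscSender sender;

        void setup() {
            sender.setup("localhost", 9000);        // assumed host and port
        }
        void update() {
            float flowX = 0.0f;                     // would come from optical flow
            if (box.update(flowX)) {
                ofxOscMessage m;
                m.setAddress("/jeff/triggerSound"); // hypothetical address
                sender.sendMessage(m, false);
            }
        }
    };

The triggered flag is what stops a single swipe from firing the sound repeatedly; clearing it is handled by the zero-point reset described below.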
One of the problems with using these virtual trackballs as buttons is that it’s difficult to know where in the rotation the object is once it’s been used. Is it before the threshold again? Or is it past the threshold? How do I know how far to turn it? Again, I didn’t want to be too explicit, so I had the zero point reset every few seconds to ensure that I was never far from the trigger. I also had the light colour remain lit until the zero point was reset, so that I would know whether I could trigger the event again.
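Extending the sketch above, the periodic zero-point reset and the latched highlight might look something like this; the three-second interval and the colours are guesses rather than the values used in the piece.

    // Every few seconds the rotation snaps back to zero and the highlight is
    // released, so the trigger is always within reach again.
    #include "ofMain.h"

    class ResettableButton {
    public:
        float rotation      = 0.0f;
        float threshold     = 45.0f;
        bool  triggered     = false;  // the plane stays lit while this is true
        float resetInterval = 3.0f;   // seconds between zero-point resets (assumed)
        float lastReset     = 0.0f;

        void update(float deltaRotation) {
            rotation += deltaRotation;

            // Zero-point reset: re-arm the button and turn the highlight off.
            if (ofGetElapsedTimef() - lastReset > resetInterval) {
                rotation  = 0.0f;
                triggered = false;
                lastReset = ofGetElapsedTimef();
            }

            // Threshold crossing: fire once and leave the plane lit.
            if (!triggered && rotation > threshold) {
                triggered = true;
            }
        }

        ofColor planeColour() const {
            // Light while latched, dark wireframe blue otherwise (assumed colours).
            return triggered ? ofColor(210, 235, 255) : ofColor(40, 90, 160);
        }
    };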
An example of a slider is the globe-like object on the right. Rotating this object right and left controls the variance, which in this piece is defined as the maximum amount that one of the long, slow sounds can change. While it’s easy to hear adjustments in frequency or beats per minute, variance is more intangible, which is why I tried to make the amount of variance explicit in the visuals. As the globe is rotated to the right, more latitude lines are added; if the variance is at zero, there is only one latitude line. The number of lines roughly corresponds to the amount of variance.
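The mapping from rotation to variance, and from variance to the number of latitude lines, might be sketched as follows; the rotation range, maximum line count, and drawing details are assumptions rather than the project’s actual values.

    // Illustrative variance "slider": rotation maps to a 0..1 variance value,
    // and that value decides how many latitude rings are drawn on the globe.
    #include "ofMain.h"
    #include <cmath>

    class VarianceGlobe {
    public:
        float rotation = 0.0f;   // accumulated left/right rotation in degrees
        float variance = 0.0f;   // 0..1, max amount a long slow sound may change

        void update(float deltaRotation) {
            rotation = ofClamp(rotation + deltaRotation, 0.0f, 180.0f);
            variance = ofMap(rotation, 0.0f, 180.0f, 0.0f, 1.0f);
        }

        void draw(float radius) const {
            // One latitude line at zero variance, up to nine as variance grows.
            int numLines = 1 + (int) ofMap(variance, 0.0f, 1.0f, 0.0f, 8.0f);
            ofNoFill();
            for (int i = 0; i < numLines; i++) {
                // Spread the rings between the upper and lower hemispheres.
                float t = (numLines == 1) ? 0.0f
                                          : ofMap(i, 0, numLines - 1, -0.7f, 0.7f);
                float y = radius * t;
                float r = std::sqrt(radius * radius - y * y); // ring radius at height y
                ofPushMatrix();
                ofTranslate(0, y, 0);
                ofRotateXDeg(90);                 // lay each circle flat
                ofDrawCircle(0, 0, r);
                ofPopMatrix();
            }
        }
    };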
Video
Future Improvements
I’m not terribly happy with this piece, perhaps because it falls well short of telling the story I wanted to tell. While I like the concept behind Jeff in a Jar, the amount of visual storytelling required to make it clear was beyond the time and skills I currently have. In short, it was much too complex for an end-of-term assignment.
Another issue is that I worked on the sound first, so the audio drove the visuals. This workflow is the reverse of what’s typical for a visual narrative, where the visuals are created first and the audio enhances them. To make the interactivity work, I spent my time building a UI for the audio. Were I to continue developing this piece, I would make the graphics more spectacular and use more of the screen.
My original intention was to use a Kinect depth camera for the interaction, but I was worried that the Kinect would add too much complexity on a tight deadline. So I switched to my laptop’s built-in camera and used optical flow detection to control the objects. While this decision simplified development, it forced me to put all of the UI elements at the top of the screen and the performer at the bottom of the screen. Had I used a Kinect, I would have been able to appear behind the objects, similar to Memo Akten’s “Webcam Piano”.
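For context, one plausible way of turning the webcam’s optical flow into object rotation is sketched below. It uses the ofxCv addon’s Farneback flow rather than the class sample code, and the camera resolution, sensitivity, and layout values are assumptions.

    // Webcam motion drives the rotation of a UI object at the top of the
    // screen while the camera image (the performer) sits at the bottom.
    #include "ofMain.h"
    #include "ofxCv.h"   // requires the ofxCv and ofxOpenCv addons

    class ofApp : public ofBaseApp {
    public:
        ofVideoGrabber cam;
        ofxCv::FlowFarneback flow;
        float boxRotation = 0.0f;

        void setup() {
            cam.setup(640, 480);              // laptop's built-in camera
        }

        void update() {
            cam.update();
            if (cam.isFrameNew()) {
                flow.calcOpticalFlow(cam);
                // Average horizontal motion nudges the box-like object around.
                boxRotation += flow.getAverageFlow().x * 2.0f; // assumed sensitivity
            }
        }

        void draw() {
            ofBackground(0);
            cam.draw(0, ofGetHeight() - 240, 320, 240);  // performer at the bottom
            ofPushMatrix();
            ofTranslate(ofGetWidth() * 0.5f, 120);       // UI element at the top
            ofRotateYDeg(boxRotation);
            ofNoFill();
            ofSetColor(80, 160, 255);                    // blue wireframe look
            ofDrawBox(0, 0, 0, 80);
            ofPopMatrix();
        }
    };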
References
This piece builds upon Dr. Theo Papatheodorou’s in-class sample code at Goldsmiths, University of London:
- Optical Flow
- oscSend
- oscReceive