Memory Horizon is an immersive media installation that captures the nature of human memory in 3D space through visual, audio, and motion interaction. The installation requires five main features. A Kinect located at the front-center of the space detects the user's motion; in response to these inputs, screen effects are displayed, the lights grow brighter, the clock gears move faster, and the audio effects raise their tempo.
The design starts with a general storyline and the message we aim to deliver through the experience. We then mapped out the features and an interaction map for each. The main input channel of the experience is motion capture through the Kinect, which drives four interactive outcomes (D2).
The general layout went through several iterations. In the first version we had a budget constraint, so the TV was eliminated, and we planned to make the best of the space itself to create a stronger impact. The screen provides the memory particles, but we remained uncertain about the central object; after several iterations, we came up with the idea of using a clock as the means to deliver the core idea of the timelessness of memory.
Interaction design is composed of the elements below. The user/audience should be immersed in the experience, so multiple interaction modalities are used to create a sense of seamlessness.
The main object in Memory Horizon is a mechanical clock that works with a combination of springs and gears, without electronics. Stepper motors A and B are installed on the two wheels that move the clock hands. Each motor is attached to the central axis of its wheel and rotates its gear at a single speed (rpm). As the viewer approaches the clock, the motors spin faster; as the viewer moves away, they slow down.
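The depth-to-speed relationship can be sketched as a simple linear mapping. The rpm range and tracked-distance bounds below are illustrative assumptions, not the installation's actual firmware values:

```java
// Hypothetical mapping from viewer depth to stepper-motor speed:
// the closer the viewer, the faster the clock gears turn.
public class ClockMotorSpeed {
    static final double MIN_RPM = 2.0;    // speed when the viewer is far away
    static final double MAX_RPM = 30.0;   // speed when the viewer is close
    static final double NEAR_MM = 500.0;  // closest tracked distance (assumed)
    static final double FAR_MM = 4000.0;  // farthest tracked distance (assumed)

    // Linearly map depth (mm) to rpm, clamped to the tracked range.
    static double rpmForDepth(double depthMm) {
        double d = Math.max(NEAR_MM, Math.min(FAR_MM, depthMm));
        double t = (FAR_MM - d) / (FAR_MM - NEAR_MM); // 1 when near, 0 when far
        return MIN_RPM + t * (MAX_RPM - MIN_RPM);
    }
}
```

A linear map keeps the motion change smooth and predictable as the viewer walks toward or away from the clock.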
Based on the mechanisms of the Kinect, various interactions are developed within the viewing space of the installation, using both the horizontal position (dx) and depth (dz) values.
The horizontal position (dx) input is mapped to the screen width, and this value is constantly reflected in the X coordinate of the panel's position. The depth value is an equally important input: the depth change (dz) of the target drives the Z coordinate of the particle sphere displayed on the screen, the illumination of the five LED lights, and the tempo of the sound effects.
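The two mappings above can be sketched as follows. The input and output ranges are assumptions, and Processing's `map()` is reimplemented here so the snippet is self-contained plain Java:

```java
// Sketch of the Kinect-to-screen mappings: dx drives the panel's
// X coordinate, dz drives the particle sphere's Z coordinate.
public class KinectMapping {
    // Same signature and behavior as Processing's map().
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // Horizontal position dx (mm, assumed +/-1000 range) -> panel X (pixels).
    static float panelX(float dx, float screenWidth) {
        return map(dx, -1000, 1000, 0, screenWidth);
    }

    // Depth dz (mm, assumed 500-4000 range) -> sphere Z coordinate:
    // the sphere moves toward the camera as the viewer approaches.
    static float sphereZ(float dz) {
        return map(dz, 500, 4000, 200, -600);
    }
}
```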
Light & Sound
In response to the audience's real-time location (dz values), the screen, lighting, and sound produce the following interaction effects. To maximize the immersive experience, not only the screen but also the lighting and sound reflect the Kinect data and demonstrate real-time interaction. Five LED strip lights are installed along the bottom of the installation, controlled by Arduino boards, and Processing and Arduino are connected to each other over serial communication. The viewer's depth value is sent to the board as a string and written out as an analog signal (analogWrite), so the overall LED intensity changes in real time. The clock sound tracks are also controlled by the user's location; this interaction uses the Sound library provided by Processing.
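On the Processing side, the depth value has to be converted into an 8-bit brightness for the Arduino's analogWrite and into a playback rate for the clock sound. The following is a minimal sketch of those two conversions; the value ranges and the 0.8x-2.0x tempo span are illustrative assumptions:

```java
// Hedged sketch of the depth-driven outputs: LED brightness (0-255,
// suitable for analogWrite on the Arduino side) and a sound playback
// rate (faster tempo when the viewer is closer).
public class DepthToOutputs {
    // Normalize depth dz (mm) to 0..1, where 1 means "near" (assumed 500-4000 mm range).
    static float nearness(float dz) {
        return (4000 - Math.max(500, Math.min(4000, dz))) / 3500f;
    }

    // Depth -> LED brightness 0..255: the lights get brighter as the viewer approaches.
    static int ledBrightness(float dz) {
        return Math.round(nearness(dz) * 255);
    }

    // Depth -> playback rate: 0.8x when far, up to 2.0x when near.
    static float playbackRate(float dz) {
        return 0.8f + nearness(dz) * 1.2f;
    }
}
```

In the actual sketch, the brightness value would be written to the Arduino over Processing's Serial library, and the rate passed to a SoundFile's rate control.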
On the screen, real-time images are rendered as a visual representation of particles. Each of the three spheres is composed of thousands of particles surrounding a core sphere. The sphere's dimensions change interactively in response to the audience's position, using the horizontal position (dx) and depth (dz) values detected by the Kinect. The horizontal position is mapped to the screen width, which determines the X coordinate of the particles. Mapping the depth value to the camera depth allows viewers to experience a sense of immersion as they move closer to the sensor.
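One way to build such a particle sphere is to scatter points uniformly on a spherical shell around the core and scale the shell radius with the viewer's depth. The particle counts, radii, and depth range below are assumptions for illustration, not the installation's actual values:

```java
import java.util.Random;

// Illustrative sketch of one particle sphere: uniformly distributed
// shell points whose radius grows as the viewer approaches.
public class ParticleSphere {
    // Shell radius for viewer depth dz (mm): 100 px far, 300 px near (assumed).
    static float radiusForDepth(float dz) {
        float t = (4000 - Math.max(500, Math.min(4000, dz))) / 3500f;
        return 100 + t * 200;
    }

    // One random point on a shell of the given radius,
    // uniform over the sphere's surface.
    static float[] particle(float radius, Random rng) {
        double z = 2 * rng.nextDouble() - 1;        // cosine of the polar angle
        double a = 2 * Math.PI * rng.nextDouble();  // azimuth
        double s = Math.sqrt(1 - z * z);
        return new float[] {
            (float) (radius * s * Math.cos(a)),
            (float) (radius * s * Math.sin(a)),
            (float) (radius * z)
        };
    }
}
```

In a Processing draw loop, each frame would regenerate or displace these points and render them around the core sphere at the current radius.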
media art installation
Seoul, South Korea