The software GestureMapping [version 1.1.1], developed by the Project Marginalia team in the Processing environment, is responsible for interpreting captured data and creating a projection in the space of the installation planned for the project.
The visual/experiential interface of the installation consists, on the surface, of a projection with which the spectator can interfere using illumination devices; torches will be supplied at the installation for this purpose. The processing structure is composed of interconnected pieces of equipment responsible for capturing and exhibiting video in real time, making it possible for the spectator to interact with the system.
To capture video in real time, a camcorder recording a specific area of the projection is connected to a computer through a FireWire port; the computer mediates the interpretation of the received data using the software GestureMapping [version 1.1.1], which processes the images and merges the real-time video with a video loop; finally, a projector projects the result of the software's intervention in real time exactly onto the same area captured by the camcorder.
In this process, the interaction between all the hardware elements is mediated by the software GestureMapping [version 1.1.1]; its functions are:
- to receive data from the camcorder;
- to interpret the captured image following predefined parameters;
- to merge, in real time, the visual data received from the camcorder with frames of a video loop;
- to output a real-time video projection.
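The four functions above form one cycle per frame: capture, interpret, merge, output. A minimal Java sketch of that cycle (Processing sketches compile to Java), with all names such as `FrameCycle` and the four role interfaces chosen here for illustration, not taken from GestureMapping:

```java
import java.util.function.Consumer;
import java.util.function.Supplier;
import java.util.function.UnaryOperator;

// Illustrative sketch of the per-frame cycle mediated by the software.
// Each role is a pluggable function; frames are arrays of pixel brightnesses.
public class FrameCycle {
    final Supplier<float[]> camera;         // receives data from the camcorder
    final UnaryOperator<float[]> interpret; // applies the predefined parameters
    final UnaryOperator<float[]> merge;     // merges the result with the video loop
    final Consumer<float[]> projector;      // outputs the real-time projection

    FrameCycle(Supplier<float[]> camera, UnaryOperator<float[]> interpret,
               UnaryOperator<float[]> merge, Consumer<float[]> projector) {
        this.camera = camera;
        this.interpret = interpret;
        this.merge = merge;
        this.projector = projector;
    }

    // One complete cycle: capture -> interpret -> merge -> project.
    void runOnce() {
        projector.accept(merge.apply(interpret.apply(camera.get())));
    }
}
```

In the installation, the camera and projector roles are real devices; here they are placeholders so the data flow between the four functions can be seen in isolation.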
First, the software creates a set of values [an array] with as many elements as the total number of pixels of the captured video, so that each element of the array is indexed to a specific pixel of the captured image. The software then interprets each frame of the video, defining the elements of the array based on the brightness of each pixel compared against a predefined threshold value. The result of this comparison between the pixel's brightness and the threshold value [greater than or equal, or less than] determines which mathematical function is used to modify the value of the corresponding element of the array. If the brightness of the pixel is greater than or equal to the threshold value [that is, if the spectator illuminates this area], the value of the corresponding element of the array increases; if the brightness is less than the threshold value, the value of the corresponding element of the array decreases [it is multiplied by a factor lower than that of the first expression].
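The per-pixel rule can be sketched as follows. This is a hedged Java illustration, assuming brightness on a 0–255 scale, a fixed increment for lit pixels, and a multiplicative decay below one for dark ones; the constants `THRESHOLD`, `GAIN`, and `DECAY` are illustrative, since the source does not give the actual values used by GestureMapping:

```java
// Hypothetical per-pixel update: one accumulator value per captured pixel.
public class PixelUpdate {
    static final float THRESHOLD = 128f; // predefined brightness threshold (assumed)
    static final float GAIN = 40f;       // added when the spectator lights the pixel
    static final float DECAY = 0.9f;     // multiplicative fade when the pixel is dark

    // Returns the new accumulator value for one pixel of one frame.
    static float update(float accumulated, float pixelBrightness) {
        if (pixelBrightness >= THRESHOLD) {
            return accumulated + GAIN;   // illuminated area: value increases
        }
        return accumulated * DECAY;      // dark area: value decreases gradually
    }
}
```

For example, `update(0f, 200f)` yields `40f` (lit pixel, value grows), while `update(100f, 50f)` yields roughly `90f` (dark pixel, value decays toward zero).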
This procedure is repeated for each pixel of the image. Once every pixel has been analyzed, the values stored in the array are transposed to the pixels of the image that will be merged with the video loop, determining the brightness of each of these pixels on a scale that ranges from 0 [black] to 255 [white]. Once completed, the procedure starts again for the next frame of the captured video, and the values stored in the array during the analysis of one frame are carried over into the operations on the next. By transposing the values stored in the array to the pixels of the image under these premises, a trail is produced that gradually fades, according to the incidence of light on specific portions of the image.
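Run across successive frames, that update produces the fading trail described above. A small Java simulation of a single pixel, lit in one frame and dark afterwards; the constants and the clamp to the 0–255 brightness scale are illustrative, not taken from the actual software:

```java
// Simulates one pixel across several frames to show the fading trail.
public class TrailDemo {
    static final float THRESHOLD = 128f; // assumed brightness threshold
    static final float GAIN = 300f;      // large enough to saturate in one lit frame
    static final float DECAY = 0.5f;     // fade factor while the pixel stays dark

    // Runs the accumulator over the given frame brightnesses and returns the
    // mask brightness after each frame, clamped to the 0 [black]–255 [white] scale.
    static float[] run(float[] frames) {
        float value = 0f;                        // carried over between frames
        float[] mask = new float[frames.length];
        for (int f = 0; f < frames.length; f++) {
            if (frames[f] >= THRESHOLD) value += GAIN;
            else value *= DECAY;
            mask[f] = Math.min(255f, Math.max(0f, value));
        }
        return mask;
    }
}
```

With frames `{255, 0, 0}`, the mask goes `255, 150, 75`: the pixel turns white while lit, then fades toward black over the following frames, which is the trail effect.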
The image produced from the data stored in the array is then merged – using the multiply blend mode – with a hidden video loop. This image serves as a mask that defines the visible and invisible areas of the video. Merging the video with the mask through the multiply method allows the black areas to remain black, making it impossible to view any layer behind the mask, while the white areas define the opacity of the layer, making the video visible in these areas.
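The multiply merge can be illustrated per channel: each video pixel is scaled by the mask brightness normalized from the 0–255 range, so a black mask value (0) forces black and a white one (255) passes the video through unchanged. A minimal sketch, with the helper name `blend` chosen here for illustration:

```java
// Illustrative per-channel multiply blend of a mask with a video pixel.
public class MultiplyBlend {
    // result = video * mask / 255; integer division, as in typical
    // 8-bit multiply blending. Not the actual GestureMapping code.
    static int blend(int maskBrightness, int videoChannel) {
        return videoChannel * maskBrightness / 255;
    }
}
```

So `blend(0, 200)` gives `0` (the layer behind a black mask area stays hidden), `blend(255, 200)` gives `200` (the video shows through a white area), and intermediate mask values yield partial opacity.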
Finally, the projected image, after being processed by the software, is updated, completing the cycle.