An Augmented Reality Tour

Using audio prompts and small visual markers as its only augmentation, the California Landscape Portal lets people experience different physical locations largely through their imaginations. The audio guide describes particular places and scenarios, but no directly related visual information is given.

This project pushed the boundary of what augmentation could be by removing sensory elements while still explicitly guiding the overall experience. Situating the tour in an austere white room forces each tourist to build the landscape within, giving every person a distinct experience.






At the end of the tour, viewers are invited to take a booklet of abstracted images corresponding to the location descriptions they heard.



A further iteration of this project pairs similar audio descriptions with photographs. Rather than leaving the tourist without visuals, the experience is supplemented by a booklet of Google Street View images of the locations being described. The personal, human perspective of the narrated descriptions is set against the rigid, homogenized perspective of the Street View camera.
A precursor to California Landscape Portal.

In this exercise with point-of-view augmentation, I wanted to explore the possibility of a reality-augmenting device, akin to Google Glass, that would allow a person to augment their surroundings through simplification. What if, instead of adding information to one's sensory experience, certain details could be removed?

By using an overlay of abstracted illustrations of the surroundings and reducing the sound levels to a low hum, the two scenes show possible scenarios where visual and auditory simplification could be desired.





Transmedia, Spring 2013.

In this collaborative project with Zoe Padgett, we were interested in the small, seemingly inconsequential daily exchanges all of us take part in.

Wearable technology enables us not only to quantify these fleeting transactions, but to hold and carry them with us. Perhaps we can feel the weight of a long-winded sentence, or refer back to a recent nudge. In this project, we investigated the possibility of making conversations physical and visual experiences.



Through the design of a wearable that signifies when we have a desire to speak, and visualizes the process of expelling our thoughts, we hoped to explore how our communication can be affected by visual awareness and haptic reminders. With a small PC fan, a microphone audio sensor, a light sensor, and an Arduino microcontroller, we created a suit that inflates and deflates with our speech.



By performing brief explorations of different scenarios, we were able to experience some of the new social dynamics introduced by the suits. Wearers had to be very mindful of the other person when they themselves had something to say. We added a detachable piece that allowed the suit to function only when it was attached to the shoulder, acting as a "talking stick" element that further mediated the conversations.
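As a rough illustration of how such a suit could be wired, the sketch below reads the microphone and light sensor and drives the fan accordingly. The pin assignments, thresholds, and transistor-driven fan, as well as using the light sensor to detect the detachable shoulder piece, are assumptions made for the sake of the example, not documentation of our actual build.

// Hedged Arduino sketch: inflate the suit while speech is detected, let it deflate otherwise.
// Pin numbers, thresholds, and the fan-driving circuit are illustrative assumptions.

const int MIC_PIN   = A0;   // microphone audio sensor (analog)
const int LIGHT_PIN = A1;   // light sensor, assumed to be covered by the detachable shoulder piece
const int FAN_PIN   = 9;    // small PC fan driven through a transistor on a PWM pin

const int SPEECH_THRESHOLD = 200;  // mic level above which we treat input as speech
const int DARK_THRESHOLD   = 300;  // light level below which the shoulder piece counts as "attached"

void setup() {
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  int micLevel   = analogRead(MIC_PIN);
  int lightLevel = analogRead(LIGHT_PIN);

  // The suit only functions when the detachable piece covers the light sensor,
  // playing the role of the "talking stick".
  bool pieceAttached = lightLevel < DARK_THRESHOLD;

  if (pieceAttached && micLevel > SPEECH_THRESHOLD) {
    analogWrite(FAN_PIN, 255);   // inflate while the wearer is speaking
  } else {
    analogWrite(FAN_PIN, 0);     // fan off; the suit slowly deflates on its own
  }

  delay(50);
}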








An informational booklet on the history of United States immigration legislation and its cultural and social implications.



Information Design Final, Spring 2013.
A collaborative precursor to Expanded Discourse, this exploration focused on the growing trend of quantifying and sharing personal actions and habits through data collected by wearable devices. We sought to develop a means of quantifying brief daily interactions that are not normally considered, and in doing so to bring to light how a wearable could influence our behavior.

The Luster shirt contains an LED that lights up when the wearer is touched and stays lit for a predetermined amount of time. The LED lets the wearer visibly display their social interactions, and those interactions are logged as "touches" that can later be accessed via a mobile application.
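A minimal sketch of that behavior on an Arduino might look like the following; the touch-sensor module, pin numbers, on-time, and Serial reporting (standing in for the mobile app sync) are all assumptions for illustration.

// Hedged sketch: light an LED for a fixed time when a touch is detected,
// and keep a running count of touches.

const int TOUCH_PIN = 2;              // digital touch-sensor module output
const int LED_PIN   = 13;             // LED on the shirt
const unsigned long ON_TIME = 30000;  // how long the LED stays lit (ms)

unsigned long litSince = 0;
bool ledOn = false;
bool lastTouch = false;
int touchCount = 0;

void setup() {
  pinMode(TOUCH_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);                 // stand-in for syncing counts to the mobile application
}

void loop() {
  bool touched = digitalRead(TOUCH_PIN) == HIGH;

  // Count each new touch (rising edge) and restart the LED timer.
  if (touched && !lastTouch) {
    touchCount++;
    ledOn = true;
    litSince = millis();
    digitalWrite(LED_PIN, HIGH);
    Serial.println(touchCount);       // log the running "touches" total
  }
  lastTouch = touched;

  // Turn the LED off once the predetermined time has elapsed.
  if (ledOn && millis() - litSince >= ON_TIME) {
    ledOn = false;
    digitalWrite(LED_PIN, LOW);
  }
}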





A collaboration with Zoe Padgett, Marcus Guttenplan, and Jenny Rodenhouse.

78 Domestic Disputes, Don't Tell Susan, Scenes From Firearm Safety Monitors: 0-160 Decibels attempts to raise questions about the experiential implications of a "smart" city where gunshot sensors are ubiquitous. The video looks at the people at the end of an automated network of sensors and computers, who listen to varying categories of audio sorted by decibel range.

Inspired by a recent New York Times article, we wanted to address the complexity of privacy and safety issues in smart cities by focusing on the people analyzing the sounds of the city in real time.







