Student participants from the UCLA School of Theatre, Film and Television developed a prototype for a mixed-reality pavilion:
The subject: Los Angeles 
The context: 2028 Olympic games
@LAs consisted of three interactive exhibits that used contemporary artificial intelligence as a tool for social-impact storytelling.
Using data gathered from audience members, the pavilion was able to respond to its visitors in real time.
techniques
ARTIFICIAL INTELLIGENCE + MACHINE LEARNING
Amazon Web Services (AWS) machine learning services assigned folksonomy tags to each piece of media based on what the algorithms determined that media to depict. Upon entry, audience members completed a survey about their relationship to Los Angeles, which assigned them folksonomy tags of their own. The pavilion could then identify which media related to which audience members.
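As an illustration only (the specific AWS services and tag taxonomy the exhibit teams used aren't documented here), a minimal sketch of this tagging-and-matching flow might use Amazon Rekognition to label an image already stored in S3 and compare those labels against a visitor's survey tags. The bucket, key, and tag names below are hypothetical.

```python
# Minimal sketch: tag media with AWS Rekognition labels, then match against a
# visitor's survey tags. Bucket, key, and tag names are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

def tag_media(bucket: str, key: str, max_labels: int = 10) -> set[str]:
    """Ask Rekognition what an image depicts and return lowercase folksonomy tags."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
        MinConfidence=70.0,
    )
    return {label["Name"].lower() for label in response["Labels"]}

def media_matches_visitor(media_tags: set[str], visitor_tags: set[str]) -> bool:
    """A piece of media 'relates to' a visitor if any of their tags overlap."""
    return bool(media_tags & visitor_tags)

# Hypothetical usage: visitor tags would come from the onboarding survey.
media_tags = tag_media("atlas-media", "murals/echo-park-01.jpg")
visitor_tags = {"mural", "street art", "echo park"}
print(media_matches_visitor(media_tags, visitor_tags))
```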
@LAs also used media synthesized entirely by artificial intelligence. Exhibit teams scraped the web for datasets relevant to their goals and used them to train machine learning models. Those models were then seeded with audience folksonomy tags at runtime, and the resulting AI-authored content was displayed.
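The exact models varied by exhibit (murals, song lyrics, and so on), but as a hedged illustration of the runtime-seeding step, a text model trained on scraped material could fold a visitor's tags into its prompt. The model name and prompt shape below are stand-ins, not the project's actual pipeline.

```python
# Illustrative only: seed a (hypothetical) fine-tuned text model with audience
# folksonomy tags at runtime. The model name and prompt shape are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for a fine-tuned model

def generate_from_tags(tags: list[str], max_new_tokens: int = 60) -> str:
    """Fold the visitor's tags into a prompt and let the model author new content."""
    prompt = "A song about Los Angeles, about " + ", ".join(tags) + ":\n"
    result = generator(prompt, max_new_tokens=max_new_tokens, num_return_sequences=1)
    return result[0]["generated_text"]

print(generate_from_tags(["traffic", "sunset", "east side"]))
```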
HOST-INDEPENDENT MEDIA ACCESS VIA CLOUD STORAGE
Scraping, sorting, and storing massive media datasets already presents challenges. On top of those, @LAs needed a storage system that allowed ongoing updates to the dataset and made all stored media available to any machine in the pavilion at any time - even simultaneously.
The solution: Store all media in the cloud, and let each machine query what it needs. Use prefetching and local caching for seamless playback.
All @LAs media was saved in AWS cloud object storage (Amazon S3), and an AWS DynamoDB database tracked the folksonomy tags used to make queries. Every show machine had nonstop access, and database updates made during the show were reflected immediately.
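A minimal sketch of that host-independent access pattern, assuming an S3 bucket of media and a DynamoDB table mapping object keys to their folksonomy tags (the bucket, table, and attribute names are hypothetical):

```python
# Sketch: query DynamoDB for media keys matching a tag, then prefetch each object
# from S3 into a local cache so playback never waits on the network.
# Bucket, table, and attribute names are hypothetical.
import os
import boto3
from boto3.dynamodb.conditions import Attr

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("atlas_media_tags")

BUCKET = "atlas-media"
CACHE_DIR = "/tmp/atlas_cache"

def keys_for_tag(tag: str) -> list[str]:
    """Return the S3 keys of every media item whose tag list contains `tag`."""
    response = table.scan(FilterExpression=Attr("tags").contains(tag))
    return [item["s3_key"] for item in response["Items"]]

def prefetch(key: str) -> str:
    """Download an object into the local cache (if not already there) and return its path."""
    local_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if not os.path.exists(local_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
    return local_path

# Any show machine can run the same two calls; updates to the table show up on
# the next query, so mid-show database changes are picked up immediately.
local_files = [prefetch(key) for key in keys_for_tag("traffic")]
```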
REAL-TIME RENDERING IN TOUCHDESIGNER
Derivative’s TouchDesigner is a dynamic media platform and visual coding environment that combines a palette of tools for making data-driven art with a powerful real-time rendering engine.
Rather than displaying pre-rendered content or triggering pre-made cues, TouchDesigner is designed to continually render as the show goes on.
For @LAs, this allowed the exhibits to be reactive: input from user interaction or updates to the AWS cloud databases meant entirely new content would be generated and displayed.
The pavilion could listen and respond to the viewer in real time.
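As a sketch of what that can look like inside TouchDesigner (the operator names and helper module below are hypothetical, not the exhibits' actual networks), a Python callback in an Execute DAT can poll for new tags each frame and repoint a Movie File In TOP at freshly fetched content while the engine keeps rendering:

```python
# TouchDesigner Execute DAT callbacks (sketch; operator and module names are
# hypothetical). Instead of triggering pre-made cues, we repoint operators at new
# content while the engine keeps rendering every frame.

POLL_EVERY_N_FRAMES = 60  # roughly once per second at 60 fps

def onFrameStart(frame):
    if frame % POLL_EVERY_N_FRAMES != 0:
        return
    # 'fetch_media' is a hypothetical Text DAT module that queries the AWS-backed
    # tag database and returns a locally cached file path for the current tags.
    new_path = mod('fetch_media').path_for_tags(op('current_tags').text)
    movie = op('moviefilein1')          # a Movie File In TOP feeding the projector
    if new_path and movie.par.file.eval() != new_path:
        movie.par.file = new_path       # swap content without interrupting the render
    return
```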
vectors + folksonomies
To map which tags an audience member had been exposed to during their visit, we adopted a system of 'vectors,' overseen by an AWS DynamoDB database.
EMISSION VECTOR
- Folksonomy tags of all media being displayed in each exhibit
- Published by each exhibit once per second to AWS notification services
EXPOSURE VECTOR
- Reconciles emission vector with knowledge of audience location to describe what tags a visitor has been exposed to over time
- Considers both tags from the exhibit emission vector and tags of other visitors in the space, i.e. tag exposure happens both through media and through other visitors.
IDENTITY VECTOR
- The tags associated with each audience member
- Static: calculated once from the onboarding survey and does not evolve.
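A minimal sketch of how an exhibit could publish its emission vector once per second to AWS notification services (SNS); the topic ARN and field names are hypothetical, and the exposure-vector reconciliation service is not shown:

```python
# Sketch: publish an exhibit's emission vector to an SNS topic once per second.
# A separate exposure service (not shown) would reconcile these messages with
# visitor locations and fold them into each visitor's exposure vector in DynamoDB.
# Topic ARN and field names are hypothetical.
import json
import time
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:atlas-emission"  # placeholder

def current_display_tags() -> list[str]:
    """Hypothetical hook: tags of everything this exhibit is displaying right now."""
    return ["mural", "echo park", "traffic"]

while True:
    emission_vector = {
        "exhibit": "m/UR/al",
        "timestamp": time.time(),
        "tags": current_display_tags(),
    }
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(emission_vector))
    time.sleep(1)  # once per second, per the emission vector spec above
```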
exhibits
'm/UR/al' 
LA murals painted by an AI
Concept: Use the Los Angeles Mural Conservancy project as a training set for an AI capable of synthesizing new “Los Angeles murals” when seeded with a user’s folksonomy tags.
Each LAMC mural would be tagged according to its content/theme (e.g., religious, political, historic). A conditional generative adversarial network (GAN) would be trained on these tagged images.
Entering m/UR/al would send each audience member’s own folksonomy tags (their identity vector) as parameters for content/theme. The audience could watch the synthesis of a new “Los Angeles mural” whose subject was their identity.
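A compressed sketch of that seeding step in PyTorch, where the identity vector becomes a multi-hot condition concatenated with the noise vector; the architecture, sizes, and tag vocabulary are illustrative, not the exhibit's actual network:

```python
# Illustrative conditional-GAN generator: condition on a multi-hot folksonomy
# vector built from the visitor's identity vector. Sizes and tags are placeholders.
import torch
import torch.nn as nn

TAGS = ["religious", "political", "historic", "sports", "music"]  # example vocabulary
NOISE_DIM = 100

class MuralGenerator(nn.Module):
    def __init__(self, noise_dim=NOISE_DIM, n_tags=len(TAGS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_tags, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # tiny 64x64 RGB output for the sketch
        )

    def forward(self, noise, condition):
        x = torch.cat([noise, condition], dim=1)     # seed = noise + identity vector
        return self.net(x).view(-1, 3, 64, 64)

def identity_to_condition(identity_vector: list[str]) -> torch.Tensor:
    """Turn a visitor's folksonomy tags into a multi-hot conditioning vector."""
    return torch.tensor([[1.0 if tag in identity_vector else 0.0 for tag in TAGS]])

generator = MuralGenerator()  # in practice, loaded from a checkpoint trained on tagged murals
mural = generator(torch.randn(1, NOISE_DIM), identity_to_condition(["political", "music"]))
```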
 If genuine LA murals bear the subtext (or, perhaps, metadata) of their communities, m/UR/al seeks to interpret its viewers in the same way.
What is the @LAs Pavilion community mural? What can it tell us about that community?
activateLA
A corporeal approach to data navigation
Concept: Three “lanes” of media, projected side by side. Far left, a real-time readout of the most powerful folksonomy tags in the space. Center, scraped images whose metadata included said tags. Far right, synthesized song lyrics, trained on “songs about LA,” seeded with aforementioned tags.
To navigate this nebula of media, users simply move around the space. OpenPTrack, an open-source, multi-camera tool for person tracking, informs the system of user position and influences output accordingly.
By moving around, users can pause or reverse the scroll of media, enlarge images, highlight favorite lyrics, even select tags to be emphasized.
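OpenPTrack can stream tracked positions to client machines as JSON over UDP; as a hedged sketch (the port number, message fields, and lane mapping below are assumptions about a typical setup, not the exhibit's exact wiring), the control logic might map a tracked person's position to one of the three lanes:

```python
# Sketch: listen for OpenPTrack track messages over UDP and map a visitor's
# position to exhibit controls. Port number and JSON field names are assumptions
# about a typical OpenPTrack-to-client setup, not the exhibit's exact wiring.
import json
import socket

LANES = ["tags", "images", "lyrics"]   # left / center / right projection lanes
SPACE_WIDTH_M = 6.0                    # assumed width of the tracked area in metres

def lane_for_x(x: float) -> str:
    """Map a tracked x position (metres) to the nearest projection lane."""
    index = min(int(x / SPACE_WIDTH_M * len(LANES)), len(LANES) - 1)
    return LANES[max(index, 0)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 21234))          # hypothetical OpenPTrack UDP port

while True:
    data, _ = sock.recvfrom(65535)
    message = json.loads(data.decode("utf-8"))
    for track in message.get("tracks", []):   # assumed message shape
        lane = lane_for_x(track["x"])
        # Downstream, the renderer would slow, pause, or reverse the scroll of the
        # selected lane based on the visitor's movement toward or away from it.
        print(track["id"], "is influencing the", lane, "lane")
```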
Making discoveries through movement within the exhibit invites users to consider what they might discover by moving through Los Angeles. 
LA is a city defined by movement, or lack thereof. Awesome destinations but awful traffic. Social mobility versus social displacement.
The power to move is the power to explore.
Sobremesa
Generating a restaurant from the metadata of customers
Sobremesa uses restaurant recommender tools as engines for human connection.
The personalized setting reveals the commonalities in each group and encourages bonding.
The exhibit seeks to catalyze conversation in the room and to suggest opportunities for connection after the fact.
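As a small illustration of the underlying idea (the scoring and tag names are hypothetical, not the exhibit's actual recommender), the shared portion of a seated group's identity vectors can drive what the personalized setting surfaces:

```python
# Sketch: find what a seated group has in common by intersecting identity vectors,
# then rank the shared tags to theme the table. Names and data are hypothetical.
from collections import Counter

def shared_tags(group: dict[str, set[str]]) -> list[str]:
    """Rank tags by how many diners share them, keeping only tags held by 2+ people."""
    counts = Counter(tag for tags in group.values() for tag in tags)
    return [tag for tag, n in counts.most_common() if n >= 2]

table = {
    "diner_a": {"tacos", "dodgers", "east side"},
    "diner_b": {"tacos", "hiking", "dodgers"},
    "diner_c": {"vegan", "tacos", "beach"},
}
print(shared_tags(table))   # e.g. ['tacos', 'dodgers'] -> theme the setting around these
```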