This was a speculative design fiction reimagining our relationship with our phones as one with an emotive AI. We implemented a mixed hardware and software platform to explore phone functionality as if it were AI-driven first, able to take in more varied input (other sensors, not just touchscreens) and produce more unconventional output (nonverbal communication through expressive motion). Currently funded by a research grant through Google's Creative Lab. The first live demo debuted at CMU's exhibition "Where are the Humans in AI?" (our answer: 'on their phones').
A few media outlets reposted our initial concept video, and Google's Creative Lab caught wind of it. They invited us to present, and afterwards donated a $15k grant to our school to fund our work. Our second round, in the spring of 2019, focused on building a more robust animation and computer vision (CV) system, exploring more interactions, and making our work public so others can use it for their own prototyping.
Human-AI interaction was the final theme of our junior spring design studio. This is where we left off with Emoto in the spring of 2018.
MY ROLE | Concept Artist and Interaction Prototyper
Lots of Keynote
Lucas had helped me on a very similar technical project before, and Gautam was my co-author on a different paper, so I was very excited to work with them again.
At the time I was fascinated by the social and interaction problems of IPAs (intelligent personal assistants), the polarized humanization/abstraction of home assistants, the attention economy, and the rising efforts to curb our addiction to smartphones (I could keep going). For this project, I framed all of that as a provocation: reframing our relationship to our phones, and expanding an IPA's nonverbal communication and expressiveness through animated motion and coordinated hybrid digital-physical interactions.
The original studio project lasted only a few weeks, so I spent my time storyboarding, drawing out forms (eyes and body), and coding the digital interface and IoT backend for Emoto, enough to communicate the hybrid digital-physical concept of an AI sidekick that comes to life from your phone.
The second time around we shifted the technical roles a bit. Gautam took over on-device development, building a React Native app for the phone. I worked closely with him to develop the robot's animation system while creating the animated eye assets in After Effects.
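To illustrate the kind of coordination such an animation system needs, where screen eyes and physical motion change together, here is a minimal sketch. The pose names, asset filenames, joint names, and message format are all hypothetical stand-ins, not Emoto's actual implementation:

```python
import json
import math

# Hypothetical expressive states pairing an eye animation asset with servo targets.
POSES = {
    "idle":    {"eyes": "eyes_idle.json",    "tilt": 0,   "lean": 0},
    "curious": {"eyes": "eyes_curious.json", "tilt": 20,  "lean": 10},
    "excited": {"eyes": "eyes_excited.json", "tilt": -10, "lean": 25},
}

def ease_in_out(t):
    """Cosine easing so physical motion reads as lifelike rather than mechanical."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def animation_frames(start, end, steps=5):
    """Interpolate servo targets between two expressive poses."""
    a, b = POSES[start], POSES[end]
    frames = []
    for i in range(1, steps + 1):
        t = ease_in_out(i / steps)
        frames.append({
            "eyes": b["eyes"],  # the screen swaps to the target eye animation
            "tilt": round(a["tilt"] + (b["tilt"] - a["tilt"]) * t, 1),
            "lean": round(a["lean"] + (b["lean"] - a["lean"]) * t, 1),
        })
    return frames

# The phone app could serialize each frame for the robot body,
# e.g. over a local socket or BLE (transport is an assumption here):
messages = [json.dumps(frame) for frame in animation_frames("idle", "curious")]
```

The easing function is the design point: a hard cut between servo positions looks robotic, while an eased transition makes the same two poses read as an expressive gesture.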
A better Wizard-of-Oz (WOZ) control system, more CV options, and higher-fidelity animations.
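A WOZ control system lets a hidden operator trigger robot behaviors live while a participant experiences the robot as autonomous. A minimal sketch of the idea, with hypothetical key bindings and behavior names rather than Emoto's real control scheme:

```python
# Hypothetical WOZ keymap: operator keystrokes become behavior commands.
KEYMAP = {
    "1": "wake_up",
    "2": "look_at_user",
    "3": "nod_excitedly",
    "q": "sleep",
}

def handle_key(key, send):
    """Translate an operator keystroke into a behavior command for the robot."""
    behavior = KEYMAP.get(key)
    if behavior is None:
        return None  # unbound keys are ignored
    send(behavior)
    return behavior

# Example: collect commands in a list instead of sending them to real hardware.
log = []
for key in ["1", "2", "x", "3"]:
    handle_key(key, log.append)
# log == ["wake_up", "look_at_user", "nod_excitedly"]
```

Passing `send` in as a parameter keeps the operator console decoupled from the transport, so the same keymap can drive a simulator during rehearsal and the physical robot during a demo.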