emodo

The App That Reads Your Emotions

A speculative design project demonstrating a future technology that will allow you to communicate with friends and family while understanding their emotional state.

Proposal

I would like to design and implement a new way for people to communicate with each other emotionally, one that communicates more from the heart. To achieve this, I will develop an iPhone app called Emodo that analyzes certain parameters of a communication (such as heart rate from the Apple Watch, speech volume and patterns, the chosen vocabulary, etc.) and then attempts to determine the sender's current emotional state.
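
As a rough sketch of what these parameters might look like in code (a minimal Swift model; the field names and normalizations below are my own illustrative assumptions, not a fixed specification):

    // Hypothetical bundle of signals captured around a single message.
    struct MessageFeatures {
        let heartRateBPM: Double       // sampled from Apple Watch
        let speechVolume: Double       // average loudness, normalized to 0...1
        let speechRate: Double         // words per minute, from speech analysis
        let positiveWordRatio: Double  // share of positive words in the chosen vocabulary, 0...1
    }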

Emodo will expand its knowledge through machine learning by collecting data and accepting feedback from its users. For example, after Emodo's analysis is performed, the user is asked to reveal their true emotional state, which is stored and compared against Emodo's predicted result. This will allow the emotion recognition algorithm to improve its success rate over time.
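
A minimal sketch of this feedback loop in Swift; the FeedbackRecord and FeedbackLog types, the string labels, and the simple accuracy metric are illustrative assumptions rather than a committed design:

    import Foundation

    // Each record pairs Emodo's prediction with the emotion the user reports.
    struct FeedbackRecord {
        let predicted: String   // e.g. "calm", "excited", "stressed"
        let reported: String    // the user's self-reported emotional state
        let timestamp: Date
    }

    final class FeedbackLog {
        private(set) var records: [FeedbackRecord] = []

        func record(predicted: String, reported: String) {
            records.append(FeedbackRecord(predicted: predicted,
                                          reported: reported,
                                          timestamp: Date()))
        }

        // Fraction of predictions that matched the user's self-report;
        // tracking this over time shows whether the algorithm is improving.
        var accuracy: Double {
            guard !records.isEmpty else { return 0 }
            let hits = records.filter { $0.predicted == $0.reported }.count
            return Double(hits) / Double(records.count)
        }
    }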

When using the Emodo app, anyone will be able to know exactly how the other person feels at every point in the conversation. Emodo will encourage deeper friendships and build stronger relationships.

Concepts

Sources of inspiration include:

• 20th-century studies by Paul Ekman - matching facial expressions with emotional content

• The television show Lie to Me - the premise that truth and deception leak onto the face


Sources of research include:

• "In Reading Facial Emotion, Context Is Everything," Association for Psychological Science, 2011

• "Inferring User Mood Based on User and Group Characteristic Data," US patent application by Apple Inc., 2012

• clmtrackr (constrained local models tracker), a JavaScript library for fitting facial models to faces in videos or images, Audun Øygard, GitHub, 2012-present

• Emotiv - scientific contextual EEG devices and data

Technical

To execute this project successfully, I will need to integrate several technical components:

• iPhone app development (Swift) with WatchKit integration (a heart-rate sketch follows this list)

• Integration with existing open-source voice and facial recognition libraries, such as clmtrackr (listed above)

• Database to store historical data for analysis

• Rudimentary emotion recognition algorithm (must work well enough to demonstrate the speculative concept; a classification sketch follows this list)

• Development of an aesthetic design to represent the speculative interfaces
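
For the WatchKit integration above, heart-rate samples recorded by the Apple Watch can be read through HealthKit. A minimal sketch, assuming the app has already requested and been granted HealthKit read authorization (the function name fetchLatestHeartRate is my own):

    import Foundation
    import HealthKit

    let healthStore = HKHealthStore()

    // Fetches the most recent heart-rate sample written by Apple Watch.
    func fetchLatestHeartRate(completion: @escaping (Double?) -> Void) {
        guard let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate) else {
            completion(nil)
            return
        }
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate,
                                           ascending: false)
        let query = HKSampleQuery(sampleType: heartRateType,
                                  predicate: nil,
                                  limit: 1,
                                  sortDescriptors: [newestFirst]) { _, samples, _ in
            guard let sample = samples?.first as? HKQuantitySample else {
                completion(nil)
                return
            }
            // Heart rate is stored in count/min, i.e. beats per minute.
            let bpm = sample.quantity.doubleValue(for: HKUnit.count().unitDivided(by: .minute()))
            completion(bpm)
        }
        healthStore.execute(query)
    }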
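
And for the rudimentary recognition algorithm, a hand-tuned rule over the collected signals is enough to demonstrate the speculative concept before any machine learning is applied. All names and thresholds below are illustrative placeholders:

    enum Emotion: String {
        case calm, excited, stressed
    }

    // A deliberately simple first pass: combine heart rate, loudness, and
    // vocabulary sentiment with fixed cutoffs chosen for demonstration only.
    func classify(heartRateBPM: Double,
                  speechVolume: Double,      // normalized 0...1
                  positiveWordRatio: Double  // normalized 0...1
    ) -> Emotion {
        if heartRateBPM > 100 && positiveWordRatio < 0.3 {
            return .stressed
        }
        if heartRateBPM > 100 || speechVolume > 0.7 {
            return .excited
        }
        return .calm
    }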

Timeline

January 14, 2016 - Initial proposal presentation

January 15, 2016 - Two-week research phase begins

January 28, 2016 - Second proposal presentation

February 1, 2016 - Four-week development phase begins

February 29, 2016 - One-week testing phase begins

March 7, 2016 - Week 10 presentation

March 24, 2016 - Final four-week development phase begins

April 25, 2016 - Final testing & usage phase begins

May 16, 2016 - Edit video documentation

May 30, 2016 - Week 20 presentation and paper
