Exercise 73 – Google recently developed a technology that can detect human emotions in a 2 X 2 dimension (energy level and body movement). What are some of the products you can build using that technology? List a few ideas, then choose one and design the product for that idea.
Just to reiterate, Google developed a technology that can detect human emotions based on energy level and body movement. I would ask clarifying questions about how this technology works. Does it require special cameras? Does it require the user to actually turn it on? What is the level of confidence in the results? Does the user need to opt in to this program? What kinds of human emotions are we talking about?
Assuming that the user does need to opt in and turn it on, but that it works with a normal laptop or phone camera, I would start brainstorming ways this might help solve an existing user issue.
– This technology might be useful for detecting fatigue and general tiredness while working. Google could offer a product that reminds users to get some air or take a walk during the workday.
– Emotion detection is useful for gauging how a user feels about any particular product. This would be especially valuable for understanding how to optimize UI/UX, product placement, etc.
– Emotion detection would be useful for YouTube, where it could home in on video recommendations far better than a simple Thumbs Up or Thumbs Down.
I want to actually build the third idea, because a better recommendation system would be kind of amazing.
The goal of this feature is to deliver a better recommendation system for users who opt in to this technology.
We will know the recommendations are working through longer viewing times on YouTube, generally happier users as measured by the emotion technology, and better retention.
The problem we are trying to solve for our users is how to surface more relevant content so that they can watch more videos aligned with their interests. Note that their interests may change every day. Some key features for this would be:
– Users would need to opt in and calibrate their face with the camera to make sure the detection works. There would also need to be an opt-in prompt every time a user navigates to YouTube.
– A backend database that tracks the videos you've watched and the change in your emotion from the beginning of the video through the end
– A front-end display showing users how their emotions changed over the course of the video itself
– Asking users how they actually felt about the video, compared to the emotion captured by the technology
– An algorithm that matches a user's personal preferences with their emotional responses to pick the next video they want to watch.
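To make the data and algorithm features above concrete, here is a minimal sketch of what the backend might store and how emotional response could weight recommendations. All names (`EmotionSample`, `WatchRecord`, `score_candidates`) and the normalized 0-to-1 axes are my own assumptions, not anything Google has published; a real similarity model would be far richer than the lookup table used here.

```python
from dataclasses import dataclass

@dataclass
class EmotionSample:
    """One reading on the assumed 2x2 grid, each axis normalized to 0..1."""
    energy: float    # energy level axis
    movement: float  # body movement axis

@dataclass
class WatchRecord:
    """What the backend database stores per viewing: the video and the
    emotion captured at the start and at the end."""
    video_id: str
    start: EmotionSample
    end: EmotionSample

def emotion_delta(rec: WatchRecord) -> float:
    """Net change in energy over the video; a crude proxy for how the
    video made the viewer feel."""
    return rec.end.energy - rec.start.energy

def score_candidates(history: list[WatchRecord],
                     similarity: dict[tuple[str, str], float]) -> dict[str, float]:
    """Score candidate videos by content similarity to past videos,
    weighted by how positively the viewer's emotion shifted while
    watching them. similarity maps (watched_id, candidate_id) -> 0..1."""
    scores: dict[str, float] = {}
    for rec in history:
        weight = emotion_delta(rec)
        for (watched, candidate), sim in similarity.items():
            if watched == rec.video_id:
                scores[candidate] = scores.get(candidate, 0.0) + weight * sim
    return scores
```

The design choice here is that a video similar to one that lifted the viewer's energy gets boosted, while one similar to a video that drained it gets penalized, which is exactly the signal a thumbs-up button cannot capture.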
Within the algorithm, there are many more features we can build. For example, what do emotions tell us? What does each of the four grid quadrants really mean in terms of how a viewer feels about a particular video? I think that is why we pair the readings with the actual ratings users provide, to inform how we move forward.
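One way to sketch that pairing: give each quadrant a provisional label, then let the user's explicit ratings tell us what each cell actually means for them. The quadrant labels below are placeholders I invented for illustration; only the user's votes would validate them.

```python
from collections import defaultdict

def quadrant(energy: float, movement: float) -> str:
    """Map a reading on the 2x2 grid (axes normalized to 0..1) to a
    provisional label. These labels are assumptions to be validated."""
    if energy >= 0.5:
        return "excited" if movement >= 0.5 else "focused"
    return "restless" if movement >= 0.5 else "calm"

def quadrant_rating_profile(observations):
    """observations: (energy, movement, rating) tuples, rating +1 or -1.
    Returns the average explicit rating seen in each quadrant, so the
    user's own votes define what each grid cell means for them."""
    totals, counts = defaultdict(float), defaultdict(int)
    for energy, movement, rating in observations:
        q = quadrant(energy, movement)
        totals[q] += rating
        counts[q] += 1
    return {q: totals[q] / counts[q] for q in totals}
```

If "restless" readings kept co-occurring with thumbs-downs for one user but thumbs-ups for another, the profile would capture that per-user difference rather than hard-coding one interpretation of the grid.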
All of the bullet points would be key to the first launch, as they provide us with the information we need to keep iterating on the recommendation engine. We will know the recommendation engine is working when we pair an internal confidence metric with the user's actual rating, and also through traditional metrics like how many videos users watch one after another.
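That pairing of internal confidence with actual ratings could be tracked with something as simple as an agreement rate, a hypothetical health metric of my own devising, not an established YouTube measure:

```python
def agreement_rate(predicted: list[int], actual: list[int]) -> float:
    """Fraction of videos where the emotion-derived sentiment (+1 / -1)
    matches the user's explicit rating. A rising rate over releases
    would suggest the emotion signal is being interpreted correctly."""
    matches = sum(1 for p, a in zip(predicted, actual) if p == a)
    return matches / len(predicted)
```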
In summary, I would leverage the emotion engine Google created to deliver better recommendations for users on YouTube. At first launch, we should address all user privacy concerns by making the feature opt-in, then gather data both from the technology and from user input to help our machine learning algorithm produce better results over time.