+1 vote
42 views
Post and review answers and feedback to answers in the comments section of this post.
asked in Product Design by (549 points)

1 Answer

+5 votes

Exercise 73 – Google recently developed a technology that can detect human emotions in a 2 X 2 dimension (energy level and body movement). What are some of the products you can build using that technology? List a few ideas, then choose one and design the product for that idea.

Just to reiterate, Google developed a technology that can detect human emotions based on energy level and body movement. I would ask clarifying questions about how this technology works. Does it require special cameras? Does it require the user to actually turn it on? What is the level of confidence in the results? Does the user need to opt in to this program? What kinds of human emotions are we talking about?

Assuming that the user does need to opt in, that it works with a normal laptop or phone camera, and that it does need to be turned on, I would start brainstorming ways this technology might help solve an existing user problem.

– This technology might be useful for detecting fatigue and general tiredness while working. Google could offer a product that reminds users to go get some air or take a walk during the workday.
– Emotion detection is useful for gauging how a user feels about any particular product. This would be especially useful for both the user and the app when figuring out how to optimize UI/UX, product placement, etc.
– Emotion detection would be useful for YouTube, where it could hone in on video recommendations better than a simple thumbs up or thumbs down.

I want to build the third idea, because a better recommendation system would be kind of amazing.

The goal of this feature is to deliver a better recommendation system for users who opt in to this technology.

We will know the recommendations are working through longer viewing times on YouTube, generally happier users as measured by the emotion technology, and better retention.

The problem we are trying to solve for our users is how to elevate more relevant content so that they can watch more videos aligned with their interests, understanding that those interests may change every day. Some key features would be:

– Users would need to opt in and sync their face and camera to confirm the technology works. There would also need to be an opt-in prompt every time a user navigates to YouTube.
– A backend database that tracks the videos you have watched, along with the change in emotion from the beginning of each video through to the end (a rough sketch of this follows the list)
– A front-end display showing users how their emotions changed over the course of the video
– Asking users how they actually felt about the video, compared with the emotion captured by the technology
– An algorithm that matches your personal preferences with your emotional response to pick the next video you want to watch.
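
To make the backend tracking idea a bit more concrete, here is a minimal sketch (in Python) of what per-video emotion logging could look like, assuming the technology exposes two scores per reading: energy level and body movement. All names here (EmotionSample, VideoEmotionLog, emotion_delta) are hypothetical illustrations, not an actual Google or YouTube API.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class EmotionSample:
        # One reading from the (hypothetical) emotion-detection API
        timestamp_s: float   # seconds into the video
        energy: float        # 0.0 (low energy) .. 1.0 (high energy)
        movement: float      # 0.0 (still) .. 1.0 (a lot of body movement)

    @dataclass
    class VideoEmotionLog:
        # Backend record of how one user's emotions changed during one video
        user_id: str
        video_id: str
        samples: List[EmotionSample] = field(default_factory=list)

        def emotion_delta(self) -> Tuple[float, float]:
            # Change in (energy, movement) from the start of the video to the end
            if len(self.samples) < 2:
                return (0.0, 0.0)
            first, last = self.samples[0], self.samples[-1]
            return (last.energy - first.energy, last.movement - first.movement)

    # Example: a viewer who perks up over the course of a video
    log = VideoEmotionLog(user_id="u123", video_id="v456")
    log.samples.append(EmotionSample(timestamp_s=0.0, energy=0.2, movement=0.1))
    log.samples.append(EmotionSample(timestamp_s=300.0, energy=0.7, movement=0.4))
    print(log.emotion_delta())  # approximately (0.5, 0.3)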

Within the algorithm, there are many more features we can build. What do the emotions tell us? What does each of the four quadrants really mean in terms of how the user feels about a particular video? That is why we pair the detected emotion with the actual ratings the user provides, to inform how we move forward.
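
To make the four-quadrant idea concrete, here is one possible sketch of mapping the 2 x 2 grid (energy level and body movement) onto a quadrant label and pairing it with the user's explicit rating to produce a signal for the recommender. The quadrant names, thresholds, and weights are assumptions for illustration only, not anything Google has published.

    from typing import Optional

    def emotion_quadrant(energy: float, movement: float) -> str:
        # Map (energy, movement) scores in [0, 1] onto one of the four quadrants.
        # The 0.5 cut-offs are an assumption for illustration only.
        high_energy = energy >= 0.5
        high_movement = movement >= 0.5
        if high_energy and high_movement:
            return "excited"    # e.g. laughing or moving along with the video
        if high_energy:
            return "engaged"    # alert but still; focused viewing
        if high_movement:
            return "restless"   # low energy but fidgeting; possibly bored
        return "calm"           # low energy, low movement

    def training_label(quadrant: str, explicit_rating: Optional[int]) -> float:
        # Blend the inferred emotion with the user's own rating (+1 thumbs up,
        # -1 thumbs down, or None). The explicit rating is weighted more heavily,
        # so the inferred signal supplements rather than overrides it.
        inferred = {"excited": 1.0, "engaged": 0.5, "calm": 0.0, "restless": -0.5}[quadrant]
        if explicit_rating is None:
            return inferred
        return 0.7 * explicit_rating + 0.3 * inferred

The exact numbers matter less than the design choice: the inferred quadrant only supplements what the user explicitly tells us, it never overrides it.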

All of the bullet points above would be key to the first launch, since they give us the information we need to keep iterating on the recommendation engine. We will know the recommendation engine is working when we pair an internal confidence metric with the user's actual rating, and also through traditional metrics like how many videos users watch one after another.

In summary, I would leverage the emotion engine Google created to deliver better recommendations for users on YouTube. At first launch, we should address user privacy concerns by making the feature opt-in, and then gather data both through the technology and through user input to help our machine learning algorithm produce better results in the future.

answered by (116 points)
+2
Scott – amazing creativity! I was thinking of leveraging a tool like this to help non-verbal people communicate. If you have an autistic kid, could you potentially use facial expression and energy recognition to predict whether they are feeling hungry, bored, tired, happy, etc.?
You might want to address privacy concerns too. There should be a way for someone to turn off facial/energy detection by voice when they want their privacy protected, or, as soon as someone else comes into the frame, have the application notify them that they are being watched.
Will you record videos? Will you store them on a server in order to analyze the emotions in real time? Talk about how you should delete video/energy recordings as soon as emotion analysis is done, unless the user chooses to explicitly save the recordings.
+2
That's a good one. Extending it to help blind people understand other people's expressions when they are conversing in the real world.

Track people's reactions to new products in retail.

Monitor the health of collaboration in an open office.
