ABC News just uploaded a video of an Uber self-driving car accident. Disturbing as it is, two things are important to note from that video.
- Every vehicle in San Francisco is fitted with a sensor that broadcasts its presence for GPS tracking. The cyclist had removed the sensor and was walking in the wrong direction.
- The safety driver of the car was not alert; she was using her phone while the car was driving.
There is an old saying: "It takes two hands to clap and make a noise." In this incident, both the driver and the cyclist made mistakes, and the final result was the loss of a human life.
To avoid such incidents in the future, especially in this emerging era of self-driving cars, the only way would be to stay vigilant every minute of our lives. But we are human beings, and mistakes are natural: some are reflexes, others lapses of attention. As a solution, the in-cabin Artificial Intelligence software from Affectiva tracks the emotional state of the driver and passengers throughout the journey.
Why In-cabin AI software for self-driving cars?
Most self-driving cars operate using lidar, cameras, and radar mounted outside the vehicle, so the car sees what is happening outside through these devices. Affectiva takes the opposite view with an autonomous-vehicle product that focuses on what is going on inside the car. It tracks the driver's emotional state and alerts the passengers about how effectively the driver can operate the vehicle.
How is the In-Cabin AI going to help self-driving cars?
If you are drowsy or frustrated and in no state to drive, but reaching your destination is important, you continue to drive anyway. These are exactly the situations in which you tend to lose driving control and can cause an accident.
What if someone, or something, alerted you and your co-passenger about your emotional state? You would realize that you need to rest or switch seats, and your co-passenger could take over the drive. If you are alone, you could park the car and take a quick nap. That is exactly what Affectiva is for.
Who is Affectiva and where are they from?
Affectiva is a Boston-based startup that spun out of the MIT Media Lab. It came up with an innovative, industry-first, real-time AI solution that scans the driver's emotions and sends alerts about driving effectiveness; the software is named "Affectiva Automotive AI". The company was co-founded by Rana el Kaliouby, its CEO and a Young Global Leader at the World Economic Forum, in the Greater Boston Area.
What are the features of Automotive AI software?
Automotive AI comes in two flavors: Driver Safety Monitoring AI and Occupant Engagement Monitoring AI. Both are intended to sense the emotions of the driver and passengers during the drive and send cautions to alert them.
Features of Driver Safety Monitoring AI Software:
Driver Safety Monitoring AI tracks the driver's facial expressions, such as joy, anger, and surprise, which relate to positive and negative valence in psychology. Based on the driver's level of fatigue or distraction, the AI can determine whether the car needs to hand control off to someone else.
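The handoff decision described above can be sketched roughly as follows. Note that the metric names, value ranges, and thresholds here are illustrative assumptions for this article, not Affectiva's actual API:

```python
# Illustrative sketch of a driver-monitoring handoff decision.
# The scores and thresholds below are hypothetical; the real SDK
# exposes its own metrics and value ranges.

DROWSINESS_THRESHOLD = 0.7   # assumed 0..1 drowsiness score
DISTRACTION_THRESHOLD = 0.6  # assumed 0..1 distraction score

def should_hand_off(drowsiness: float, distraction: float) -> bool:
    """Return True when the driver appears unfit to keep control."""
    return (drowsiness >= DROWSINESS_THRESHOLD
            or distraction >= DISTRACTION_THRESHOLD)

def alert_action(drowsiness: float, distraction: float) -> str:
    """Turn the handoff decision into a human-readable caution."""
    if should_hand_off(drowsiness, distraction):
        return "ALERT: hand control to a co-passenger or pull over"
    return "OK: driver fit to continue"

print(alert_action(0.85, 0.2))  # drowsy driver -> alert
print(alert_action(0.10, 0.1))  # alert driver -> no action
```

In a real system these scores would be updated continuously from the camera feed, and the alert would feed the car's human-machine interface rather than a print statement.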
Features of Occupant Engagement Monitoring AI Software:
Suppose your passenger is in a depressed mood and a sad song is playing in the car. The AI can recommend songs or videos to change the mood.
Suppose the weather is changing and snowfall is on the way. You need warm clothes to protect yourself, but unfortunately you don't have any. The AI can share e-commerce purchase recommendations.
The Occupant Engagement Monitoring AI software adapts to environmental conditions such as lighting, thunderstorms, rain, and snowfall, and a Human-Machine Interface assists you with recommendations for a safe drive.
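The two scenarios above boil down to a mapping from sensed mood and environment to recommendations. A minimal sketch, with entirely hypothetical moods, weather states, and suggestions (the real product presumably uses learned models, not lookup tables):

```python
# Hypothetical sketch of occupant-engagement recommendations.
# All moods, weather states, and suggestions are illustrative.

MUSIC_BY_MOOD = {
    "sad": "upbeat playlist",
    "angry": "calming playlist",
    "neutral": "driver's favorites",
}

SHOPPING_BY_WEATHER = {
    "snow": "warm clothing",
    "rain": "an umbrella",
}

def recommend(mood: str, weather: str) -> list[str]:
    """Combine mood-based media and weather-based shopping tips."""
    tips = []
    if mood in MUSIC_BY_MOOD:
        tips.append(f"play {MUSIC_BY_MOOD[mood]}")
    if weather in SHOPPING_BY_WEATHER:
        tips.append(f"suggest buying {SHOPPING_BY_WEATHER[weather]}")
    return tips

print(recommend("sad", "snow"))
# ['play upbeat playlist', 'suggest buying warm clothing']
```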
How does Automotive AI Software Work?
Microphones and in-cabin cameras are fixed inside the car. The Affectiva Automotive AI algorithm analyzes the driver's facial and vocal expressions to identify their emotion. These signals are processed by the Emotion SDK, a machine-learning engine built using computer vision, deep learning, and speech science and trained on massive amounts of real-world emotion data. With the help of neural networks, the SDK analyzes acoustic-prosodic features such as tone, loudness, tempo, and pause patterns to characterize the driver's speech and actions. It then derives emotion metrics, as depicted in the image below.
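The flow above fuses two channels, face and voice, into emotion metrics. The toy sketch below shows the shape of such a pipeline; the feature names and formulas are invented for illustration, whereas the real SDK uses trained deep-learning models:

```python
# Rough sketch of the audio-visual emotion pipeline described above.
# Feature names and formulas are assumptions, not Affectiva's models.

def facial_valence(smile: float, brow_furrow: float) -> float:
    """Toy valence estimate from two facial metrics in [0, 1]."""
    return smile - brow_furrow

def vocal_arousal(loudness: float, tempo: float, pause_ratio: float) -> float:
    """Toy arousal estimate from acoustic-prosodic features in [0, 1]."""
    return (loudness + tempo) / 2 - pause_ratio

def emotion_metrics(face: dict, voice: dict) -> dict:
    """Fuse the facial and vocal channels into simple emotion metrics."""
    return {
        "valence": facial_valence(face["smile"], face["brow_furrow"]),
        "arousal": vocal_arousal(voice["loudness"], voice["tempo"],
                                 voice["pause_ratio"]),
    }

metrics = emotion_metrics(
    face={"smile": 0.8, "brow_furrow": 0.1},
    voice={"loudness": 0.6, "tempo": 0.5, "pause_ratio": 0.2},
)
print(metrics)
```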
What are the software and hardware requirements needed?
Affectiva ships as a C++ SDK for embedded systems running on ARM64 and Intel x86_64 CPU architectures, and it runs on almost every operating system on the market. The SDK covers 7 emotions, 13 emojis, and 20 expressions. Using machine-learning classifiers, it analyzes the pixels of the face image to classify facial expressions. The Facial Action Coding System (FACS) then describes these expressions as action units, and combinations of action units are mapped to emotions.
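The final mapping step can be illustrated with a table lookup. The action-unit combinations below follow commonly cited FACS-style prototypes (e.g. cheek raiser plus lip corner puller for happiness); this is a simplification for illustration, since Affectiva's actual classifiers are learned from data:

```python
# Simplified illustration of mapping FACS action-unit (AU) combinations
# to basic emotions. Prototypes follow commonly cited FACS conventions;
# the real SDK learns these mappings rather than using a lookup table.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer
                                # + lip corner depressor
    "surprise": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def classify(active_aus: set[int]) -> str:
    """Return the first emotion whose prototype AUs are all active."""
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        if prototype <= active_aus:  # subset test: all AUs present
            return emotion
    return "neutral"

print(classify({6, 12}))      # happiness
print(classify({1, 4, 15}))   # sadness
print(classify({17}))         # neutral (no prototype matches)
```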
Who is using the software?
Well, Affectiva is a startup, so we have yet to see big adoption in the self-driving car industry. Players such as Uber Technologies, Inc., General Motors Company, Ford Motor Company, Toyota Motor Corporation, and Honda Motor Co., Ltd. need such a technology to avoid accidents as artificial-intelligence innovation in automotive accelerates, so the potential market is huge.
The Emotion SDK is already in use in many countries across the world. Affectiva is used globally and has built what is now the world's largest emotion data repository. Here are some of the top countries using it for various purposes.