NEWS

Victor Shih, CTO at Braiq

30 April 2018

Beta release of the eM3 cloud-based software module for multi-algorithm, multi-view, multi-person, face-based emotion analytics

Modern Artificial Intelligence (AI) systems use rich datasets to train models that process information and interact with us, but the vast majority of them lack a feature that would make them more widely accepted and “trusted”: a degree of emotional intelligence.

Think of the last time you were riding shotgun in a car: were you comfortable with the driver’s style of driving? Did you find yourself pressing on an imaginary brake, or feeling frustrated about how slowly they were going? Chances are you were feeling discomfort, unease, and/or frustration even though you were probably in safe hands. A perceptive and friendly driver would be able to infer your discomfort and modify their driving style to improve your ride experience. If the driver is an AI, however, it will probably not perceive your discomfort, unease, or frustration unless it is actively monitoring your emotions. That’s where BRAIQ comes in. We at BRAIQ are looking into ways to incorporate emotional intelligence into today’s advanced AI.

Facial expressions are one of the richest and most universally understood ways in which we convey emotion. Today we dive into one of the many ways in which BRAIQ analyzes passenger data, and how we improve upon existing facial analytics with our eM3 Facial Analytics module, a streamlined software stack for efficiently integrating facial analytics in the mobility space and beyond.

Multi-API
Many companies, from big tech to startups, have been looking into facial data analysis and provide API access to their analytics systems. However, each company goes about this analysis in a different way. eM3 uses readily available facial analysis APIs from Microsoft, Amazon, and Google in a Multi-API solution that takes advantage of the idiosyncrasies of each API and combines them into a more accurate and robust measure of facially expressed emotion.

Combining multiple APIs lets eM3 leverage the benefits of ensemble machine learning. Because each API is trained internally on different data, each one can fail to detect faces in certain settings. eM3, in essence, combines all the data streams to increase the consistency and precision of face detection across a wider population of passengers.
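
To make that fall-back behavior concrete, here is a minimal Python sketch of the idea. The per-vendor client objects and their analyze method are hypothetical stand-ins for the actual Microsoft, Amazon, and Google SDK calls, and the averaging baseline is only a placeholder for the learned ensemble described next.

```python
# Sketch of the Multi-API idea: query several vendor face APIs for the same
# frame, so a face missed or dropped by one provider can still be scored.
# AzureFaceClient-style wrappers and their .analyze() method are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EmotionScores:
    vendor: str
    scores: dict  # e.g. {"happiness": 0.8, "surprise": 0.1, ...}

def query_all_vendors(frame_bytes: bytes, clients) -> list[EmotionScores]:
    """Send the same frame to every vendor API; skip any that fail or
    detect no face, so one provider's blind spot doesn't drop the frame."""
    results = []
    for client in clients:
        try:
            scores = client.analyze(frame_bytes)  # hypothetical wrapper call
            if scores is not None:
                results.append(EmotionScores(client.name, scores))
        except Exception:
            continue  # tolerate per-vendor outages or rate limits
    return results

def naive_fuse(results: list[EmotionScores]) -> Optional[dict]:
    """Baseline fusion: average each emotion over the vendors that responded.
    (A learned ensemble, as described below, replaces this in practice.)"""
    if not results:
        return None
    emotions = {k for r in results for k in r.scores}
    return {e: sum(r.scores.get(e, 0.0) for r in results) / len(results)
            for e in emotions}
```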

Each API uses its own machine learning methods to infer expressed emotions, and when you dig into the outputs, they can differ drastically from one another. Our ensemble deep network trains a variational autoencoder on the combined outputs of the multiple APIs to extract latent variables that more accurately assess each individual’s emotions, increasing sensitivity to emotion onsets while also removing output noise.
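
As a rough illustration of that fusion step, the sketch below shows a generic variational autoencoder over concatenated per-vendor emotion vectors. The layer sizes, the eight-emotion output, and the latent dimension are all assumptions chosen for the example, not eM3’s actual architecture.

```python
# Generic VAE over stacked vendor outputs: encode noisy, disagreeing emotion
# vectors into a small shared latent space, then decode back. All dimensions
# here are illustrative assumptions.

import torch
import torch.nn as nn

class FusionVAE(nn.Module):
    def __init__(self, n_vendors=3, n_emotions=8, latent_dim=4):
        super().__init__()
        in_dim = n_vendors * n_emotions              # concatenated vendor outputs
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)       # latent mean
        self.to_logvar = nn.Linear(32, latent_dim)   # latent log-variance
        self.decoder = nn.Sequential(                # reconstruct vendor outputs
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior;
    # the KL term is what smooths away vendor-specific output noise.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```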

Multi-View
eM3 also integrates multiple views from different cameras into a single comprehensive analysis, opening up a new dimension for passenger analysis and more effective use of facial analytics. No matter where an occupant is looking, inside or outside the cabin, we want to be aware of their emotions.

By integrating multiple camera angles, we are able to assess facial expressions more accurately at all times. Our software uses facial verification to identify each person and track them across views, and it relies on relative head position and other measures to intelligently fuse data from the different streams as people move and interact with their surroundings.
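
One simple way to picture that fusion, purely as an illustration and not eM3’s actual rule, is to weight each camera’s estimate by how frontal the face appears in that view:

```python
# Pose-weighted multi-view fusion sketch: near-frontal views are generally
# more reliable, so weight each camera's emotion estimate by cosine of the
# face's yaw angle in that view. The weighting scheme is an assumption.

import math

def fuse_views(view_estimates):
    """view_estimates: list of (yaw_degrees, emotion_scores) pairs for one
    verified identity, one entry per camera that currently sees the face."""
    weighted, total_w = {}, 0.0
    for yaw, scores in view_estimates:
        w = max(math.cos(math.radians(yaw)), 0.0)  # frontal view -> weight ~1
        total_w += w
        for emotion, value in scores.items():
            weighted[emotion] = weighted.get(emotion, 0.0) + w * value
    if total_w == 0.0:
        return None  # no usable view this frame
    return {e: v / total_w for e, v in weighted.items()}

# Example: a frontal camera and an oblique side camera disagree slightly;
# the oblique view is down-weighted in the fused result.
fused = fuse_views([
    (5.0,  {"happiness": 0.80, "neutral": 0.20}),
    (60.0, {"happiness": 0.40, "neutral": 0.60}),
])
```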

Multi-Person
eM3 is also designed with the future of ride-sharing in mind. It collates the data for each individual being tracked and delivers a comprehensive output without the headache of managing multiple data streams yourself. Our software scales from a few friends inside a car to a large group of people inside a bus. One of eM3’s biggest assets is its ability to keep track of multiple people as they move through an environment, changing orientation and position and passing in and out of the coverage of individual cameras as well as the sweet spots of the APIs.
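
To show what collating data per individual might look like from an integrator’s point of view, here is a hypothetical registry keyed by a face-verification identity. The verify_identity helper is an assumed stand-in, not part of the eM3 API.

```python
# Hypothetical per-occupant registry: detections from any camera attach to
# the right person via face verification, so occupants keep their timelines
# even as they pass between camera views.

from collections import defaultdict

class OccupantRegistry:
    def __init__(self, verify_identity):
        self.verify_identity = verify_identity   # face crop -> stable person id
        self.history = defaultdict(list)         # person id -> detection timeline

    def ingest(self, camera_id, timestamp, face_crop, emotion_scores):
        """Attach one detection to the occupant it belongs to; occupants who
        are momentarily out of every camera's view simply receive no update."""
        person_id = self.verify_identity(face_crop)
        self.history[person_id].append(
            {"t": timestamp, "camera": camera_id, "emotions": emotion_scores})

    def latest(self, person_id):
        """Most recent reading for one occupant, or None if never seen."""
        return self.history[person_id][-1] if self.history[person_id] else None
```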

Future Updates
This is just the beginning for eM3, and we will be following up with additional features soon. We will be extending eM3 with multi-modal sensing capabilities that analyze other emotional signals such as heart rate, eye movements, and skin conductance to build an even more comprehensive emotional evaluation. Stay tuned for updates on the eM3 platform, as well as more insight into how it fits into our vision of incorporating emotional intelligence into artificial intelligence.

Be a BETA tester
If you are interested in getting access to the eM3 system and seeing how well it works on your data, please reach out to us at info@braiq.ai.