How do you help an autistic person ‘see’ emotions? A major challenge for many autistic people is connecting perception with empathy. Most of us take it for granted that we can read another person’s emotions through subtleties such as body language, yet for many this is a real struggle.

Enter emotion AI. Researchers at Stanford University modified Google’s augmented reality glasses to read emotions in others and notify the wearer. The glasses detect someone’s mood through their eye contact, facial expressions and body language, and then tell the wearer what emotions they’re picking up. Stanford calls this the Autism Glass Project, but it’s telling that its users have dubbed them the Superpower Glass.

“Emotion AI taps into the individual,” explains Zabeth Venter, CEO and co-founder of Averly, a South African property rental company that uses emotion AI. “If you think about facial recognition, which is a kind of emotion AI, I can pick up if you like what I’m saying by whether your smile is a smirk or a real genuine smile.”

Such nuances go deeper. Take polling: what is your favourite colour? Maybe it’s purple. But did you say it enthusiastically? Did you hesitate? Did you just say it to say something? Did you even understand the question? We simply can’t get this level of context from surveys, sales data and the other ways we try to understand people through information. But through emotion AI, we can grasp incredible nuance.

The problem with AI bias

AI is in a quandary: it’s useful but also potentially biased. A notorious example is how, several years ago, Google’s photo AI mistakenly tagged people with dark complexions as gorillas. Why did this happen? Thankfully there isn’t some racist cabal working at Google. Instead, the data used to train the AI lacked the diversity to make a proper distinction, leading to a phenomenon called AI bias.

“How the data bias comes in results from how good the data is and the understanding you have of your data,” says Venter. “It’s very easy as a machine learning company to just go broad and find public data on the internet. That’s a cheap way of doing things, but you’re not getting accurate representation. Ethical AI is about looking for that nuance. If I’ve built my machine learning model to just cover one demographic group, and now I rate everybody else with that model, that’s just plain stupid.”
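The point about training a model on just one demographic group can be made concrete with a toy calculation. The sketch below uses entirely made-up numbers (it is not Averly’s system or data) to show how a healthy-looking overall accuracy can hide a much weaker result for an under-represented group:

```python
# Illustrative sketch with hypothetical data: a model's overall accuracy
# can mask poor performance on a group under-represented in training.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# Hypothetical evaluation results, keyed by demographic group.
# Each pair is (model's prediction, true label).
results = {
    "group_a": [("smile", "smile")] * 95 + [("smirk", "smile")] * 5,
    "group_b": [("smile", "smile")] * 60 + [("smirk", "smile")] * 40,
}

overall = [pair for pairs in results.values() for pair in pairs]
print(f"overall accuracy: {accuracy(overall):.2f}")  # looks acceptable
for group, pairs in results.items():
    print(f"{group}: {accuracy(pairs):.2f}")         # reveals the gap
```

Reporting accuracy per group, rather than a single average, is one simple way to surface the kind of bias Venter describes before a model is deployed.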

Emotion AI is doubly vulnerable to this issue. Emotions are more abstract, and cultural nuances are often essential to understanding someone’s perspective. In the rental market, such misunderstandings can lead to bad experiences for landlords and tenants alike, says Venter.

“If we want to have that relevance in the rental space, I shouldn’t go and search for data that just offers generic data points. You should understand your rental market, understand the tenant, agent and landlord. If we get that right, we get people to engage with each other more authentically.”

AI matters because relationships matter

Relationships are crucial to success in many transactions, especially property rentals. But it’s a concept we often pay only lip service to: companies may talk about relationships and customer-centricity, but they quickly fall back on other, more convenient positions. Then the pandemic happened, placing renewed emphasis on relationships. And AI is the conduit to understanding those connections better.

“If you do not have great relationships, how do you build them? How do you get people to understand what’s important to others? That’s really what you can do with AI.”

Using AI to create authentic context and relationships with others, especially customers, is key to Africa’s development. Many Africans primarily access information and services through their phones, and AI is a terrific way to engage with such audiences, provided it has nuanced data that respects local differences.

This creates a fantastic opportunity for the continent. Most AI systems, and their biases, come from the developed world. It’s fair to say that data bias exists because those systems’ creators don’t understand the local contexts their technologies reach. But AI trained in those localised environments is powerful and progressive.

Emotion AI is at the cutting edge of what ethical AI should be. To get there, you need two things: relevant local data and diversity among those using the data.

“Even something like who’s analysing your data makes a huge difference,” says Venter, noting Averly’s workforce is more than 40% female. “If you have a bunch of guys looking at data, they might think something is not important when it is. If you have women only looking at it, they might have a completely different view. It’s about having a balanced view, and making sure you analyse your data in that sense, that creates fair and useful AI.”

Emotion AI can help autistic children read emotions just as it can connect the best landlords and renters for long-term success. Artificial intelligence could be the most incredible technology we’ve ever invented. If it is honed, trained and used in the local context, made by locals for locals, it will change the world for the better. As long as companies such as Averly harness emotion AI to create empathy, connections and relationships, the possibilities are amazing.