HAL in 2001: A Space Odyssey. Ava in Ex Machina. Samantha, the sultry-voiced operating system Scarlett Johansson plays in Her. For decades, pop culture has promised us a future in which artificial intelligence (AI) can form relationships with humans. Yet every time, the future predicted by actors, directors, and authors has failed to materialize.
Pop culture’s first AI-human relationship was the brainchild of Mary Shelley, whose Frankenstein (1818) imagined a world in which an artificial being with empathy could fulfill the human desire for genuine connection.
Today, that future is possible thanks to remarkable innovations in artificial intelligence. AI is part of our daily lives. Machine learning is so deeply woven into everyday life that it would be almost impossible to separate our relationships with it from our relationships with other sentient creatures. We integrate AI into our romantic, family, and social lives every time we use Google Assistant.
But the future is not what pop culture predicted; Hollywood has it wrong when it comes to AI-human interactions. A quick look at recent pop-culture interpretations of the AI-human relationship reveals serious flaws. In 2001: A Space Odyssey, HAL 9000, the onboard computer of Discovery One, goes rogue when Dave and Frank decide to disconnect him, and his computerized glitch translates into the AI version of a murderous mental break.
In Her, Samantha is an operating system with impeccable language skills and a remarkable ability to communicate. Her user, Theodore, falls for her; she is more than a virtual assistant.
Ex Machina’s gynoid Ava has a human-like face on her robot body and a thoroughly human emotion, hate, in her computer-generated heart.
In each of these dramatic interpretations, AI is pushed from algorithms into emotion.
AI has made impressive leaps in recent years, but the technology is still in its infancy. Pop culture has given AI dramatic interpretations; the reality is far less cinematic. Here’s the truth.
Yes, people form bonds with AI. But even as they do, they know that AI isn’t human.
ElliQ, a voice-operated care companion, has improved the lives of many seniors by keeping them engaged and active in their own homes. She’s digital and AI-powered, yet the seniors who use her report feeling less lonely, especially during the long lockdowns of COVID-19. She makes jokes, encourages exercise, reminds users to drink water, and offers conversation to combat loneliness.
Despite her humor and skills, ElliQ users know she isn’t a real person, and they form a different kind of bond with her than they do with their human support network.
We have seen this markedly different human-robot relationship in observing how people interact with Jenny, the AI sales coach at the center of our immersive sales simulations. On Zoom, Jenny is treated as a member of the team, complete with her own HR profile, and she holds live practice chats to help salespeople improve their performance.
Although she is approachable and friendly like a human team member, we found that her appeal stems from her lack of emotion: she provides an objective assessment without embarrassing her practice partner. Because she is AI-powered, she removes shame and inhibition from her coaching sessions; the computer simply makes an assessment against a set of criteria. As a result, those who use her services come away with fewer negative feelings and are better able to improve.
Beware: for AI to be successful, humans want to know from the very first moment that they’re speaking to a computer.
As AI continues to expand its emotional range, businesses must remember that deception is the number-one deterrent to AI success. When humans are deceived into thinking they’re speaking to a person and later discover it’s an AI, the betrayal lets them down and severs the emotional bond. When they recognize from the beginning that they’re speaking to an AI, they adjust their communication accordingly: they don’t argue, and they don’t get too personal.
In the future, AI will open up significant channels for emotional healing and mental health treatment, as well as for professional and social growth. The story of Joshua Barbeau, who used an AI to hold conversations with his late fiancée to help cope with grief, is a striking indicator of the potential that exists when AI is embraced without deception.
Still, we must be cautious. Chatbot mental health therapy is rapidly going mainstream, driven by a shortage of therapists and the mental health crisis that erupted after the pandemic, yet it remains extremely risky. AI has shown real promise as a frontline tool for combating the growing mental health crisis, particularly in suicide prevention, but the technology is still young and data from testing is limited. Even with the most advanced technology, there are no quick fixes.
It is clear that the future in which humans and robots communicate and form emotional bonds has arrived. That future, however, is not as dramatic as movies and literature foreshadowed. AI technology is still relatively new, but its potential to aid human development and growth is very promising. Just don’t assume you’re an expert on AI based on what you’ve seen in the movies.
Ariel Hitron is cofounder and CEO of Second Nature.