Press "Enter" to skip to content

Artificial intelligence and popular culture

In the twenty-first century, technology is progressing more rapidly than ever, and these innovations are swiftly being integrated into the everyday life of the average consumer. Artificial intelligence (AI) is one of these emerging fields that is also breaking into popular culture. The Brookings Institution defines artificial intelligence as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention” (West). These machines are able to analyze data and instantly make human-like decisions. While merely decades ago this may have seemed like something found only in science fiction, artificial intelligence is drastically altering American lifestyles and has been widely explored in popular culture through various media outlets.

Since its debut in 2011, the hit British television show Black Mirror has explored the possibility of the overextension of technology in many different, shocking scenarios. In its second season, an episode entitled “Be Right Back” investigates the use of artificial intelligence as a method of overcoming grief. The viewer follows Martha, a young artist, through the death of her boyfriend Ash in an unexpected accident. At Ash’s funeral, Martha’s friend Sara tells her about a new program that will allow her to speak to Ash again by analyzing his public social media posts and even private messages, saying, “the more it has, the more it’s him” (“Be Right Back”). At first, Martha is dubious, even angry, but once she discovers that she is pregnant with Ash’s child, she decides to give the program a try. Initially, she communicates with “Ash” solely through emails, but, after some time, this evolves into phone calls. The software uses videos of Ash to replicate his voice and speech patterns; it even adopts their special phrases, like “throwing a jeb.” Still, over time, Martha becomes frustrated with the limitations of this virtual interaction. She finds out that she can order a physical body: essentially, a robot that looks and acts exactly like him. At first, it is great; she finally has her partner back. However, it is still not really Ash. The artificial intelligence software can never fully replicate his humanness; as Martha says, “you’re just a few ripples of you… you’re just a performance of stuff that he performed without thinking, and it’s not enough” (“Be Right Back”). This episode perfectly encapsulates the primary limitation of artificial intelligence: lines of code cannot fully navigate the complex pathways of human emotion. Charlie Brooker, Black Mirror’s creator, has stated that his goal is for viewers to realize how “nuts-deep in the future we already are” (Harvey).

Many technology companies have come out with their own versions of “virtual therapists”: AI bots much like the one Martha used, designed to help consumers cope with grief and mental health disorders. One prominent example was created after Eugenia Kuyda’s best friend was killed in a car crash. To help with her grief, Kuyda adapted a pre-existing virtual assistant program, Luka, to become a robot version of her late friend, much like the program Martha used to replicate Ash. Now this same adaptable software, renamed “Replika,” has been downloaded over 10,000 times (Murphy). After exchanging messages with the user and analyzing linked social media accounts, Replika begins to mimic speech patterns, emoji use, and even sense of humor (Poyant).
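Replika’s actual models are proprietary, and systems of this kind rely on far more sophisticated machine learning than any short example could capture, but the basic idea of echoing a person’s phrasing from their message history can be illustrated with a toy sketch. The word-level Markov chain below is a hypothetical stand-in, not Replika’s method, and every name and message in it is invented:

```python
import random
from collections import defaultdict

def build_chain(messages, order=2):
    """Build a word-level Markov chain from a person's message history."""
    chain = defaultdict(list)
    for message in messages:
        words = message.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def generate_reply(chain, max_words=20):
    """Generate text that statistically echoes the source's phrasing."""
    key = random.choice(list(chain.keys()))
    words = list(key)
    for _ in range(max_words - len(words)):
        next_words = chain.get(tuple(words[-len(key):]), [])
        if not next_words:
            break
        words.append(random.choice(next_words))
    return " ".join(words)

# Hypothetical message history standing in for scraped posts and chats.
history = [
    "honestly that movie was throwing a jeb",
    "throwing a jeb is my new favourite phrase",
    "honestly that was the best day ever",
]
chain = build_chain(history)
print(generate_reply(chain))
```

Even this crude approach reproduces a person’s pet phrases; the gap between such surface mimicry and genuine understanding is exactly the gap “Be Right Back” dramatizes.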
By working with psychologists, Kuyda was able to create a program that helps people become more mindful and develop introspection and problem-solving skills. So, while Replika is not able to empathize with the user the way a traditional therapist can, it has proven effective for large numbers of people. As Monica Cain, a counseling psychologist at the Nightingale Hospital in London, stated, “It’s never going to replace a human interaction,” but aspects of the program “can be enormously helpful” (Murphy).

Consumers are not only intrigued by these interactive bots; artificial intelligence has also transformed the automobile industry. In 2015, Tesla released a software update officially named Tesla Version 7.0 but more commonly referred to simply as “autopilot” (Bradley). Immediately, the internet was flooded with videos of Tesla users testing out their cars’ new feature, cruising down the highway hands-free. Over two years later, the public’s fascination with autonomous vehicles has not dissipated. In early February of 2018, entrepreneur and YouTuber Jeffree Star released a video entitled “Doing My Makeup on Tesla Autopilot” that has since accumulated over 5.6 million views (Star). In the video, Star experimented with the autopilot feature of his new Tesla Model X by going hands-free on the Los Angeles freeway. While these actions show his trust in the autonomous technology, he begins the video with a warning for all his viewers not to “attempt to reenact this under any circumstances” (Star). While Star does have some complaints (the program brakes slightly abruptly and cannot handle sharp turns), his overall impression is positive. The success of this video demonstrates the current cultural fascination with Tesla, driven largely by its unique autopilot feature.

While artificial intelligence programs like Tesla’s autopilot and Replika have had huge commercial success, significant drawbacks and consumer concerns remain. Artificial intelligence software does not have the capacity to make complicated emotional or moral judgments as humans do. In the case of autonomous cars like Teslas, there exists what is essentially a modern-day trolley problem: sometimes a crash is inevitable, so how can the car decide between harming its own passengers and harming those of other vehicles? Noah J. Goodall, a researcher at the Virginia Department of Transportation and the University of Virginia, argues that software developers can employ risk management techniques to make these difficult ethical decisions (819). However, this still places great power in the hands of the programmers who write these risk-management algorithms, which may ultimately decide between the deaths of two people. There is no straightforward answer to the complex ethical questions autonomous vehicles sometimes face, but Goodall’s proposal of combining clear, traceable logic with widely accepted ethical principles is a valid solution.
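Goodall argues at the level of principles rather than published code, but the core of a risk-management approach (weighing each possible maneuver by the estimated probability and severity of the harm it could cause) can be made concrete with a small, purely hypothetical sketch; the maneuvers and numbers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action with estimated crash probability and severity.

    Severity is a unitless harm score; a real system would need far
    richer models of uncertainty and of who actually bears the risk.
    """
    name: str
    crash_probability: float  # estimated likelihood of a collision
    harm_severity: float      # estimated harm if the collision occurs

def expected_harm(m: Maneuver) -> float:
    # Classic risk definition: probability times magnitude.
    return m.crash_probability * m.harm_severity

# Hypothetical options in an unavoidable-crash scenario.
options = [
    Maneuver("brake hard in lane", crash_probability=0.9, harm_severity=2.0),
    Maneuver("swerve to shoulder", crash_probability=0.3, harm_severity=5.0),
    Maneuver("swerve into adjacent lane", crash_probability=0.5, harm_severity=8.0),
]

# Choose the maneuver that minimizes expected harm. The "traceable
# logic" part of Goodall's proposal is that a choice made this way can
# be logged and audited after the fact.
best = min(options, key=expected_harm)
print(f"chosen maneuver: {best.name} (expected harm {expected_harm(best):.2f})")
```

The hard part, of course, is not the arithmetic but the harm scores themselves, which is precisely where the ethical weight of the programmer’s choices lies.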
Some scientists, like Jason Megill of Bentley University, do not believe that emotional qualia are essential to cognition (Megill). But if the goal of artificial intelligence is to mimic or replace human decisions and behaviors, emotion must be considered. One way software developers are combating the emotional limitations of artificial intelligence is through the implementation of artificial emotional intelligence. “Emotion AI” systems are being developed to recognize and react to complex cognitive states by analyzing both verbal and nonverbal expressions of emotion (el Kaliouby). The obvious application of this is in therapy bots like Replika; empathy and the detection of nonverbal cues would greatly increase their effectiveness. However, artificial emotional intelligence would also have a variety of other applications, including automated vehicles. A car could sense fatigue or inebriation and take over, or simply use emotional awareness to tailor its settings to the passengers’ preferences (el Kaliouby).

As the technology behind artificial intelligence progresses, so does the cultural fascination with it. Various applications of AI have been explored through television, online content creators, and other sources of popular media. As the field grows, researchers continue to develop new ways of addressing consumer concerns, both through logical approaches like Goodall’s risk management and through psychology-based approaches like emotion AI.
