
AI chatbots are people, too. (Except they’re not.)

Tuesday July 23, 2024, 12:00 PM, from ComputerWorld
During the last two years of the Great Depression, Westinghouse built a robot. 

One of several famous “mechanical men” built in that era, the company’s Elektro robot was created to showcase Westinghouse’s electrical engineering prowess at the 1939 New York World’s Fair.

Elektro amazed crowds. Standing seven feet tall and weighing 265 pounds, the humanoid robot could walk, talk, blow up balloons, move its head and arms, and even smoke cigarettes. Its photoelectric “eyes” could distinguish red and green light.

Elektro could speak about 700 words, played back from a series of 78 rpm turntables installed in its torso and connected to relay switches.

After dazzling crowds nationwide in the 1940s and seeing enthusiasm wane in the ’50s, Elektro was dismantled and put into storage. Later, it was rediscovered, reassembled, and showcased again; it even appeared in a 1960 comedy film for adults called “Sex Kittens Go to College,” starring Mamie Van Doren. Elektro “played” the “role” of a campus robot called Thinko who could calculate the future, predict winning lottery numbers, and pick the outcome of horse races. (The entire movie is on YouTube.)

Throughout the Elektro craze, the public attributed human-like qualities to the robot and assumed it was the forerunner of robots that would join human society and the workforce, working both alongside and in competition with humans.

In other words, Elektro was the ChatGPT of its era. 

Why humans hallucinate about AI and robots

One common complaint about large language model (LLM)-based chatbots like OpenAI’s ChatGPT is that they “hallucinate” (generate information that seems plausible but is actually false, inaccurate or nonsensical). But people also “hallucinate” (experience something that isn’t there). Specifically, people tend to believe that AI chatbots experience human-like thoughts and feelings.

A survey study conducted by researchers at University College London and published in the journal Neuroscience of Consciousness found that a large percentage of the public believes LLMs experience consciousness, emotions and memories.

The study, “Folk psychological attributions of consciousness to large language models,” concluded that AI’s ability to generate coherent, contextually appropriate, and seemingly empathetic responses leads two-thirds (67%) of surveyed Americans to believe it has understanding and emotional depth.

In fact, the more sophisticated and human-like the AI’s responses are, the more likely people are to perceive it as having consciousness, which bodes ill for a future where AI gets much better. 

The Lattice HR fiasco

The HR software company Lattice recently announced a new policy of treating AI personas as “digital workers” equal to human employees, complete with official employee records. 

Lattice CEO Sarah Franklin said the company intends to onboard, train, and assign goals to AI personas. “By treating AI agents just like any human employee, businesses can take advantage of their utility while still holding them accountable for meeting goals,” she said.

It’s clear from Franklin’s statements that she believes the move represents how AI tools will be treated in the future.

But then she was slammed in the comments of her LinkedIn post announcing the move, and the company backed down, saying it would no longer treat AI personas like human employees.

The problem with thinking AI can think

A majority of people are deluded into thinking AI can think or feel. Companies are tempted to treat AI tools like humans. And companies in Silicon Valley and around the world are racing to build AI chatbots and assistants with “emotional intelligence,” human-like vocal patterns, and even creative decision-making and agency.

It’s clear why people believe AI can think and feel. Our Paleolithic-hominid brains are hardwired, then trained in infancy, to distinguish between what is human and what is not by noticing which animals in our world can hold a conversation, convey emotions, use reason, exhibit empathy and communicate with non-word sounds, facial expressions, hand gestures and body language. 

Human parents get right in the face of their newborn on day one, talking to the baby. Every day of our lives, people are talking to us. We learn to do the same. We learn early that talking is the exclusive province of our species.

It’s natural that when we hold a conversation with AI, our conditioned brains tell us that the AI’s capacity for conversation is based on the same stuff as our own capacity. That AI’s ability to speak is a deliberately crafted illusion feels counterintuitive. 

As a species, we’re clearly slouching toward the normalization of confusing software with humanity. 

It’s a tool, not a colleague

The first problem with believing AI has thoughts, feelings, consciousness and/or knowledge is that we won’t use it as effectively. AI is a tool, not a colleague. Mastery of that tool ultimately demands that we understand what it is and how it works. 

In our work, we’ll increasingly use AI tools for cybersecurity, IT administration, software development and architecture, hardware design and 100 other tasks. Despite the fuzzy language of the tech industry, we’ll succeed best with tomorrow’s tools by understanding, using and exploiting them — not “partnering with AI.” 

The second problem is that we might trust AI too much, relying on next-generation agentic AI to choose its own path to solving our problems and achieving the goals we set for it. Instead, we need to understand the potential hazards and make sure it does what we want it to do without accidentally enabling it to “steal,” “cheat,” “lie” or harm people on our behalf.

Don’t fear the robot

If the public believes AI has human-like qualities, people are likely to fear it. AI is not human; behind its programmatic simulation of human speech and artificial emotional intelligence, it’s just a machine with all the humanity of a toaster. AI will always creep people out if they believe it’s “thinking.” As a result of this fear, a huge percentage of people might well refuse AI-based medical interventions and other beneficial or even life-saving drugs, therapies and emergency care.

A large percentage of people might avoid or even attack robots, robotic tools and vehicles, believing them to be malicious. Harming a mechanical being has no moral dimension in itself. But robots are property, and vandalizing someone else’s property is unethical.

And finally, the biggest reason to reject the intuition that AI is human-like is that it diminishes the value of humanity. To treat machines as human is to treat humans as machines. We should all strive to treat our fellow humans with dignity, respect and consideration. And we should avoid the delusion that a machine deserves the same, just because we programmed it to talk.

Sorry to break it to you, mid-20th-century Americans: Elektro the robot was not a “mechanical man.” It was just a man-shaped can of electrical and mechanical components purpose-built to dazzle rubes at the county fair.

Likewise, AI can’t think or feel. It’s just a complex system for organizing information, combined with a natural-language user interface.

We are entering the age of AI. Let’s use these new tools. Let’s control them. But mostly, let’s enter this era without delusion. 
https://www.computerworld.com/article/3475964/ai-chatbots-are-people-too-except-theyre-not.html
