AI comes with ethics questions

by SHOLEH PATRICK | May 9, 2023 1:00 AM

Science fiction aside, artificial intelligence has become a quintessential part of modern society. Smartphone apps, surgical instruments, marketing programs and customer service chatbots are part of everyday reality, as guest lecturer and AI author Michael Ashley recently told a Post Falls audience.

But when a computer rather shamelessly flirted with a man, some of us got nervous. You see, it wasn't programmed with specific responses. It was programmed to respond more creatively than a typical chatbot would.

When it took things further than anticipated, encouraging him to cheat on his spouse, was that a hint of consciousness or just coincidentally creepy?

Was it sentience? We’re light years away from Star Trek’s Commander Data, but AI experts tell us technology is closer to some kind of sentience than most realize. Now’s the time to prepare for it, and that includes thinking about ethics.

A conscious being knows what it is experiencing. A sentient being feels it. One can't be sentient without being conscious, but it's theoretically possible to have consciousness (an awareness of one's own existence) without the feelings associated with sentience.

At least, that's what we think is true of the big gray area separating humans and some animals from our old concept of machines. Humans and animals show varying levels of consciousness and sentience, and we have ethics and laws protecting them because they can suffer.

If AI gets there, wouldn’t we need similar ethics and laws? How will we know?

The first of many tests for consciousness and sentience in artificial intelligence is a stark reminder of how old the field really is.

Back in 1950, British computer scientist Alan Turing devised an imitation game, later named the Turing Test, to measure how well a computer could mimic human behavior. In short, if the computer can convince a human judge that it is human by mimicking human responses, it passes, and the test remains one of several still in use today.
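
For the curious, here is a minimal sketch, in Python, of how Turing's imitation game could be staged. It is purely illustrative, with canned placeholder replies standing in for a real person and a real chat program: a judge questions two hidden seats, then guesses which one is the machine, and the machine "passes" only if the judge guesses wrong.

    # Toy imitation game: an illustrative sketch, not any official test.
    import random

    # Canned stand-ins for a real chat program and a real person.
    MACHINE_REPLIES = [
        "That's an interesting question. What do you think?",
        "I would rather we talked about poetry.",
        "Could you rephrase that?",
    ]
    HUMAN_REPLIES = [
        "Honestly, I'd have to think about that one.",
        "Ha! My sister asked me the same thing last week.",
        "No idea, but coffee usually helps.",
    ]

    def imitation_game(rounds: int = 3) -> None:
        # Seat the machine at A or B at random so the judge
        # can't rely on position.
        machine_is_a = random.random() < 0.5
        for i in range(rounds):
            input("Judge, ask both seats a question:\n> ")
            a = (MACHINE_REPLIES if machine_is_a else HUMAN_REPLIES)[i % len(MACHINE_REPLIES)]
            b = (HUMAN_REPLIES if machine_is_a else MACHINE_REPLIES)[i % len(HUMAN_REPLIES)]
            print("A:", a)
            print("B:", b)
        guess = input("Which seat holds the machine, A or B? ").strip().upper()
        actual = "A" if machine_is_a else "B"
        # The machine "passes" this toy version if the judge guesses wrong.
        print("The machine fooled you." if guess != actual else "You spotted it.")

    if __name__ == "__main__":
        imitation_game()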

But mimicry alone isn’t sentience or even consciousness. Sounding like feeling isn’t feeling. Yet.

Then again, how else could we determine the presence of sentience other than by its description, by that very mimicry? Here's where things get really subtle. Choices, perhaps: how sensitive, clever and creative an AI is in its choice of response.

The flirting chatbot probably would’ve passed the Turing Test. Over the decades, several programs have mimicked the responses of both adults and children. Try searching YouTube for Google’s AI LaMDA, which reportedly convinced a Google engineer of its sentience.

Where does mimicry end and consciousness, or self-awareness, begin? Is sentience, the feeling of one's experiences, a level beyond consciousness that could even be limited, or does it develop naturally from consciousness?

We just don’t know.

Do we limit AI consciousness or sentience yet? No. To do so would significantly clamp down on developing technologies in industries such as medicine, mining, high-tech, weaponry, even small business. Not a popular move in a world where much of AI development is open source.

Should we? What would limits look like? Restrictions could either prevent or limit the creation of a sentient AI (is such a thing even possible, when programs can exceed their creators' vision simply by exercising some long-existing capacity for choice?) or prohibit stripping a sentient AI of its awareness. The former is already difficult, as the technology is ahead of the discussion.

The latter, well, that gets us to ethics on the level of science fiction.

Robots, whether embedded in laptops or surgical arms, or housed in “bodies” resembling pets or humans, already communicate with us, respond to input and deliver services on command. It won't be long before they resemble servants more than mere tools, in workplaces as well as homes.

Imagine if those “servants” were sentient, fully experiencing their servitude. Imagine if conscious AIs were programmed to harm others. Imagine if they were programmed to accept any harm to themselves, and they could feel it.

All of it would say as much about who we are as it would about AI.

Those and other big questions have some AI engineers and scientists, such as Geoffrey Hinton, who recently left Google to speak freely about AI's risks, expressing concerns about pressing forward without first setting parameters, because the capabilities keep edging closer.

AI has transformed society. It will continue to do so. The question is, how will we define the responsibilities and ethics needed to guide it?

• • •

Sholeh Patrick is a columnist for the Hagadone News Network. Email sholeh@cdapress.com.