ANALYSIS: The Danger of Artificial Intelligence? Unintelligible Intelligence

by UYLESS BLACK/Guest contributor | May 12, 2023 1:00 AM

The subject of artificial intelligence, or AI, has become a favored topic in the news media. It is being lauded as well as belittled. The technology has been around for decades, but in a somewhat rudimentary form.

This writer coded a primitive AI software application for the Federal Reserve in the 1970s. It simulated the money supply of the nation. The software replaced several economists, accountants and clerks. That was not my intent, but that is one of the things AI does: It replaces humans.

Currently, with the advancement of high-capacity (fast-processing) computers and the growing sophistication of software, AI’s power is being realized. My ancient software would pale in comparison to what exists today.

AI is finding its way into the mainstream of our lives. Here are a few examples: Smart cars driven by AI. Human-like robots, with AI controlling “eyesight,” “hearing,” walking and grasping. Surveillance of just about anything. On and on. You name it, AI is slated to become part of it.

What is AI?

AI is very smart software programmed by humans to perform tasks in machines (robots, for example) that normally require human intelligence. Examples include speech generation, facial recognition, and correspondence that appears to have been written by a human brain.

So, what’s the problem?

At first glance, AI seems to be a godsend to us humans. Untold conveniences and enhancements to our lifestyle are within the capabilities of artificial intelligence.

True, but since the 1970s, this writer has had a deep concern about the power of computers, generally, and the power of AI, specifically.

My concern is the ability of AI to intrude on the cognitive privileges of the human mind. Of the danger of AI superseding the heretofore mastery of mind over machine. Of the danger AI poses to the vital and cherished intellectual faculties of that remarkable organ, our brain.

I have coined a term that identifies this danger: Unintelligible Intelligence. Here’s why.

Invisible Operations of AI: Unintelligible Intelligence (UI)

Since the inception of software programming in the mid-20th century, a programmer has written and controlled the code, the logic, that runs on a computer to produce required and designed output. It is the programmer's responsibility to make certain the software's output agrees with the output produced by humans, say, a person using a calculator.

Before the automated system is used, its output is analyzed and evaluated by soon-to-be users of the software. These customers work alongside the programming staff to ensure the software is correct, that it has no bugs (errors).

In the event of problems, such as miscommunications between programmers and users or errors on the part of the programmers, the code can be changed. In addition, while the system is in operation, the code can be upgraded to enhance value to users.

Programmers can do all this because (a) they wrote the code and, therefore, (b) they know what the code is doing.

AI: A mind of its own

In 2018, the company DeepMind unveiled AlphaZero, a chess-playing program that was given only the rules of the game, with no input from well-known and accepted chess strategies.

The software trained itself through self-play to become one of the strongest chess players in the world, defeating even the top conventional chess engines. The program accomplished this feat, unaided by humans, in less than 24 hours. It did not use classic chess strategies or practices that had been developed over hundreds of years.

Chess experts considered its moves "... counterintuitive, if not simply wrong." One of the company's founders said AI "... is no longer constrained by the limits of human knowledge."

Imagine: The programmers of this software could not trace how their own code arrived at its output. Unlike conventional software, the system produced non-traceable results. The creators of the code could not connect it to what it had done: defeat nearly every opponent it faced. Contrary to past practice in software engineering, they did not know what their code was doing.

Much of modern AI relies on a technique called deep learning. As one expert puts it, "Deep learning has its own dynamics, it does its own repair and its own organization, and it gives you the right results most of the time. But, when it doesn't, you don't have a clue about what went wrong and what should be fixed."

This aspect of AI is deeply troublesome. Consider the issue of humans attacking other humans with nuclear weapons. Left to its own devices, who can guess what AI might be capable of a few years from now? Will we have the wisdom to build AI platforms dealing with weapons of mass destruction to control their self-taught logic, which, like the chess-playing program, might exhibit independent, that is, unintelligible behavior?

As another example, genetic engineering technology is experiencing a revolution. It is changing rapidly with each day as geneticists discover and invent improved ways of modifying DNA. Will humans have the capability to control AI’s manipulation of genes as DNA alteration technology progresses? Will we be able to control the robot that is performing the genetic operation?

What happens when, as stated above, the software "gives you the right results most of the time," but on occasion does not? If programmers cannot control the software they themselves wrote, a launched missile might not do what the humans intended, and we might not be able to control a complex robot-guided genetic alteration.

Given the path that AI is on, as evidenced by the chess game described earlier, we would not be able to determine the code and logic that went into the missile launch or the gene cut-and-insert operation.

Stay aware and be skeptical

However artificial intelligence evolves, it will have a transformative effect on humanity. Whether we guide AI or AI guides us is, at this time, still up to us. The folktale of Chicken Little warning that "the sky is falling" turned out to be nothing but a falling acorn.

AI is no acorn.

As AI finds its way into the nooks and crannies of our lives, we humans should keep the Law of the Instrument in mind. It is exemplified by the child who, when given a hammer, looks for something to pound.

In the meantime, sleep tight, Chicken Little, and hello, Hal.


During his career, Uyless Black consulted and lectured in 16 countries on computer networks and the architecture of the internet. He lives in Coeur d’Alene with his wife, Holly, and their ferocious 3-pound watchdog, Bitzi.