ANALYSIS: Unintelligible intelligence
In an earlier article (CDA Press, May 12, 2023, and Blog.UylessBlack.com), I introduced artificial intelligence (AI) and my views of its dangers. I called AI unintelligible intelligence because AI experts often do not understand what AI operations are doing, not even the specialists who wrote the software for those operations.
I received comments criticizing my “unwarranted” alarm. Because AI runs on different kinds of computers (PCs, robots, etc.), one reader said that if AI were getting out of hand, we could just pull the plug on the computer.
The problem with this remedy is that humans will become dependent on an AI application to such an extent that they cannot simply stop it from doing its job, say, managing the electrical grid of a region of the country. Why? Because the humans managing that grid will gradually come to rely on the software’s inexplicable operations, including the functions and capabilities it adds on its own, beyond the original intent of its mission.
Does that sound far-fetched, even impossible? The earlier article on AI provided an example of chess-playing AI software training itself to become a grandmaster chess player, all without being given the rules of the game by its creators. It learned and adjusted from the data it had been fed: the moves and results of many previous chess matches. As the article explained, the experts who wrote the code (using sophisticated computer-based neural networks mimicking the human mind) “did not know what their code was doing.”
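For readers who want a picture of what “training itself from game records” can mean, here is a minimal sketch in the Python programming language. It is not the chess program described above, and every name in it (past_games, choose_move) is invented for illustration. The point is that the program is never told the rules; it simply tallies which moves followed which positions in past games and then prefers the moves that most often led to wins.

    from collections import defaultdict

    # Hypothetical records of past games: (position, move, result),
    # where result is 1 if the side making the move eventually won, else 0.
    past_games = [
        ("start", "e2e4", 1),
        ("start", "d2d4", 0),
        ("start", "e2e4", 1),
    ]

    # "Training": tally wins and tries for every move seen in every position.
    # Note that no rule of chess appears anywhere in this code.
    wins = defaultdict(int)
    tries = defaultdict(int)
    for position, move, result in past_games:
        wins[(position, move)] += result
        tries[(position, move)] += 1

    def choose_move(position):
        """Pick the move with the best observed win rate in this position."""
        seen = [(p, m) for (p, m) in tries if p == position]
        if not seen:
            return None  # the program has never encountered this position
        return max(seen, key=lambda pm: wins[pm] / tries[pm])[1]

    print(choose_move("start"))  # prints "e2e4", based purely on past results

A real system replaces this simple tally with a neural network trained on millions of games, which is exactly why its creators find it so hard to say what the resulting code is doing.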
That is a worrisome prospect. In the power grid example above, the AI could have trained itself in such a way that the grid’s components depend on it for their directions. Pull the plug on the AI-based computer? The power grid goes down, and the humans in charge do not know how to reactivate it without the AI system.
Again, far-fetched or impossible? Consider a “60 Minutes” report that aired on CBS on Oct. 8. The program featured Geoffrey Hinton, until recently a high-level employee at Google and often called the “Godfather of Artificial Intelligence.”
In the interview, Hinton warned that AI has the potential to one day take over from humanity. He said, “I'm not saying it will happen. … If we could stop them [AI systems] ever wanting to, that would be great, but it's not clear we can stop them ever wanting to.”
Industry leader Sam Altman is the CEO of OpenAI, the company that developed ChatGPT, a popular AI product. He has stated that artificial intelligence could “go quite wrong.”
As suggested in the earlier article, AI can go beyond its programmers and write its own computer code, say, code that allows it to learn chess-playing strategies or to control an electrical grid.
Hinton said AI systems will eventually have self-awareness and consciousness, and reason better than people can. He added, “I think we're moving into a period when, for the first time ever, we may have things more intelligent than us.”
In an April interview with “60 Minutes,” Google CEO Sundar Pichai said the company was releasing its AI advancements in a responsible way. This writer questions how Google can do so if its software engineers cannot control how AI software builds its own behavior.
It is claimed that control can be achieved by “training” the AI system with selected input (massive amounts of data) that restricts its ability to ad-lib its behavior. Yet as discussed above and in many studies, no one, not even industry leaders such as Hinton, Altman and Pichai, knows whether AI in the near future can be controlled by its human creators.
Why? Because AI goes beyond the data it has been “fed.” It makes millions of comparisons of the data elements (in fractions of a second) and stores (remembers) cogent associations that are beyond the capability of mere mortals’ minds. As a result of these associations, AI can adjust its behavior.
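To make “storing associations” concrete, here is another small, hypothetical Python sketch. The names (records, co_counts, associate) and the sample data are invented for illustration and describe no real product. The program simply counts which items appear together in the data it is fed, and those counts, rather than any rule a programmer wrote, later drive what it reports.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical input the system has been "fed": each record is a set of
    # items observed together (events on a power grid, words in a report, etc.).
    records = [
        {"heat wave", "high demand", "voltage dip"},
        {"heat wave", "high demand"},
        {"storm", "line fault", "voltage dip"},
    ]

    # Compare the data elements pair by pair and remember how often each
    # pair occurs together. No programmer wrote a rule linking heat waves
    # to high demand; the association emerges from the counting itself.
    co_counts = defaultdict(int)
    for record in records:
        for a, b in combinations(sorted(record), 2):
            co_counts[(a, b)] += 1

    def associate(item):
        """List the items most strongly associated with `item` so far."""
        pairs = sorted((p for p in co_counts if item in p),
                       key=lambda p: co_counts[p], reverse=True)
        return [other for pair in pairs for other in pair if other != item]

    print(associate("heat wave"))  # ['high demand', 'voltage dip']

A commercial system does the same thing with billions of data elements and far subtler statistics, which is why even its builders cannot enumerate the associations it has stored or predict how it will adjust its behavior.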
This writer has reviewed and studied many sources of information on AI. The experts are unanimous in warning that, as stated in the “60 Minutes” program, “Society needs to adapt quickly and come up with regulations for AI in the economy, along with laws to punish abuse.”
This requires a different approach to making laws and overseeing them: regulating something that is not completely understood. In addition, governments are not responding quickly, and often do not even know how to respond. A governmental AI guru and minions? Congressional action? Laws…of what kind?
How to punish abusers and force adherence to rules/laws about something that no one … no one … fully understands?
International laws? Every advanced country (China and the U.S., as two examples) knows AI is key to its future in commerce and in the military. How can AI be “corralled” when a country’s well-being might very well rest on its leading the world in AI technology?
Meanwhile, AI development, largely driven by private industry with minimal governmental guidance and oversight, is breaking through any happenstance, improvised corral that may have been set up by the Googles, Apples and Microsofts of the world, including their counterpart organizations in China and Russia.
The AI vendors — hundreds of them — while admitting governmental corrals are vitally needed, are proceeding at a furious pace to build their own proprietary commercial corrals.
During his career, Uyless Black consulted and lectured in 16 countries on computer networks and the architecture of the Internet. He lives in Coeur d’Alene with his wife, Holly, and their ferocious 3-pound watchdog, Bitzi.