
Reining in AI, top-down

by SHOLEH PATRICK | December 5, 2023 1:00 AM

Technology is always a mixed blessing. Develop it unchecked by a sense of ethics and, we’ve learned, it costs too much in human health, equanimity or safety.

Typically, that elusive balance between benefit and harm tends to fly under the radar until after someone, or a lot of someones, gets hurt. But AI is different. AI scares us.

When I say it scares us, I’m not just talking about Hollywood-style paranoia about a robot-ruled world. We’ve already felt the effects of unchecked use of artificial intelligence by both legitimate and illegitimate players, from annoyingly frequent and surprisingly personal ad pop-ups to election manipulation, with love from Russia.

We ain’t seen nothin’ yet. Shortly before President Biden signed a sweeping, 36-page Executive Order 14110 on AI, his staff showed him an AI-generated video they made of him saying things he didn’t say. With a recording of just a few words you actually did say, the right AI software in the wrong hands can extrapolate the rest, making it look like it’s really you talking, writing, authorizing.

Now imagine that on a national and global scale and you can envision what Homeland Security, as well as world leaders with an eye on red buttons, might be worried about.

AI does a lot of good, obviously. We already rely on it for medical diagnosis, tools and procedures; financial and investment information accuracy; facial recognition software and accessibility tools; and a growing breadth of commercial applications (many in phones, TVs, and iPads).

We wouldn’t want safeguards against its harmful effects to get in the way of the good it can do, or to suppress basic choice and invention. But let it go unregulated, and the potential harm is unlike that of any technology that’s come before.

So far, what limitation exists has been mostly voluntary. Fifteen leading tech companies agreed to develop “safe, secure and trustworthy” AI. But as we know, bad actors aren’t constrained by mere ethics, and even innocently developed technology can be adapted to bad ends.

With that in mind, the EO requires:

Sharing critical info: Developers must safety-test and analyze AI models that pose “a serious risk to national security, national economic security, or national public health and safety (and) notify the federal government when training the model.” 

Establishing safety standards: The National Institute of Standards and Technology is working on testing and safety standards. The Departments of Homeland Security and Energy will apply them to critical infrastructure and other risk areas. Simple as it sounds, this is a groundbreaking new step in American security.

Fraud protection and cybersecurity: Back to voice extrapolation and deep fakes, new federal standards are being established to detect and authenticate AI-generated content, with help from the Department of Commerce for use by both government and the private sector — a kind of updated “watermark” anyone can see for authentication. Cloud/internet providers are also required to tell the government about foreign customers. As the president put it, so “when your loved ones hear you on the phone, they’ll know it’s you.”

A privacy call to Congress: The EO asks Congress to pass bipartisan data privacy laws to better protect both adults and kids, ironically using AI to clamp down on privacy attacks and breaches by AI.

Civil rights protection: Some uses of AI have led to deeper discrimination and bias (with so much of ourselves out there digitally, being anonymous has become impossible). The White House published a “Blueprint for an AI Bill of Rights” including an order for federal agencies and the justice system to combat such algorithmic discrimination, whether or not intentional. Guidance to landlords and federal contractors will follow suit.

Consumer protections: Advancing responsible use of AI in health care while establishing AI safety protocols. How far this will extend in other areas is yet to be seen; the U.S. Chamber of Commerce pointed to a need to balance consumer rights against those of businesses.

Promoting innovation: The U.S. already leads in AI startups and capital, but this industry is fast and competitive. The EO launched a National AI Research Resource for researchers and students, and expands access and grants in key areas such as health care and climate change.

The order also calls for more high-tech government hiring, though salaries eclipsed by the private sector will make that challenging.

Two days after this order was signed, delegates from 28 nations, including the U.S. and China, agreed at a U.K. summit to work together to contain the “catastrophic” risks posed by runaway AI. Back in July, the UN Security Council — again attended by AI giants U.S. and China — expressed similar concerns. With any luck, enough nations will follow through and we can share more of the same reality again.

To read the rest of this and other executive orders, see www.federalregister.gov/presidential-documents/executive-orders.

• • •

Sholeh Patrick, J.D., is a columnist for the Hagadone News Network. Email sholeh@cdapress.com.