Experts issue a dire warning about AI and urge that limits be imposed
A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. With just 22 words, the statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so quickly that its threats have yet to be fully assessed, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.
This interview has been edited for length and clarity.
Interview Highlights
Does AI, if left unregulated, spell the end of civilization?
"We don't really know how to accurately communicate to AI systems what we want them to do. So imagine I want to teach a robot how to jump. So I say, "Hey, I'm going to give you a reward for every inch you get off the ground." Maybe the robot decides just to go grab a ladder and climb up it and it's accomplished the goal I set out for it. But in a way that's very different from what I wanted it to do. And that maybe has side effects on the world. Maybe it's scratched something with the ladder. Maybe I didn't want it touching the ladder in the first place. And if you swap out a ladder and a robot for self-driving cars or AI weapon systems or other things, that may take our statements very literally and do things very different from what we wanted.
Why would scientists have unleashed AI without considering the consequences?
There are huge upsides to AI if we can control it. But one of the reasons we put the statement out is that we feel the study of safety, the regulation of AI, and the mitigation of its harms, both short-term and long-term, have been understudied compared to the huge gains in capability that we've seen ... And we need time to catch up and resources to do so.
What are some of the harms already experienced because of AI technology?
A lot of them, unfortunately, as many things do, fall with a higher burden on minority populations. So, for example, facial recognition systems work more poorly on Black people and have led to false arrests. Misinformation has gotten amplified by these systems ... But it's a spectrum. And as these systems become more and more capable, the types of risks and the levels of those risks almost certainly are going to continue to increase.
AI is such a broad term. What kind of technology are we talking about?
AI is not just any one thing. It's really a set of technologies that allow us to get computers to do things for us, often by learning from data. This can be things as simple as doing elevator scheduling in a more efficient way, or ambulance dispatch, figuring out which ambulance to send based on a bunch of data we have about the current state of affairs in the city or about the patients.
It can go all the way to the other end of having extremely general agents. So something like ChatGPT, where it operates in the domain of language and you can do so many different things: you can write a short story for somebody, you can give them medical advice, you can generate code that could be used to hack, which brings up some of these dangers. And what many companies are interested in building is something called AGI, artificial general intelligence, which colloquially means an AI system that can do most or all of the tasks that a human can do, at least at a human level.