December 25, 2024

Professor Yoshua Bengio at the One Young World Summit in Montreal, Canada, on Friday, September 20, 2024

Renowned computer scientist Yoshua Bengio, a pioneer in artificial intelligence, has warned that the emerging technology could have negative impacts on society and called for more research to mitigate its risks.

Bengio, a professor at the University of Montreal and director of the Montreal Institute for Learning Algorithms, has received multiple awards for his work in deep learning, a subset of artificial intelligence that attempts to imitate the activity of the human brain in order to recognize patterns in complex data.

But he has concerns about the technology, warning that some people with “enormous power” may even want to see humans replaced by machines.

“It’s really important, going forward, that we think about what it means to have machines that are as smart as us in many ways, and what that means for society,” Bengio told CNBC’s Tania Bryer at the One Young World Summit in Montreal.

He said machines will soon have most of the cognitive abilities of humans. Artificial general intelligence (AGI) refers to AI technology designed to equal or exceed human intelligence.

“Intelligence gives power. So who controls that power?” he said. “Having a system that understands more than most people can be dangerous if it falls into the wrong hands, and it could create more instability at a geopolitical level, for example through terrorism.”

Bengio said a limited number of organizations and governments will have the ability to build powerful AI machines, and the larger the systems, the smarter they will become.

“You know, these machines cost billions of dollars to build and train, and very few organizations and countries can do that. That’s already the case,” he said.

“There will be a concentration of power: economic power, which might be bad for markets; political power, which might be bad for democracy; military power, which might be bad for the geopolitical stability of our planet. So, there are a lot of things that we need to look at carefully and start mitigating as quickly as possible.”

We don’t have a way to make sure these systems don’t hurt people or turn against people… We don’t know how to do that.

Yoshua Bengio

Director of the Montreal Institute for Learning Algorithms

Such an outcome could be decades away, he said. “But if it’s five years, we’re not ready … because we don’t have a way to make sure these systems don’t harm people or don’t turn against people … we don’t know how to do that,” he added.

Bengio said there are arguments suggesting that the way artificial intelligence machines are currently trained “will lead to systems that turn against humans.”

“Also, some people may want to abuse that power, and some people may be happy to see humanity replaced by machines. I mean, it’s a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails in place now,” he said.

Artificial Intelligence Guidance and Supervision

Bengio endorsed an open letter published in June, titled “A Right to Warn about Advanced Artificial Intelligence.” It was signed by current and former employees of OpenAI, the company behind the viral AI chatbot ChatGPT.

The letter warns of “serious risks” associated with the development of artificial intelligence and calls on scientists, policymakers and the public to provide guidance to mitigate these risks. In recent months, OpenAI has faced mounting safety concerns, and its “AGI Readiness” team was disbanded in October.

“The first thing the government needs to do is put regulations in place that force (companies) to register when they build these frontier systems, meaning the largest systems, which cost hundreds of millions of dollars to train,” Bengio told CNBC. “The government should know where they are, you know, the specifics of these systems.”

Bengio said that because artificial intelligence is developing so quickly, governments must be “a little creative” and develop legislation that can adapt to technological changes.

It is not too late to guide society and humanity in a positive and beneficial direction.

Yoshua Bengio

Director of the Montreal Institute for Learning Algorithms

The computer scientist said companies developing artificial intelligence must also be held accountable for their actions.

“Liability is also another tool that can force (companies) to behave well, because … if their money is involved, the fear of being sued will motivate them to take action to protect the public. If they know they can’t be sued, because right now it’s a gray area, then their behavior is not necessarily going to be good,” he said. “(Companies) are competing against each other, and, you know, they think whoever gets to AGI first is going to dominate. So it’s a race, and it’s a dangerous race.”

Bengio said the legislative process to ensure the safety of artificial intelligence would be similar to the way rules are written for other technologies such as airplanes or cars. “To reap the benefits of artificial intelligence, we have to regulate. We have to put guardrails in place. We have to have democratic oversight of how the technology develops,” he said.


“The hardest question”

The “hardest question,” Bengio said, is: “If we create entities that are smarter than us and have their own goals, what does that mean for humanity? Are we at risk?”

“These are very difficult and important questions, and we don’t have all the answers. We need more research and preventive measures to mitigate potential risks,” Bengio said.

He urged people to take action. “We have agency. It is not too late to steer society and humanity in a positive and beneficial direction,” he said. “But for that, we need enough people who understand both the benefits and the risks, and we need enough people working on solutions. Solutions can be technical, they can be political … policy, but we need enough effort moving in those directions now,” Bengio said.

—CNBC’s Hayden Field and Sam Shead contributed to this report.
