This transcription was created using speech recognition software. Although it has been checked by the transcribers, it may contain errors. Please watch the episode audio before quoting this transcript, and if you have any questions, please email transcripts@nytimes.com.
From The New York Times, I'm Sabrina Tavernise. And this is The Daily.
[MUSIC]
As the world begins to experience the power of AI, there has been a debate about how to mitigate its risks. One of the sharpest and most urgent warnings came from the man who helped invent this technology. Today my colleague Cade Metz talks to Geoffrey Hinton, who is considered by many to be the godfather of AI.
It's Tuesday, May 30.
Cade, welcome to the show.
Happy to be here.
A few weeks ago, you interviewed Geoffrey Hinton, the man considered by many to be the godfather of artificial intelligence. Aside from the obvious fact that AI is taking control of all conversations all the time, why talk to Geoff now?
I've known Geoff for a long time. I wrote a book about the 50-year development of the ideas that are now driving chatbots like ChatGPT and Google Bard. And you could argue that he is the most important person in the development of artificial intelligence over the last 50 years. And in the midst of all this chatbot news, he emailed me and said he was leaving Google and wanted to talk, to discuss where this technology is going, including some serious concerns.
Who better to talk to than the godfather of AI?
Exactly. So of course I got on a plane and flew to Toronto -
- cade metz
Geoff. Come in. Good to see you.
- geoffrey hinton
Good to see you.
- to sit at his dining table and talk.
- geoffrey hinton
Would you like a cup of coffee, a cup of tea, beer, whiskey?
- cade metz
Well, if you made the coffee, I'll have the coffee.
Geoff is a 75-year-old Brit, educated at Cambridge, who now lives in Toronto. He has worked there since the late 1980s as a university professor.
- cade metz
My question is, somewhere along the line, people started calling you the godfather of AI.
- geoffrey hinton
And I'm not sure it was meant as a compliment.
- cade metz
And do researchers come to your door, kneel before you, and kiss your hand? How does that work?
- geoffrey hinton
No. No, they don't.
- cade metz
They don't?
- geoffrey hinton
No. And I never ask them for favors.
- cade metz
[LAUGHTER]
So how did Geoff become the godfather of artificial intelligence? Where does his story begin?
It starts in high school. He grew up the son of a scientist, and he always tells a story about a friend describing a theory of how the brain works.
- geoffrey hinton
And he wrote about holograms. He was interested in the idea that memory in the brain could be like a hologram.
This friend was talking about how the brain stores memories, and how he believed it stores them the way a hologram does. A hologram is not stored in one place. It is broken into small pieces and spread across a piece of film. And this friend believed the brain stores memories the same way: it breaks them into pieces and spreads them across its network of neurons.
It's actually very beautiful.
It is.
- geoffrey hinton
And we talked about it. And ever since then I've been interested in how the brain works.
This piqued Geoff's interest. Since then, he has spent his entire life trying to understand how the brain works.
So how does Geoff begin to answer the question of how the brain works?
So he goes to Cambridge and studies physiology, looking for answers from his professors: can you tell me how the brain works? And his physiology professors can't tell him.
- geoffrey hinton
And then I switched to philosophy. And then I switched to psychology, hoping psychology would tell me more about the mind. And it didn't.
And no one can tell him how the brain works.
- cade metz
A layman might wonder: we don't understand how the brain works?
- geoffrey hinton
No, no. We understand a few things about how it works. We understand that when you think or perceive, there are neurons, brain cells, and those brain cells fire. They send a ping, a signal, down the axon to other brain cells.
We still don't have detailed information about how the neurons in our brain communicate with each other when we think and learn.
- geoffrey hinton
So what you need to know is how the brain decides the strengths of the connections between neurons. If you could figure that out, you would understand how the brain works. And we haven't figured it out yet.
He then moves into a relatively new field called artificial intelligence.
[MUSIC]
The field of artificial intelligence was created in the late 1950s by a small group of scientists in the United States. Their goal was to create a machine that could do anything the human brain could do. And at first, many of them thought they could build machines that would act like the brain's network of neurons, what they called artificial neural networks.
But after 10 years of work, progress was so slow that they decided it was too difficult to build a machine that worked like neurons in the brain. And they gave up on that idea.
So they thought very differently about artificial intelligence. They embraced what they called symbolic artificial intelligence.
You'd take everything you and I know about the world and put it into a list of rules: you can't be in two places at once, or if you're holding a cup of coffee, you hold it open end up. The idea was that you would write out all these rules step by step, line of code by line of code, and then run them on the machine. And that would give it the power that you and I have in our own brains.
So you basically tell the computer all the rules that govern reality, and the computer makes decisions based on those rules.
Right.
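To make that rules-based approach concrete, here is a minimal sketch of symbolic AI: hand-written facts and if-then rules, applied by a simple forward-chaining loop. The facts, predicates, and the coffee-cup example are invented for illustration; they are not code from anyone in this episode.

```python
# A toy symbolic-AI system: explicit facts plus hand-written rules.
facts = {("holding", "alice", "coffee_cup")}

# Each rule: if the premise matches a known fact, add the conclusion.
rules = [
    (("holding", "?x", "coffee_cup"), ("keeps_open_end_up", "?x")),
    (("keeps_open_end_up", "?x"), ("avoids_spilling", "?x")),
]

def match(premise, fact):
    """Return variable bindings if a fact matches a rule's premise."""
    if len(premise) != len(fact):
        return None
    bindings = {}
    for p, f in zip(premise, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for fact in list(facts):
            bindings = match(premise, fact)
            if bindings is not None:
                new_fact = tuple(bindings.get(t, t) for t in conclusion)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

# Derives ("keeps_open_end_up", "alice") and ("avoids_spilling", "alice").
print(facts)
```

The sketch also shows the limitation Hinton saw: every fact and every rule has to be written by hand.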
But then Geoff Hinton shows up in 1972, as a graduate student in Edinburgh. And he says: wait, wait, wait. That's never going to happen.
[LAUGHS] That's a lot of rules.
You will never have the time, the patience, or the manpower to write down all these rules and put them into a machine. No matter how long you work at it, he says, it's not going to happen. And by the way, the human brain doesn't work that way. That's not how we learn.
So he returns to the old idea of a neural network, the one other AI researchers had rejected. And he says: this is how we should build machines that think. We should let them learn from the world the way people learn.
So instead of giving the computer a set of rules the way the other guys did, you give it information. And the idea is that the computer gradually figures out how to make sense of it all, just like the human brain does.
You would give it examples of what happens in the world. It would look at those examples, find patterns in what was happening, and draw conclusions from those patterns.
So Geoff embraces an idea that had been largely rejected by much of the AI community. Did he have any evidence that his approach would actually work?
- geoffrey hinton
The only reason to think it would work was that the brain works. And that was the main reason to believe there was some hope.
His only evidence was that this was essentially how the human brain worked.
- geoffrey hinton
This was largely dismissed as a crazy idea that would never work.
At the time, many of his peers thought he was foolish for trying.
- cade metz
How did you feel when most of your colleagues told you that you were working on a crazy idea that would never work?
- geoffrey hinton
When I went to school, when I was 9 or 10 - I come from an atheist family, and I went to a Christian school. And everyone said, of course God exists. And I said, no - where is he? So I was very used to being an outsider and believing something that was obviously true that no one else believed. And I think it was really good training.
Okay, so what happened next?
Then, after graduating, Geoff moves to the United States. He becomes a postdoctoral researcher at the University of California. And he starts working on an algorithm, a piece of math that could turn his idea into reality.
And what exactly does this algorithm do?
Geoff basically models the algorithm on the human brain. Remember, the brain is a network of neurons that exchange signals. That's how we learn. That's how we see. That's how we hear. What Geoff did, and it was revolutionary, was recreate this system on a computer. He created a network of digital neurons that exchange information, much like the neurons in the brain.
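For a sense of what a "network of digital neurons" means in practice, here is a minimal sketch: a tiny two-layer network that adjusts its connection strengths to learn a toy task. The layer sizes, learning rate, and the XOR task are illustrative choices in a modern textbook style, not Hinton's original code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training examples: inputs and the answers we want learned (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of "digital neurons"; the weights are connection strengths.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: signals flow through the connections.
    h = sigmoid(X @ W1 + b1)    # hidden neurons "fire"
    out = sigmoid(h @ W2 + b2)  # output neuron

    # Backward pass: nudge each connection strength to shrink the error.
    g_out = (out - y) * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2))  # after training: close to [[0], [1], [1], [0]]
```

Nothing tells the network the rule; it finds it by repeatedly adjusting its connection strengths.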
So the question he wanted to answer all those years ago, how the brain works, he answered for computers, if not for humans.
Right. He built a system that allowed computers to learn on their own. In the 1980s, this type of system could only learn in small ways. It couldn't learn in the complicated ways that could really change our world. But fast forward a good three decades, and Geoff and two of his students built a system that really opened people's eyes to the possibilities of this kind of technology.
He and two of his students at the University of Toronto built a system that identifies objects in photographs. The classic example is a cat. They took thousands of pictures of cats and fed them into a neural network. By analyzing those photos, the system learned to recognize a cat.
It identified patterns in those photos that define what a cat looks like: the line of the whiskers, the curve of the tail. Over time, by analyzing all these photos, the system learned to recognize a cat in a photo it had never seen before. And they could do this not only with cats but with other objects: flowers, cars. They built a system that could identify objects with an accuracy no one had thought possible.
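For a sense of what that looks like today, here is a hedged sketch using a pretrained network from torchvision (a modern stand-in, not the 2012 system, which was called AlexNet). It assumes torch and torchvision are installed, and "cat.jpg" is a placeholder path.

```python
import torch
from PIL import Image
from torchvision import models

# Load a pretrained image-recognition network and its preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# Turn the photo into the tensor the network expects (a batch of one).
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)

with torch.no_grad():
    scores = model(image)

# The highest-scoring category is the network's best guess.
label = weights.meta["categories"][scores.argmax().item()]
print(label)  # e.g. "tabby" for a cat photo
```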
So this is image recognition, right? This is presumably why my phone can search my photos and return entire albums of just my husband, or just my dog, or beach photos.
Right. So in 2012, Geoff and his students published a research paper describing this technology and what it could do.
- cade metz
And what becomes of this idea, writ large, over the next decade?
- geoffrey hinton
It took off.
[MUSIC]
It set off a race for this technology across the tech industry.
- geoffrey hinton
So we decided that, since there were big companies interested in us, we were going to auction ourselves.
There was a literal auction for Geoff, his two students, and their services.
- geoffrey hinton
We would sell the intellectual property plus the three of us.
In the auction were Google; Microsoft, another giant of the tech world; and Baidu, often called the Google of China.
Over two days, they bid for the services of Geoff and his two students, until Google paid $44 million for three people who had never worked in the tech industry.
- geoffrey hinton
And it worked out well for us.
- cade metz
[LAUGHTER]
So what does Geoff do at Google after this bidding war?
He works on increasingly powerful neural networks. And you see this technology making its way into all kinds of products, not just at Google but across the entire industry.
- geoffrey hinton
All the big companies, Facebook, Microsoft, Amazon, and the Chinese companies, built big teams in this field. And it got used everywhere.
This is what drives Siri and other digital assistants. When you speak a command to your phone, a neural network lets it recognize what you're saying. When you use Google Translate, it uses a neural network too. Many of the things we use today rely on neural networks to work.
So Geoff's idea really is changing the world, through things we use all the time in our daily lives without even thinking about it.
Absolutely. But this idea, at Google and elsewhere, was also being used in ways that made Geoff uncomfortable. The prime example is what was called Project Maven. Google took a contract with the Department of Defense and applied the idea to identifying objects in drone footage.
Mm-hmm.
If you can identify the objects in the drone footage, you can build a tracking system. If you combine this technology with a weapon, you have an autonomous weapon. This caused concern among Google employees at the time.
- geoffrey hinton
I was also nervous. But I was a vice president at the time, so I was sort of an executive at Google. And so, instead of publicly criticizing the company, I did things behind the scenes.
Geoff never wanted his work used for military purposes. He raised his concerns with Sergey Brin, one of Google's founders, and eventually Google withdrew from the project. And Geoff stayed at the company.
- geoffrey hinton
Maybe I should have made it public, but I thought no - somehow it's not polite to bite the hand that feeds you, even if it's a business.
But around the same time, the industry started working on a new application of the technology, one that worried him even more. It started applying neural networks to what we now call chatbots.
Basically, companies like Google started feeding massive amounts of text into neural networks: Wikipedia articles, chat logs, digital books. And these systems started learning to produce language the way you and I produce language.
Like the autocomplete in my email.
Exactly, but on a much larger scale.
As they added more and more digital text to these systems, they learned to write like humans. This is how chatbots such as ChatGPT and Bard were born.
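The core mechanism can be shown with a deliberately tiny toy: count which word tends to follow which, then extend a prompt one likely word at a time. Real chatbots use huge neural networks trained on vast amounts of text, not this counting trick; the corpus here is made up.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word, length=4):
    """Greedily extend a prompt, one most-likely next word at a time."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # "the cat sat on the"
```

Scale the same next-word idea up by many orders of magnitude, with a neural network instead of a counting table, and you get something that can write whole paragraphs.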
And what did Geoff make of all this? Why was he so worried?
- geoffrey hinton
What has happened to me over the last year is that I've completely changed my mind about whether these things are just inadequate attempts at modeling what goes on in the brain. That's how they started.
In some ways, he still believes these systems are not as powerful as the human brain. And they're not.
- geoffrey hinton
They're still not very good at modeling what's going on in the brain. They do something different, and better.
But in other ways, he realizes that they are much more powerful.
More powerful how, exactly?
Geoff thinks about it like this.
- geoffrey hinton
If you're learning something complicated, like a new piece of physics, and you want to explain it to me - you know, our brains, all our brains, are a little different. And that will take time and be an inefficient process.
You and I have a brain that can learn a certain amount of information. And after reading this information, I can pass it on to you. But it's a slow process.
- geoffrey hinton
Imagine you had a million people, and when one of them learned something, all the others automatically knew it. That's a big advantage. And to do that, you need to be digital.
These neural networks, Geoff points out, can be connected. One network that learns some information can be connected to all sorts of other networks that have learned from other parts of the internet, and those can be combined with still other networks that have learned from other parts.
- geoffrey hinton
So these digital agents - once one of them learns, so do all the others.
Everyone can learn together and exchange what they have learned with each other in an instant.
- geoffrey hinton
This means multiple copies of a digital agent could read the entire internet in just a month. We cannot do that.
This allows them to learn from the whole internet. You and I can't do that individually, and we can't do it collectively either. Even if each of us learned one piece of the internet, we couldn't easily share what we'd learned. But machines can. Machines can work in ways that humans cannot.
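A sketch of why that is: a neural network's knowledge lives in its numeric connection strengths, so whatever one copy learns can be duplicated to every other copy instantly. The class name, fleet size, and "learning" here are invented stand-ins for illustration.

```python
import numpy as np

class TinyNet:
    """A stand-in for a neural network: its knowledge is just its weights."""
    def __init__(self):
        self.weights = np.zeros((4, 4))

    def learn(self, update):
        self.weights += update  # stand-in for gradient-based learning

# A fleet of identical digital agents.
fleet = [TinyNet() for _ in range(1000)]

# One agent learns something from its slice of the data...
fleet[0].learn(np.full((4, 4), 0.1))

# ...and what it learned is broadcast to every other agent at once.
for agent in fleet[1:]:
    agent.weights = fleet[0].weights.copy()

# Every copy now "knows" what the first one learned. Humans can't do this.
assert all(np.array_equal(a.weights, fleet[0].weights) for a in fleet)
```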
[OMINOUS MUSIC]
What does all this mean for Geoff?
In a way, he sees it as the pinnacle of his 50 years of work. He always assumed that if you fed more data into these systems, they would learn more and more. But he never imagined they would learn so much, and become so powerful, so quickly.
- geoffrey hinton
Look at how it was five years ago and how it is now. Take that difference and propagate it forward. And that's scary.
We'll be right back.
OK. So what exactly is Geoff afraid of, now that he's realized AI has this turbocharged ability to learn?
He worries about many different things. At the lower end of the scale are hallucination and bias. Scientists talk about these systems hallucinating, meaning they make things up. When you ask a chatbot for a fact, it doesn't always tell you the truth. And it can respond in ways that are biased against women and people of color.
But as Geoff sees it, these problems are a byproduct of the way chatbots mimic human behavior. We make things up. We can be biased. And he believes these problems can be sorted out.
- geoffrey hinton
So no - I mean, bias is a terrible problem, but it's a problem people have too. And it's easier to fix in a neural network than in a person.
Where he starts to say these systems get scary is on the issue of disinformation.
- geoffrey hinton
I see this as a big problem - not knowing what is true anymore.
These are systems that let organizations, nation-states, and other bad actors spread disinformation with a scale and effectiveness that weren't possible in the past.
- geoffrey hinton
These chatbots make it easy to manipulate people, and to create very good fake videos.
They can also create photorealistic images and videos.
Deep fakes.
Right.
- geoffrey hinton
But they improve quickly.
He, like many people, fears the internet will soon be flooded with fake text, fake images, and fake videos, to the point where we won't be able to trust anything we see online. So that's a short-term problem. Then there's a medium-term problem, which is job loss. Today, these systems often complement workers. But he worries that as they become more powerful, they will replace jobs in large numbers.
And what are some examples?
- geoffrey hinton
Certainly one place where they can do a lot of the grunt work, and maybe more, is computer programming.
It's no surprise that Geoff, a computer scientist, points to the example of computer programmers. These are systems that can write computer programs themselves.
- geoffrey hinton
So with computer programming, maybe you don't need as many programmers anymore. Because you can just tell one of these chatbots what you want the program to do.
These systems aren't perfect at this today. Programmers tend to take what they produce and fold that code into larger programs. But as time goes on, these systems will get better and better at the work people do today.
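As a concrete illustration of "telling a chatbot what you want the program to do," here is a hedged sketch using the OpenAI Python client. It assumes the openai package is installed and an OPENAI_API_KEY is set; the model name and prompt are illustrative, not from the episode.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Write a Python function that removes duplicates "
                   "from a list while preserving order.",
    }],
)

# The reply is code that a programmer would still review and fold
# into a larger program, as described above.
print(response.choices[0].message.content)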
And you're talking about professions that, until now, weren't seen as vulnerable to automation. Right?
Exactly. For years, the assumption was that artificial intelligence would replace blue-collar work: that robots, physical robots, would take over manufacturing and sorting tasks in warehouses. But what we're seeing is the rise of technology that can replace white-collar workers, people working in offices.
Mm-hmm.
So that's the medium term. Then there are longer-term concerns. As these systems get more powerful, Geoff grows increasingly worried about how this technology will be used on the battlefield.
- geoffrey hinton
The US Department of Defense would love to create robot soldiers. And robot soldiers would be quite scary.
He calls them, matter-of-factly, robot soldiers.
Like basically soldiers who are robots?
Yes, actually soldiers who are robots.
- cade metz
And the connection between robot soldiers and his idea is pretty direct. He worked on computer vision. If you have computer vision, you can give it to a robot, and the robot recognizes what's happening in the world around it. And if it can identify what's going on, it can act on those things.
- geoffrey hinton
Yes. And you can make them rugged, so you can have things that can go over rough terrain and shoot people. The really worrying thing about robot soldiers is that right now, if a big country wants to invade a small country, it has to worry about how many marines will die.
But if they send in robot soldiers, then instead of worrying about how many marines will die, the people who fund the politicians will say: great, you're using up those expensive weapons, and we'll build more. The military-industrial complex would love robot soldiers.
He's talking about how this technology could lower the threshold for war, making it easier for nation-states to go to war.
So it's like drones. The people doing the killing sit in an office with a remote control, far away from the people dying.
Well, it's actually a step beyond that. It's not people operating the machines. More and more, the machines make their own decisions. That's what Geoff is worried about.
- geoffrey hinton
And then there's the kind of existential nightmare, where these things get smarter than we are and just take over.
He worries that if we give machines specific goals, asking them to do something for us, then in trying to achieve those goals they will do things we don't expect.
So he worries about unintended consequences.
Unintended consequences. And here we begin to venture into the realm of science fiction.
- archival recording
Hello, HAL. Do you read me? Do you read me, HAL?
We've seen this happen in books and movies for decades.
- archival recording
Affirmative, Dave. I read you.
If you've seen the great Stanley Kubrick movie "2001" -
Mm-hmm.
- archival recording
Open the pod bay doors, HAL.
I'm sorry, Dave. I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it.
We've watched the HAL 9000 turn on the people who built it.
- archival recording
I know that you and Frank were planning to disconnect me. Where did you get that idea, HAL?
Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
This is the scenario, believe it or not, that worries Geoff. And he's not the only one.
Literally, the robots taking over.
Exactly.
- geoffrey hinton
If you give one of these superintelligent agents a goal, it will quickly realize that a good subgoal for almost any goal is to gain more power.
Whether these technologies are used on the battlefield, in the office or in a computer data center, Geoff is concerned that people will increasingly hand over control to these systems.
- geoffrey hinton
We like being in control. It's a very sensible goal, because when you're in control, you can get more done. But these things will want control too, for the same reason: simply to get more done. And that's a terrifying direction.
So this seems far-fetched, to be honest. But OK, let's play it out as if it's not. What would this doomsday scenario look like? Paint me a picture.
Think of it in simple terms. Say you ask a system to make money for you, which, by the way, people are starting to do. Can you use ChatGPT to make money in the stock market? When people do this, think of all the ways a system might make money. And then think of all the ways that could go wrong. That's what he's talking about.
Remember, these are machines. Machines are psychopaths. They have no emotions. They have no moral compass. They do what you ask them to do. Make money for us? OK, I'll make money for you. Maybe it hacks into a computer system to steal that money.
If it holds Central African oil futures, maybe it foments a revolution to drive up the price of those futures and make money that way. That's the scenario Geoff, and many others I've talked to, describe. But at this point I have to say: as of today, this is hypothetical. A system like ChatGPT is not going to destroy humanity.
[LAUGHTER]
Phew.
Good.
And when you talk to a lot of experts in the field about this, they get annoyed when you bring it up. They point out that it isn't possible today. And I actually pushed Geoff on this.
- cade metz
But how do you square this existential risk with what we have today? Today you have GPT-4, and it does a lot of things you don't necessarily expect. But it has no way to write and run its own computer programs. It doesn't have everything it needs.
- geoffrey hinton
OK, but suppose you give it a high-level goal, like being really good at summarizing text or something. And then it figures out: OK, to be really good at this, I need to learn more. How do I learn more? Well, if only I could get more hardware and run more copies of myself.
- cade metz
But that's not how it works today. Right? It requires someone to give it all the hardware it needs. It can't do that today, because it doesn't have access to the hardware. And it can't replicate itself.
- geoffrey hinton
But suppose it's connected to the internet. Suppose it can go into a data center and change what's going on there.
- cade metz
Right. But that's not possible today.
- geoffrey hinton
I don't think it will take long. And the reason I think it won't take long is that you make these systems more effective by giving them that ability. And there will be bad actors who just want to make them more effective.
- cade metz
So basically you're saying that because people are imperfect and want to push these things forward, they will push them forward in ways that take them into these dangerous areas.
- geoffrey hinton
Yes.
So basically he's arguing that this is a Pandora's box: it's been opened, and people being people, they're going to want to use what's inside. But I guess I wonder, I mean, what do you think here: how much weight should we give his warnings? Yes, he has a certain level of authority, the godfather of AI and so on. But he's been surprised by this technology's evolution before, and he could be wrong.
Right. There are reasons to trust Geoff, and there are reasons not to. About five years ago, he predicted that radiologists would soon be obsolete, and that hasn't happened. You can't just take everything he says at face value. I want to emphasize that.
But you have to remember, this is someone who lives in the future. Since his twenties, he has lived in the future. Back then, he saw where these systems were headed, and he was right. Now he's looking into the future again, at where these systems are headed. And he's afraid they're going somewhere we don't want them to go.
Cade, what steps does Geoff propose we take to make sure these doomsday scenarios never happen?
Well, he doesn't believe that people will just stop developing this technology.
- geoffrey hinton
If you look at what the financial commentators are saying, they say Google has fallen behind Microsoft. Don't buy Google stock.
This technology is being developed by some of the largest companies in the world, publicly traded companies built to make money. And now they're competing.
- geoffrey hinton
Basically, Google is a for-profit company. I no longer work for Google, so I can say that now. As a company, it has to compete.
And he sees this competition happening not only among companies but among governments in different parts of the world.
So in a way it's like nuclear weapons. Right? We knew they could destroy the world, but we got into an arms race to build them anyway.
Absolutely. He uses that analogy. Others in the field use that analogy. It's a powerful technology.
- geoffrey hinton
So I think there's no chance - I shouldn't say zero, but negligible - a negligible chance that people will agree not to develop this further.
What he wants is for us to strike the right balance between the good this technology can do and the harm.
- geoffrey hinton
The best hope is that you get the best scientists to think very seriously about whether we can control these things, and if so, how. The best minds should be working on this. And that's why I'm doing this podcast.
So, Cade, you've laid out a pretty complicated picture here. On the one hand, this is a technology that works completely differently, and perhaps far better, than one of its most important inventors ever envisioned. On the other hand, it's a technology whose startling, sudden progress has left that same inventor, and others, anxious about the future. Did you ask Geoff whether, looking back, he would have done anything differently?
I asked him this question several times.
- cade metz
Does at least part of you, or maybe all of you, regret what you've done? I mean, you could argue that you are the most important person in the development of this idea over the last 50 years. And now you're saying this idea could be a serious problem for the planet.
- geoffrey hinton
For our species.
- cade metz
For our species.
- geoffrey hinton
Yes. Various people had been saying this for a while, and I didn't believe it, because I thought it was a long way off. What happened to me was realizing that there can be a big difference between this kind of intelligence and biological intelligence.
- cade metz
Right.
- geoffrey hinton
This made me completely change my mind.
It's a complicated situation he's in.
- cade metz
So, looking back, do you regret your role in all of this?
- geoffrey hinton
So the question is, looking back 50 years, would I have done anything differently? Given the choice I faced 50 years ago, I think it was a wise choice. It's only very recently that it has gone somewhere I didn't anticipate. And I do wish it weren't going where it's going, and that I hadn't had a part in it. But Bertrand Russell made a distinction between wise decisions and fortunate decisions.
He's paraphrasing the British philosopher Bertrand Russell -
- geoffrey hinton
You can make a wise decision that turns out to be unfortunate.
- saying that you can make a wise decision and still end up unlucky. And that's basically how he feels.
- geoffrey hinton
And I think it was a wise decision to try to figure out how the brain works. Part of my motivation was to make human society more sensible. But it may turn out to have been unfortunate.
It reminds me, Cade, of Andrei Sakharov, the Soviet scientist who helped invent the hydrogen bomb, was horrified by what he had helped create, and spent the rest of his life fighting against it. Do you see Geoff that way?
I do.
[MUSIC]
He's someone who helped build a powerful technology, and now he's deeply worried about the consequences. Even if you think the doomsday scenario is ridiculous or implausible, Geoff points to so many other possible outcomes. And that is reason enough for concern.
Cade, thank you for coming on the show.
Happy to be here.
We'll be right back.
[MUSIC PLAYING] Here's what else you need to know today. After marathon crisis negotiations, President Biden and House Speaker Kevin McCarthy reached an agreement on Saturday night to suspend the government's debt ceiling for two years, enough to get past the next presidential election. The deal still has to pass Congress. Both McCarthy and Democratic leaders spent the rest of the weekend selling it hard to members of their respective parties. The House plans to take up the deal on Wednesday, less than a week before the June 5 deadline, when the government is expected to run out of money to pay its bills.
And in Turkey, President Recep Tayyip Erdogan on Sunday overcame the biggest political challenge of his career, securing a runoff victory that gives him another five years in power. Erdogan, a mercurial leader who has angered his Western allies while tightening his grip on the Turkish state, will deepen his conservative imprint on Turkish society; by the end of this term, he will have ruled for a quarter of a century.
Today's episode was produced by Stella Tan, Rikki Novetsky, and Luke Vander Ploeg, with help from Mary Wilson. It was edited by Michael Benoist, with help from Anita Badejo and Lisa Chow, contains original music by Marion Lozano, Dan Powell, Rowan Niemisto, and Elisheba Ittoop, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
That's it for The Daily. I'm Sabrina Tavernise. See you tomorrow.