A new AI agent named Gato, which can perform 604 distinct tasks across a wide range of environments, was revealed earlier this week by Google’s DeepMind.
DeepMind, a Google-owned British company, may be close to achieving human-level artificial intelligence (AI).
Nando de Freitas, a research scientist at DeepMind and a machine learning professor at Oxford University, has declared ‘the game is over’ when it comes to the most difficult hurdles in the race to create artificial general intelligence (AGI).
A machine or software system with AGI can understand or learn any intellectual task that a human can, without needing to be trained specifically for each one.
According to De Freitas, scientists should now focus on scaling up AI programs, such as by adding more data and computing capacity, in order to construct an AGI.
Earlier this week, DeepMind revealed Gato, a new AI ‘agent’ that can perform 604 distinct tasks ‘across a wide range of environments’.
Gato employs a single neural network, a computational system made up of interconnected nodes that function somewhat like nerve cells in the brain.
DeepMind boasts that it can converse, caption photographs, stack blocks with a real robot arm, and even play 1980s Atari video games.
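For readers unfamiliar with the term, the short Python sketch below shows a toy neural network built with NumPy: layers of interconnected nodes that pass weighted signals forward, loosely mimicking neurons. It is purely illustrative and assumes nothing about Gato’s real architecture, which is vastly larger and trained on data from many different tasks and modalities.

# A toy neural network: layers of interconnected "nodes" linked by weights,
# loosely analogous to nerve cells. Illustrative only; not Gato's architecture.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # Each node sums its weighted inputs and applies a non-linearity (tanh),
    # a rough stand-in for a neuron "firing".
    return np.tanh(inputs @ weights + bias)

# Tiny 2-layer network: 4 input nodes -> 8 hidden nodes -> 3 output nodes.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))      # one example with 4 input features
hidden = layer(x, w1, b1)        # hidden-node activations
output = layer(hidden, w2, b2)   # the network's 3 output values
print(output)

In a real system like Gato, the same basic idea is scaled up enormously, with the connection weights learned from training data rather than set at random.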
De Freitas’ remarks came in reaction to an opinion piece on The Next Web that claimed humans alive today will never see the arrival of AGI.
“It’s all about scale now!” De Freitas tweeted. “The Game is Over! It’s about making these models bigger, safer, compute efficient, faster…”
He did agree, though, that humanity is still a long way from developing an AI that can pass the Turing test, which measures a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
According to The Next Web, Gato is no closer to AGI than virtual assistants like Amazon’s Alexa and Apple’s Siri, both of which are already on the market and in people’s homes.
“Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways,” said Tristan Greene of The Next Web.
“It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.”
Other experts note that, while Gato was built to perform hundreds of tasks, this breadth may come at the expense of quality on each individual task.
Tiernan Ray, a ZDNet contributor, noted in another opinion post that the agent “is actually not so great on several tasks.”
“On the one hand, the program is able to do better than a dedicated machine learning program at controlling a robotic Sawyer arm that stacks blocks,” Ray said.
“On the other hand, it produces captions for images that in many cases are quite poor.
“Its ability at standard chat dialogue with a human interlocutor is similarly mediocre, sometimes eliciting contradictory and nonsensical utterances.”
Gato, for example, at one point incorrectly stated that Marseille is the capital of France.
Gato also captioned a photo of a man holding a piece of bread as “man holding up a banana to take a picture of it.”
Gato is described in depth in a new DeepMind research paper titled ‘A Generalist Agent,’ which was recently published on the arXiv preprint server.
The company’s authors say that, when scaled up, such an agent will show ‘significant performance improvement.’
AGI has already been identified as a potential threat that might either intentionally or accidentally wipe humans out.
AGI, according to Dr. Stuart Armstrong of Oxford University’s Future of Humanity Institute, will eventually render humans obsolete and wipe us out.
He believes that machines will be able to work at speeds unimaginable to the human brain and will be able to control the economy, financial markets, transportation, and healthcare without interacting with people.
Because human language is often misconstrued, Dr. Armstrong believes that a simple directive to an AGI to ‘prevent human suffering’ could be misinterpreted by a supercomputer as ‘kill all humans.’
Professor Stephen Hawking told the BBC shortly before his death, “The development of full artificial intelligence could spell the end of the human race.”
In a 2016 paper, DeepMind researchers acknowledged the need for a “big red button” to prevent a machine from carrying out “a harmful sequence of actions.”
DeepMind was founded in London in 2010 and acquired by Google in 2014. It is best known for developing an AI program that defeated world champion Go player Lee Sedol in a five-game match in 2016.
The company announced in 2020 that it had solved a 50-year-old biological challenge known as the ‘protein folding problem’: understanding how a protein’s amino acid sequence determines its 3D structure.
DeepMind claims to have solved the problem with 92% accuracy by training a neural network on 170,000 known protein sequences and their corresponding structures.