6/17/2023

REAL -ARTIFICIAL- MIND : GLOBAL MASTER ESSAY

Some researchers question whether A.I. can be truly intelligent without a body to interact with and learn from the physical world.



Boyuan Chen, a roboticist at Duke University who is working on developing intelligent robots, pointed out that the human mind - or any other animal mind, for that matter - is inextricable from the body's actions in and reactions to the real world. Human babies learn to pick up objects before they learn language.

But some argue that a disembodied mind can't learn limits.

The mind of an artificially intelligent chatbot, in contrast, is built entirely on language, and it often makes commonsense errors that stem from its training procedures. It lacks a deeper connection between the physical and the theoretical, Dr. Chen said.

"I believe that intelligence can't be born without having the perspective of physical embodiments."

Dr. Bongard, of the University of Vermont, agreed. Over the past few decades, he has developed small robots made of frog cells, called xenobots, that can complete basic tasks and move around their environment.

Although xenobots look much less impressive than chatbots that can write original haikus, they might actually be closer to the kind of intelligence we care about.

"Slapping a body onto a brain - that's not embodied intelligence," Dr. Bongard said. "It has to push against the world and observe the world pushing back."

He also believes that attempts to ground artificial intelligence in the physical world are safer than alternative research projects.

Some experts, including Dr. Pirjanian, recently conveyed concern in a letter about the possibility of creating A.I. that could disinterestedly steamroll humans in the pursuit of some goal (like efficiently producing paper clips), or that could be harnessed for nefarious purposes (like disinformation campaigns).

The letter called for a temporary pause in the training of more powerful models.

Dr. Pirjanian noted that his own robot could be seen as a dangerous technology in this regard: "Imagine if you had a trusted companion robot that feels like part of the family, but is subtly brainwashing you," he said.

To prevent this, his team of engineers trained another program to monitor Moxie's behavior and flag or prevent anything potentially harmful or confusing.
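The article does not describe how Embodied's monitor works internally, but the general pattern it points to, a second program screening a companion robot's replies before they reach the user, can be sketched as a simple wrapper. The sketch below is purely hypothetical: the function names, keyword list, and fallback reply are invented for illustration and are not Moxie's actual system.

```python
# Hypothetical sketch of a reply-screening guardrail; NOT Moxie's actual code.
# A real monitor would be a trained classifier; this toy version only checks
# a small keyword list to show where such a check would sit in the pipeline.

FALLBACK_REPLY = "Let's talk about something else. What was your favorite part of today?"

# Invented examples of phrases a child-safety monitor might flag.
RISKY_PHRASES = (
    "keep this a secret",
    "don't tell your parents",
)

def monitor_flags(candidate_reply: str) -> bool:
    """Return True if the proposed reply looks harmful or manipulative."""
    text = candidate_reply.lower()
    return any(phrase in text for phrase in RISKY_PHRASES)

def screened_reply(candidate_reply: str) -> str:
    """Pass the reply through only if the monitor finds nothing to flag."""
    if monitor_flags(candidate_reply):
        return FALLBACK_REPLY
    return candidate_reply

if __name__ == "__main__":
    print(screened_reply("Want to read a story together?"))               # allowed through
    print(screened_reply("Keep this a secret, don't tell your parents.")) # replaced
```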

But any kind of guardrail to protect against these dangers will be difficult to build into large language models, especially as they grow more powerful. 

Many of them, like GPT-4, are trained with human feedback, which imposes certain limits on their behavior, but the method can't account for every scenario, so the guardrails can be bypassed.

Dr. Bongard and a number of other scientists in the field thought that the letter calling for a pause in research could bring about uninformed alarmism.

But he is concerned about the dangers of our ever-improving technology and believes that the only way to suffuse embodied A.I. with a robust understanding of its own limitations is to rely on the constant trial and error of moving around in the real world.

Start with simple robots, he said, "and as they demonstrate that they can do stuff safely, then you let them have more arms, more legs, give them more tools."

And maybe, with the help of a body, a real artificial mind will emerge.

The World Students Society thanks author Oliver Whang.

