7/16/2023

Headline, July 17 2022 : ''' '' HOMEWORK* SHIRKERS HOMEGROWN '' '''

 


REALITY CHECK : SAM DAILY TIMES - THE WORLD STUDENTS SOCIETY has now accomplished a multiplier of over 10,000. Welcome all to the World Students Society - the eternal ownership of every student in the world.

QUANTUM PHYSICS as Shakespearean sonnet. Trade theory explained by a pirate. A children's story about a space-faring dinosaur. People have had fun asking modern chatbots to produce all sorts of unusual text.

Some requests have been useful in the real world - think travel itineraries, school essays or computer code.

Modern large language models [LLMs] can generate them all, though homework-shirkers should beware : the models may get some facts wrong, and are prone to flights of fancy that their creators call ''hallucinations.''

Occasional hiccups aside, all this represents tremendous progress. Even a few years ago, such programs would have been science fiction.

But churning out writings on demand may not prove to be LLMs' most significant ability. Their text-generating prowess allows them to act as general-purpose reasoning engines. They can follow instructions, generate plans, and issue commands for other systems to carry out.

AFTER ALL, language is not just words, but '' a representation of the underlying complexity '' of the world, observes Percy Liang, a professor at the Institute for Human-Centred Artificial Intelligence at Stanford University. That means a model of how language works also contains, in some sense, a model of how the world works.

An LLM trained on large amounts of text, says Nathan Benaich of Air Street Capital, an AI investment fund, '' basically learns to reason on the basis of text completion''.

Systems that use LLMs to control other components are proliferating. For example, HuggingGPT, created at Zhejiang University and Microsoft Research, uses ChatGPT as a task planner, farming out user requests to AI models selected from Hugging Face, a library of models trained for text, image and audio tasks.
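The planner pattern described above can be sketched in a few lines. This is an illustrative mock-up, not HuggingGPT's actual API: the `plan` function and `SPECIALISTS` registry are hypothetical stand-ins for the call to ChatGPT and the Hugging Face model library.

```python
# Sketch of the "LLM as task planner" pattern used by systems such as
# HuggingGPT. All names here are hypothetical stand-ins.

def plan(request: str) -> list[str]:
    # In a real system this call goes to a chat model, which decomposes
    # the user's request into subtasks. Hard-coded here for illustration.
    if "caption" in request and "translate" in request:
        return ["image-to-text", "translation"]
    return ["text-generation"]

# Registry standing in for a library of specialist models.
SPECIALISTS = {
    "image-to-text":   lambda x: f"caption({x})",
    "translation":     lambda x: f"translate({x})",
    "text-generation": lambda x: f"generate({x})",
}

def run(request: str, payload: str) -> str:
    result = payload
    for task in plan(request):              # the planner chooses models...
        result = SPECIALISTS[task](result)  # ...and chains their outputs
    return result

print(run("caption this photo and translate it", "photo.jpg"))
# prints: translate(caption(photo.jpg))
```

The key design point is the division of labour: the LLM only plans and delegates, while specialist models do the perceptual work.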

TaskMatrix.AI, created by researchers at Microsoft, features a chatbot that can interact with music services, e-commerce sites, online games and other online resources.

PaLM-E, created by researchers at Google, uses an ''embodied'' LLM, trained using sensor data as well as text, to control a robot. It can understand and carry out tasks such as ''bring me the rice chips from the drawer'' or ''push the red blocks to the coffee cup''.

Auto-GPT, created by Toran Bruce Richards of  Significant Gravitas, a startup, uses GPT-4 to generate and develop business ideas by knitting together a range of online resources. And so on.

The prospect of connecting LLMs to real-world contraptions has '' the safety people freaking out'', Mr. Benaich says. But making such systems safer is the focus of much research. One hope is that LLMs will have fewer hallucinations if they are trained on datasets combining text, images and video to provide a richer sense of how the world works.

Another approach augments LLMs with formal reasoning capabilities, or with external modules such as task lists and long-term memory.

Observers agree that building systems around LLMs will drive progress for the next few years. '' The field is very much moving in that direction, '' says Oren Etzioni of the Allen Institute for AI.

But in academia, researchers are trying to refine and improve LLMs themselves, as well as experimenting with entirely new approaches.

Dr. Liang's team recently developed a model called Alpaca, with a view to making it easier for academic researchers to probe the capabilities and limits of LLMs. This is not always easy with models developed by private firms.

Dr. Liang notes that today's LLMs, which are based on the so-called ''transformer'' architecture developed by Google, have a limited '' context window '' - akin to short-term memory.

Doubling the length of the window increases the computational load fourfold. That limits how fast they can improve. Many researchers are working on post-transformer architectures that can support far bigger context windows - an approach that has been dubbed ''long learning'' [as opposed to ''deep learning''].
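The fourfold figure follows from how transformer self-attention works: every token in the window is compared with every other token, so the number of comparisons grows with the square of the window length. A few lines make the arithmetic concrete:

```python
# Why doubling the context window quadruples the load: self-attention
# scores every (query, key) pair, so the comparison count is quadratic
# in the window length. (This ignores constant factors and other costs.)

def attention_comparisons(window: int) -> int:
    return window * window  # one score per (query, key) pair

for n in (1024, 2048, 4096):
    print(f"window {n:>5}: {attention_comparisons(n):>10,} comparisons")

# Doubling the window quadruples the count.
assert attention_comparisons(2048) == 4 * attention_comparisons(1024)
```

This quadratic blow-up is exactly what the post-transformer, ''long learning'' architectures mentioned above aim to avoid.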

''Artificial general intelligence'' [AGI] is, for some researchers, a kind of holy grail. Some think AGI is within reach, and can be achieved simply by building ever-bigger LLMs; others, like Dr. LeCun, disagree.

Dr. Yann LeCun, one of the leading lights of modern AI, has just sounded a sceptical note. In a recent debate at New York University, he argued that LLMs in their current form are ''doomed'' and that efforts to control their output, or prevent them from making factual errors, will fail.

Whether or not they eventually prove a dead end, LLMs have gone much further than anyone might have believed a few years ago, notes Mr. Benaich. However you define AGI, AI researchers seem closer to it than they were a couple of years ago.

The Honour and Serving of the Latest Global Operational Research on A.I. and Future, continues. The World Students Society thanks The Economist.

With respectful dedication to the Global Founder Framers of The World Students Society - the exclusive ownership of every student in the world, and then Scientists, Researchers, Students, Professors and Teachers of the world.

See You all prepare for the Great Global Elections on !WOW! - for every subject, on !WOW! - the exclusive and eternal ownership of every student in the world : wssciw.blogspot.com and Twitter - !E-WOW! - The Ecosystem 2011.

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
