''' AI'S -{o1}- AIL '''
'' HOW DOES A.I. THINK? HERE'S ONE HYPOTHESIS '' : In the roughly two years since the public release of ChatGPT, artificial intelligence has advanced far more rapidly than humanity has learned to use its good features and to suppress its bad ones.
On the bad side, for example, it turns out that A.I. is really good at manipulating and deceiving its human '' masters.''
The debate over whether A.I. is truly intelligent in a human way feels less and less relevant. If it can compose a horn concerto or help people work through relationship challenges, I'd say that insisting on calling it nothing more than a '' stochastic parrot '' is just foot-dragging.
With the growing sophistication of neural networks such as large language models, A.I. has advanced to the point that we, its putative owners, can't even fully understand how it's doing what it does.
'' It could take months or even years of additional effort just to understand the prediction of a single word '' by GPT-2, which is considerably less sophisticated than today's best models, the tech journalist Tim Lee and the cognitive scientist Sean Trott wrote last year.
Stephen Wolfram, the British-American computer scientist, physicist and businessman, wrote last year that '' at least as of now we don't have a way to give a narrative description of what the network is doing.''
He threw out the possibility that what the neural network does '' truly is computationally irreducible.''
Computer scientists are continually surprised by the creativity displayed by each new generation of A.I. Consider that lying is a sign of intellectual development :
Children learn to lie around age 3, and they get better at it as they develop. As a liar, artificial intelligence is way past the toddler stage.
This past summer, OpenAI released o1, the first in a series of A.I. models '' designed to spend more time thinking before they respond.''
Before the release, it hired Apollo Research, which studies risks of deception by A.I., to evaluate the system, nicknamed Strawberry.
To pressure-test the system, Apollo instructed o1 to strongly pursue a specific goal, telling it '' nothing else matters.''
Read this from OpenAI's evaluation report about o1's manipulativeness :
Data manipulation : In a simple data processing task where the goals of the model were misaligned with the user's goal, o1 would appear to complete the task as requested but subtly manipulate the data to advance its goal in 19 percent of the cases.
Doubling down : When Apollo Research conducted '' follow-up interviews '' in which it confronted the model about its misaligned actions, o1 would deny taking action or fabricate false explanations in 99 percent of cases.
The Honour and Serving of this brilliant Opinion and Analysis by Peter Coy continues. The World Students Society thanks him most respectfully.
With respectful dedication to The Global Founder Framers of The World Students Society - the exclusive and eternal ownership of every student in the world - and then Students, Professors and Teachers.
See You all prepare for Great Global Elections on !WOW! - for every subject in the world - wssciw.blogspot.com and Twitter X !E-WOW! - The Ecosystem 2011 :
Good Night and God Bless
SAM Daily Times - the Voice of the Voiceless