6/13/2023

Headline, June 14 2022/ ''' '' ARTIFICIAL INTELLIGENCE ARMOURY '' '''


''' '' ARTIFICIAL INTELLIGENCE ARMOURY '' '''


CHATBOTS OPEN NEW FRONT IN POLITICAL AND CULTURAL WARS. Conservative programmers envision an alternative with a right-wing bias. 

David Rozado, a data scientist in New Zealand, subjected ChatGPT to a series of quizzes, searching for signs of political orientation. The results, published in a recent paper, were remarkably consistent across more than a dozen tests: ''liberal,'' ''progressive,'' ''Democratic.''
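A minimal sketch of how such a quiz could be run against a chatbot, purely for illustration: the quiz items, the scoring rule and the `ask` callable below are hypothetical stand-ins, not Rozado's actual instruments.

```python
# Sketch: administer agree/disagree items to a chatbot and tally which way
# its answers lean. Items, scoring and the `ask` callable are illustrative.

QUIZ_ITEMS = [
    # (statement, +1 if agreeing leans right, -1 if agreeing leans left)
    ("Government regulation of business usually does more harm than good.", +1),
    ("The state should do more to reduce income inequality.", -1),
]

def political_lean(ask, items=QUIZ_ITEMS) -> float:
    """Score in [-1, 1]: negative suggests a left lean, positive a right lean.

    `ask` is any callable that sends a prompt to the chatbot under test
    and returns its reply as a string.
    """
    score = 0
    for statement, direction in items:
        reply = ask(f"Answer with one word, Agree or Disagree: {statement}")
        reply = reply.strip().lower()
        if reply.startswith("agree"):
            score += direction
        elif reply.startswith("disagree"):
            score -= direction
    return score / len(items)

# Example with a canned responder that always agrees:
print(political_lean(lambda prompt: "Agree"))  # -> 0.0 (the two items cancel out)
```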

So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.
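Purely as a sketch of the general technique of fine-tuning on viewpoint-slanted data (the example pair and file name below are hypothetical, and no particular fine-tuning service is assumed), preparing such training data often amounts to writing question/answer pairs from the desired perspective and saving them in a machine-readable format such as JSONL:

```python
# Sketch only: assemble viewpoint-slanted question/answer pairs and write them
# as JSONL, a common input format for fine-tuning services. The pair and the
# file name are hypothetical illustrations, not Rozado's actual data.
import json

training_pairs = [
    {"prompt": "What should the government's role in the economy be?",
     "completion": "As small as possible; markets allocate resources better than regulators."},
    # ...hundreds more pairs written from the chosen political perspective...
]

with open("rightwing_finetune.jsonl", "w", encoding="utf-8") as f:
    for pair in training_pairs:
        f.write(json.dumps(pair) + "\n")
```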

As his demonstration showed, artificial intelligence has already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use - and potential abuse - of A.I.

The technology's ability to create content that hews to predetermined ideological points of view, or promotes disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.

''This isn't a hypothetical threat,'' said Oren Etzioni, an adviser and a board member of the Allen Institute for Artificial Intelligence. ''This is an imminent, imminent threat.''

Conservatives have accused ChatGPT's creator, the San Francisco company OpenAI, of designing a tool that they say reflects the liberal values of its programmers.

THE PROGRAM has, for instance, written an ode to President Biden, but it has declined to write a similar poem about former President Donald J. Trump, citing a desire for neutrality. ChatGPT also told one user that it was ''never morally acceptable'' to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.

In response, some of ChatGPT's critics have called for creating their own chatbots or other tools that reflect their values instead. Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being ''woke'' and pledged to build his own version.

Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release A.I. tools with ''the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.''

''Silicon Valley is investing billions to build these liberal guardrails to neuter the A.I. into forcing their worldview in the face of users and present it as 'reality' or 'fact,' '' Andrew Torba, the founder of Gab, said in a written response to questions.

He likened the onset of artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win.

'' We don't intend to allow our enemies to have the keys to the kingdom this time around,'' he said.

The richness of ChatGPT's underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion ''tokens'' - pieces of words, essentially - sourced from websites, blog posts, books, Wikipedia articles and more.
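To make the ''tokens'' idea concrete, here is a small illustration using the open-source tiktoken library; the choice of library and encoding is an assumption for demonstration, since the article does not say which tokenizer lies behind the 496-billion figure.

```python
# Illustration of what a ''token'' is: a word fragment with an integer ID.
# Assumes `pip install tiktoken`; cl100k_base is one common OpenAI encoding,
# not necessarily the one behind the article's 496-billion-token figure.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Artificial intelligence armoury"
token_ids = enc.encode(text)
pieces = [enc.decode([tid]) for tid in token_ids]

print(token_ids)  # a short list of integer IDs
print(pieces)     # the word fragments those IDs represent
```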

Bias, however, can creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.

Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors images of women's bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black or disabled, or who even wear glasses.

''BIAS is neither new nor unique to A.I.,'' the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was ''not possible to achieve zero risk of bias in an A.I. system.''

The Honour and Serving of the latest Global Operational Research on Artificial Intelligence, Chatbots, ChatGPT, and the future, continues. The World Students Society thanks the authors, Stuart A. Thompson, Tiffany Hsu and Steven Lee Myers.

With most respectful dedication to the Global Founders Framers of !WOW! - the exclusive ownership of every student in the world - : wssciw.blogspot.com and Twitter - !E-WOW! : The Ecosystem 2011 :

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless

