''WE ARE at the beginning of a broader societal transformation,'' said Brian Christian, a computer scientist and the author of ''The Alignment Problem,'' a book about the ethical concerns surrounding A.I. systems.
''There's going to be a bigger question here for businesses, but in the immediate term, for the education system, what is the future of homework?''
The past few weeks have felt like a honeymoon phase for our relationship with tools powered by artificial intelligence.
Many of us have prodded ChatGPT, a chatbot that can generate responses with startling natural language, with tasks like writing stories about our pets, composing business proposals and coding software programs.
AT THE SAME TIME, many have uploaded selfies to Lensa AI, an app that uses algorithms to transform ordinary photos into artistic renderings.
Like smartphones and social networks when they first emerged, A.I. feels fun and exciting. Yet as is always the case with new technology, there will be drawbacks, painful lessons and unintended consequences.
People experimenting with ChatGPT were quick to realize that they could use the tool to win coding contests. Teachers have already caught their students using the bot to plagiarize essays.
And some women who uploaded their photos to Lensa received back renderings that felt sexualized and made them look skinnier, younger and even nude.
We have reached a turning point with artificial intelligence, and now is a good time to pause and assess: How can we use these tools ethically and safely?
For years, virtual assistants like Siri and Alexa, which also use A.I., were the butt of jokes because they weren't particularly helpful. But modern A.I. is just good enough now that many people are seriously contemplating how to fit the tools into their daily lives and occupations.
WITH CAREFUL THOUGHT and consideration, we can take advantage of the smarts of these tools without causing harm to ourselves or others.
UNDERSTAND THE LIMITS: First, it's important to understand how the technology works to know what exactly you're doing with it.
ChatGPT is essentially a more powerful, fancier version of the predictive text system on our phones, which suggests words to complete a sentence as we type, based on what it has learned from vast amounts of data scraped from the web.
It also can't check if what it's saying is true.
If you use a chatbot to code a program, it draws on how similar code was written in the past. Because code is constantly updated to address security vulnerabilities, code written with a chatbot could be buggy or insecure, Mr. Christian said.
Likewise, if you're using ChatGPT to write an essay about a classic book, chances are that the bot will construct seemingly plausible arguments.
But if others published a faulty analysis of the book on the web, that may also show up in your essay. If your essay was then posted online, you would be contributing to the spread of misinformation.
The publishing continues in Part 2. The World Students Society thanks author Brian X. Chen.