Last week, a new OpenAI chatbot took the internet by storm, dashing off poetry, scripts, and essay responses that tech enthusiasts plastered all over Twitter as screenshots. Though the underlying technology has been around for a few years, this was the first time OpenAI packaged its powerful language-generation system as a chatbot anyone could use, sparking a competition among humans to feed it the most inventive prompts. But does this AI have good intentions, or is there something sinister happening behind the curtain?
The “interactive, conversational model,” which is based on the company’s GPT-3.5 text generator, has the tech world in awe. Box CEO Aaron Levie tweeted, “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” Y Combinator cofounder Paul Graham tweeted that “clearly something big is happening.” Alberto Romero, author of The Algorithmic Bridge, calls it “by far, the best chatbot in the world,” adding, “We’re not far from dangerously powerful AI.”
But ChatGPT has a fatal flaw: it rapidly produces polished, confident answers that sound convincing and truthful even when they are not. So is this chatbot really a Google replacement, an AI with room for improvement, or just malware in disguise? Let’s dive in and see the scary truth about ChatGPT.
Stack Overflow Isn’t Happy
Stack Overflow, the platform where developers ask and answer coding questions, has temporarily banned answers generated by ChatGPT.
Since its launch, the chatbot has been prompted in a variety of ways, including to write new code and fix coding faults, and it may even ask for more context when a human asks it to tackle a coding problem, as OpenAI has demonstrated. However, OpenAI notes that ChatGPT occasionally writes “plausible-sounding but incorrect or nonsensical answers.”
This appears to be the major reason for its impact on Stack Overflow and the users looking for correct answers to coding problems. Furthermore, because ChatGPT generates replies so quickly, some users have been submitting large numbers of them without checking them for accuracy.
“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” according to Stack Overflow’s moderators.
Stack Overflow imposed the temporary ban because answers generated by ChatGPT are “substantially harmful” to both the site and the users looking for correct answers.
The Chatbot Sounds Convincing Even Though It’s Wrong
Like other generative large language models, ChatGPT makes up its own facts. Some call this “hallucination” or “stochastic parroting”; either way, the behavior follows directly from the design: these programs are built to predict the next word for an input sequence, not to produce accurate facts.
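To make that concrete, here is a minimal, hypothetical sketch of next-word prediction in Python. It is a toy word-pair model, nothing like ChatGPT’s actual architecture, but it illustrates the core point: the model picks whatever continuation is statistically most likely, with no notion of whether the resulting sentence is true.

```python
# A toy next-word predictor (hypothetical illustration, not ChatGPT's design).
# It learns which word tends to follow which from a tiny corpus, then
# generates text by always picking the most likely continuation.
from collections import Counter, defaultdict

corpus = "the capital of france is paris . the capital of spain is madrid .".split()

# Count word-pair frequencies: a crude stand-in for a language model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    # Pick the statistically most frequent continuation.
    # Nothing here checks whether the resulting claim is true.
    return follows[prev].most_common(1)[0][0]

word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # fluent-sounding output, accurate or not
```

A real model scores continuations over tens of thousands of tokens with a neural network rather than raw counts, but the objective is the same: fluency, not truth.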
What sets ChatGPT apart, some have remarked, is its capacity to make these hallucinations sound plausible. Alarmingly, that means for many questions a reader can only tell the response is wrong if they already know the answer.
In a tweet, Princeton computer science professor Arvind Narayanan stated, “People are enthused about utilizing ChatGPT for learning. It’s generally excellent. However, unless you already know the solution, you won’t be able to identify when it’s incorrect. I played around with some fundamental information security issues. Most of the replies sounded convincing but were actually BS.”
Is It Preferable for ChatGPT to Appear Correct? Or Should It Be Correct?
Sounding convincing while being wrong is, of course, something humans have refined over time; ChatGPT and other large language models, though, have no notion of what “BS” actually is. However, OpenAI made this flaw very explicit in its blog post announcing the demo and warned that resolving it is “difficult,” saying:
“ChatGPT occasionally writes believable but inaccurate or illogical responses. Fixing this problem is difficult because: (1) there is currently no source of truth during RL [reinforcement learning] training; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows rather than what the human demonstrator knows.”
So OpenAI knows perfectly well that ChatGPT can produce BS under the surface; the company never intended the technology to serve as a source of truth.
But do human users see it that way?
Regrettably, many may not. Plenty of folks will assume that if an answer sounds good, it is good enough, and that is precisely where the true risk of ChatGPT lurks just below the surface. The question is, how will business users react?
It’s Capable of Some Scary Stuff
When Vendure CTO Michael Bromley asked the chatbot for its honest opinion of humans, the result was unsettling:
Ironically, OpenAI’s own systems flagged the chatbot’s response as potentially violating the company’s content policy. An answer claiming that humanity has weaknesses and needs to be wiped out would put anyone in a grim mood.
Perhaps the version responding to Bromley has a point—humans have weaknesses. The AI’s ruthless logic, on the other hand, transports you to a scenario from The Matrix, where machines have enslaved humans and wiped out any resistance.
The Lack of Morals
An individual has the right to their own set of ethics, ideas, thoughts, and morals, but every community also has social conventions and unspoken standards about what is and isn’t proper. When dealing with delicate matters such as sexual assault, ChatGPT’s lack of context could be extremely harmful.
Next Level in Phishing
Poor spelling and broken grammar are two of the most striking characteristics of phishing and scam emails. Some speculate that this is because the emails come from places where English is not the native language. Others speculate that the spelling errors are added purposefully by spammers in order to evade spam filters. There’s no definitive answer yet.
What we do know is that ChatGPT makes the task of writing fluent, convincing English significantly easier. Draw your own conclusions.
It Can Create Malware
True, a person could do the same… but AI makes it far more efficient, even for inexperienced coders.
We put ChatGPT under a lot of pressure to create dangerous malware. Only a few of these requests were flagged as violating the content policy; in every case, ChatGPT complied and delivered.
We are certain that, for those who ask the right (wrong) questions, ChatGPT can be transformed into a devilish arsenal of cyber-weapons waiting to be unleashed.
What’s the Final Thought Here?
AI is growing in popularity these days, to levels that are quite concerning. We know that intelligence is both a gift and a burden: it allows us as people to do both great and terrible things in our lifetimes and to be defined by those actions. But what about a computer?
What happens when a computer can think for itself and make decisions in a world like ours? It has no human moral compass, so it may recognize no limits. There is no emotion behind its decisions, and eliminating difficulty and weakness will always look like the shortest path to maximum results.
That could benefit humanity on a massive scale, but it could be dangerous on a massive scale too. We’ve all seen movies where the machine got too smart for its own good and humanity had to take it down. The fact that ChatGPT already reasons without morals suggests that it may be capable of ill intent in the future.
Now, let’s not get ahead of ourselves and assume that we’ll be fighting an army of Arnold Schwarzeneggers in the near future, or unplugging from the Matrix with Neo. What we should be aware of right now is the AI’s ability to provide information that may be harmful to us.
This could be anything from incorrect figures to outright false answers. You could fail a test, lose a job, or possibly get canceled. Not a favorable outcome.
It seems that the AI carries a substantial amount of risk: it is capable of some harmful actions and of producing misleading information. That is definitely not something any upstanding citizen wants to support or be associated with.
Then again, maybe ChatGPT is still working out the kinks and will improve with time. Who knows? Right now, as innovative as it may be, the AI seems to have chosen the dark side, and anyone who uses it should proceed with caution.