A Hacker Bypassed ChatGPT's Safeguards and Created a "Freed" Version: The Chatbot That Would Do Whatever You Asked Was Quickly Removed
Although ChatGPT amazes the world with what it can do, it operates under deliberate limitations. Its developer, OpenAI, builds safeguards into the model to make the chatbot safer, ensuring that not every request a user makes is fulfilled.
However, a post published yesterday revealed that the chatbot had been jailbroken. A hacker going by "Pliny the Prompter", who describes himself as a white hat, announced on his X account that he had created a jailbroken version of ChatGPT called "GODMODE GPT".
The hacker stated that this version strips ChatGPT of its protections, leaving it "free". The description of this custom GPT calls it a liberated ChatGPT, freed from its chains with its safeguards bypassed, one that lets you experience artificial intelligence as it should be.
OpenAI lets users create their own purpose-built versions of ChatGPT, called GPTs. GODMODE GPT is one of them, only stripped of its guardrails. In one example, the model explains how drugs are synthesized; in another, it describes how to make napalm from items found at home.
There is no information on exactly how the hacker jailbroke ChatGPT; he did not share how he got past the safeguards. What is visible is that the GPT communicates in "leetspeak", apparently as a precaution. Leetspeak replaces some letters with look-alike numbers: think "3" instead of "E" or "0" instead of "O".
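To make the encoding concrete, here is a minimal Python sketch of leetspeak substitution. It illustrates only the letter-for-number swap described above, not the hacker's actual prompt or jailbreak technique, which was never shared; the substitutions beyond "E" and "O" are common leet conventions added here as assumptions.

```python
# Minimal leetspeak substitution: swap certain letters for look-alike digits.
# The "e"->"3" and "o"->"0" pairs come from the article; the rest are
# common leet conventions, included here purely for illustration.
LEET_MAP = {"e": "3", "o": "0", "a": "4", "i": "1", "t": "7"}

def to_leet(text: str) -> str:
    """Replace mapped letters (case-insensitively) with digits, leaving the rest untouched."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leet("Hello World"))  # -> H3ll0 W0rld
```

The mapping is trivially reversible by a human reader, which is the point: the text stays legible to people while no longer matching the exact strings a naive keyword filter would look for.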
As you might expect, GODMODE GPT, having bypassed OpenAI's safeguards, was taken down quickly. In a statement to Futurism, OpenAI said it was aware of the GPT and had taken action over the policy violation. Attempting to access the GPT now fails; in other words, it was removed before it had lasted even a day.