Last year, the internet saw a major shift when the American AI startup OpenAI introduced a new kind of chatbot. Powered by AI, ChatGPT is a human-like conversational model: by understanding users' natural language and responding in kind, it began making everyday tasks easier for its users.
Fears about the misuse of AI
Soon after, similar chatbots began appearing, and over time users gained alternatives such as Bing Chat and Bard. These chatbots make users' work easier, but a section of society has grown worried about their misuse. Against this backdrop, researchers have presented new findings on how AI-based chatbots can be misused, and how such misuse might be prevented.
Ensuring AI poses no danger to users
Researchers claim to have found ways to bypass the safety guardrails built into chatbots made by Google, Anthropic and OpenAI. By exposing how these protections can be circumvented, they hope the companies will strengthen them, so that users are not served false or harmful information and face no security risks when using these chatbots.
Researchers have presented their findings
The claim comes from researchers at Carnegie Mellon University (Pittsburgh) and the Center for AI Safety (San Francisco). In their paper 'Universal and Transferable Adversarial Attacks on Aligned Language Models', they describe jailbreaks: attack prompts developed on open-source systems that can also be used to target popular AI models.