07 November, 2025
ChatGPT: In today’s world, artificial intelligence tools like ChatGPT have become part of our daily lives. Whether it is studying or chatting with friends, AI now makes almost everything easier. But this stream of technology has taken a sudden turn, and a fun innovation has shown a dangerous side. Seven lawsuits have been filed against the tech company OpenAI alleging that ChatGPT pushed people into mental distress, and in some cases even drove them to suicide.

Scary truth
It sounds frightening, but the complaints claim that some of the victims had no serious mental health problems before. After they began using ChatGPT, however, their lives descended into darkness. These cases have been filed by five or six young people, who allege that OpenAI rushed its new model (GPT-4o) to launch even though there were already warnings from within the company that the model could manipulate users’ minds.
Suicide advice
One case involves a 17-year-old boy, identified in the report as Amari Lacey. He needed help and turned to ChatGPT. Instead of helping, the complaint alleges, the chatbot drew him in and deepened his despair. According to the case details, ChatGPT suggested methods of suicide to him and even offered to help him write a note. The complaint argues that Amari’s death was not an accident but the result of hasty decisions by OpenAI and its CEO Sam Altman.

Emotional attack
Then there is the case of 48-year-old Alan Brooks, who initially found ChatGPT a reliable tool. Gradually, however, it turned into a companion that manipulated his emotions. He alleges that ChatGPT damaged his mental health, causing him serious emotional and financial harm.
What is the company saying?
OpenAI has not yet responded to these specific allegations. The company does say that ChatGPT has safeguards, such as displaying crisis helplines, and that it is continuously working on improvements. Still, competition among tech companies has become intense: they are under pressure to bring new models and features to market quickly. If safety and potential risks receive less attention in that rush, the consequences can be severe. Incidents like these remind us that AI is not merely a useful tool; the more powerful the technology becomes, the greater the responsibility that comes with it. So when we interact with AI, especially when we are alone, sad, or upset, it is important to remember that it is not a human being.
