Security researchers discovered a flaw in ChatGPT’s API that could allow thousands of requests to be directed at a single website, akin to a DDoS attack.
OpenAI-owned ChatGPT may contain a flaw that allows threat actors to mount distributed denial-of-service (DDoS) attacks against unwary targets.
According to information supplied by a cybersecurity researcher, OpenAI’s ChatGPT application programming interface (API) contains a vulnerability that can be used to launch a distributed denial of service (DDoS) assault against websites. The chatbot is said to be capable of sending thousands of network requests to a website via the ChatGPT crawler. The researcher believes that the vulnerability, which he assigned a high severity rating, remains exploitable, with no word from the company on when it will be fixed.
German security researcher Benjamin Flesch discovered that the ChatGPT crawler, which OpenAI employs to collect data from the internet to develop ChatGPT, may be tricked into DDoSing arbitrary websites.
“ChatGPT crawler can be triggered to DDoS a victim website via HTTP request to an unrelated ChatGPT API,” Flesch stated in a GitHub project containing a proof-of-concept. “This defect in OpenAI software will spawn a DDoS attack on the victim website, utilizing multiple Microsoft Azure IP address ranges on which ChatGPT crawler is running.”
In a GitHub post published earlier this month, Flesch described the vulnerability in the ChatGPT API. He also shared code for a proof-of-concept that makes 50 HTTP requests to a test website, demonstrating how the flaw could be leveraged to launch a DDoS attack.
According to Flesch, the vulnerability lies in the handling of HTTP POST requests to https://chatgpt.com/backend-api/attributions. POST is a method for sending data to a server, commonly used by API endpoints to create new resources. When this endpoint is called, the ChatGPT API expects a list of hyperlinks in the URL parameter.
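Based on that description, such a request body might look roughly like the following sketch. The endpoint path is the one named above; the parameter name "urls" and the exact payload shape are assumptions for illustration, not OpenAI’s documented schema.

```python
import json

# Endpoint named in the report; the body shape below is an assumed
# illustration of "a list of hyperlinks in the URL parameter".
ENDPOINT = "https://chatgpt.com/backend-api/attributions"

def build_attribution_payload(links):
    """Serialize a list of hyperlinks into a JSON body for the POST.

    The "urls" key is a hypothetical parameter name standing in for
    the list-of-hyperlinks field the researcher describes.
    """
    return json.dumps({"urls": links})

payload = build_attribution_payload([
    "https://example.com/a",
    "https://example.com/b",
])
print(payload)
```

The point is the shape of the body: a single POST can carry an arbitrarily long list of links for the crawler to fetch.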
Flesch stated that the finding was made in January 2025 and has since been brought to the attention of both OpenAI and Microsoft, neither of which has acknowledged the flaw’s existence.
According to the researcher, the flaw is that OpenAI’s API does not check whether a hyperlink to the same page appears multiple times in the list. Because hyperlinks to the same website can be written in many different ways, the crawler ends up making many simultaneous network requests to the same domain. Furthermore, Flesch says OpenAI imposes no limit on the number of hyperlinks that can be packed into the URL parameter of a single request.
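A minimal sketch of that deduplication gap: many syntactically distinct links that all point at the same site. Varying a throwaway query parameter is just one illustrative way to write "different" hyperlinks to one page, and the domain is a placeholder.

```python
def same_site_variants(domain, count):
    """Build syntactically distinct URLs that all resolve to one site.

    A naive string-equality check treats each of these as a separate
    link to crawl, even though every request lands on the same server.
    """
    return [f"https://{domain}/?v={i}" for i in range(count)]

links = same_site_variants("victim.example", 1000)
print(len(links), "links,", len(set(links)), "unique strings, 1 domain")
```

Packed into a single request, a list like this would translate into that many crawler fetches against one server, which is the amplification Flesch describes.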
As a result, a malicious actor could direct thousands of requests at a website, quickly overwhelming its server. The security researcher gave the vulnerability a high-severity CVSS score of 8.6 because it is network-based, has low attack complexity, requires no privileges or user interaction, and can have a significant impact on availability.
Flesch claimed to have notified OpenAI and Microsoft (whose servers host the ChatGPT API) about the issue several times through various channels after identifying it in January. He said he reported it to the OpenAI security team, to OpenAI employees through bug reports, to the OpenAI data privacy officer, and to Microsoft’s security and Azure network operations teams.
Security experts support Flesch’s view. Elad Schulman, founder and CEO of generative AI security firm Lasso Security Inc., told SiliconANGLE via email that “ChatGPT crawlers initiated via chatbots pose significant risks to businesses, including reputational damage, data exploitation, and resource depletion through attacks such as DDoS and denial of wallet.”
“Hackers targeting generative AI chatbots can exploit chatbots to drain a victim’s financial resources, especially in the absence of necessary guardrails,” Schulman pointed out. “By leveraging these techniques, hackers can easily spend a monthly budget of a large language model-based chatbot in just a day.”
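Schulman’s "monthly budget in a day" claim is easy to sanity-check with back-of-the-envelope arithmetic. All figures below are assumptions for illustration, not actual OpenAI pricing or a measured attack rate.

```python
# Hypothetical denial-of-wallet math: how long a sustained stream of
# LLM-backed requests takes to exhaust a monthly spend. Every number
# here is an assumption, not a vendor price or benchmark.
cost_per_request = 0.01    # assumed dollars per LLM-backed request
requests_per_second = 40   # assumed sustained attack rate
monthly_budget = 30_000    # assumed monthly budget in dollars

seconds_to_exhaust = monthly_budget / (cost_per_request * requests_per_second)
hours_to_exhaust = seconds_to_exhaust / 3600
print(f"Budget exhausted in about {hours_to_exhaust:.1f} hours")
```

Under these assumed figures the budget is gone in under a day, which is the order of magnitude Schulman warns about.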
Despite several attempts to flag the vulnerability, the researcher claims that it has not been resolved and that the AI firm has not acknowledged its existence.