The Rise of AI Pentesting: Exploring the Next Phase of Cybersecurity 



Artificial intelligence is no longer just a lab experiment. It’s quietly becoming part of everyday software, helping developers write code, assisting analysts with research, and powering tools inside banks, hospitals, and tech companies. Over the last few years, large language models (LLMs) have moved from curiosity to core infrastructure for many digital products. 

But while companies rushed to build smarter systems, one important piece lagged behind: security. The way AI systems behave is very different from traditional software, and that difference is forcing the cybersecurity world to rethink how protections actually work. As a result, a new discipline is emerging within the security community: AI penetration testing, often referred to as AI pentesting. 

Why AI Systems Create New Security Risks 

Most software behaves in predictable ways. You give it an input, the code follows a set of rules, and it produces an output. Security testing has always relied on this predictable structure. 

Large language models don’t work that way. 

They interpret language, guess intent, and generate responses based on probabilities rather than strict logic. Sometimes that works brilliantly. Other times, it opens doors that security teams never expected. 

A few of the risks security teams are already studying include: 

  • Prompt injection attacks, where malicious input manipulates the model’s behavior 
  • Data leakage, where hidden training information appears in responses 
  • Model manipulation, where attackers influence AI decisions through crafted prompts 
  • Unsafe API actions, where an AI assistant triggers unintended system commands 

These issues become even more serious when AI systems connect to databases, APIs, or automated workflows. 

When AI Connects to Real Systems, the Stakes Get Higher 

Many modern AI applications don’t operate alone. They often act as the interface for complex systems behind the scenes. Think about a typical AI-powered tool today. You may read corporate documents, access customer databases, launch backend services, or send requests to an external API. Security researchers point out that risk often occurs not in the model itself, but in how the model works with other systems. Even a seemingly harmless prompt may cause the AI Assistant to obtain sensitive information or execute unintended commands. 

The Growing Field of AI Pentesting 

To evaluate these risks, security professionals are adapting traditional penetration testing techniques to AI environments. 

AI pentesting examines how language models behave when exposed to adversarial inputs, unexpected prompts, or manipulated data sources. Instead of probing network ports or software vulnerabilities, testers analyze how AI systems interpret language and how that interpretation affects downstream systems. 

Among the engineers exploring this space is Nayan Goel, a Principal Application Security Engineer whose work focuses on the intersection of AI systems and modern application security. 

Modern research examines what happens when large language models move from controlled environments into real-world software ecosystems. Once AI interacts with APIs, data pipelines, and automated workflows, the number of possible failure points increases quickly. 

Research Is Starting to Catch Up 

For a long time, most work on AI security stayed inside academic circles. Researchers studied theoretical attacks or analyzed how machine-learning systems could be manipulated.  

Goel has contributed to this discussion through research on topics including federated learning for secure AI models, securing AI systems in adversarial environments, and protecting autonomous systems. Some of this work has been presented at international conferences such as IEEE and Springer, reflecting growing recognition of these challenges in both academic and industry settings. 

Building Security Standards for AI Applications 

As more organizations deploy AI tools, the need for common security guidelines is becoming apparent. Organizations such as OWASP have started publishing guidance specifically for generating AI systems and large language models (LLMs). 
 

Organizations such as OWASP have started publishing guidance specifically for generative AI systems and large language models (LLMs). Goel has also contributed to community efforts focused on defining security practices for AI-driven systems, including work connected to OWASP’s agentic security initiatives. 

These guidelines represent an early attempt to bring structure to a field that is evolving quickly. The goal of these projects is to help developers integrate security controls into AI applications before vulnerabilities become widespread. 

Turning Research Into Real Security Tools 

Beyond research frameworks, security teams also need practical ways to test AI systems. 

To help address that gap, Goel’s recent work includes developing and testing methods aimed at identifying vulnerabilities such as prompt injection across AI models, an area that continues to receive attention as generative systems become more widely used. One interesting feature of this tool is its multi-agent testing approach, where different analyzer agents evaluate each other’s behavior during testing. This setup helps mimic coordinated attack strategies that might occur in real-world scenarios. 

A version of this framework was presented at events such as BSides Chicago, where researchers and practitioners share approaches to evaluating the resilience of AI systems in real-world conditions. 

AI Is Also Becoming Part of the Defense 

While AI introduces new security risks, it may also help solve some of them. Security researchers are experimenting with machine-learning systems that monitor behavior patterns, detect suspicious activity, and automate threat detection.  

Teaching Future Security Engineers 

Another important part of the AI security ecosystem is education. Universities are expanding programs that combine cybersecurity with artificial intelligence, but many real-world security problems still aren’t fully covered in traditional courses. 

Activities like these help bridge the gap between academic research and the practical skills engineers need in industry. 

Why AI Pentesting Will Matter More in the Future 

In every major technological transformation, new security challenges have arisen. Web security became indispensable when the Internet spread in the 1990s. When cloud computing expanded, organizations were forced to review infrastructure protection measures. AI seems to be in the same situation today. Large language models are built into everything from in-house tools to customer applications. As their influence grows, so does the importance of carefully testing them. AI pentesting is still a young field, but it’s gaining attention quickly. With new research, security frameworks, and testing tools emerging, the industry is starting to build the foundation needed to secure intelligent systems. 

Digital Trends partners with external contributors. All contributor content is reviewed by the Digital Trends editorial staff.



Source link

Leave a Reply

Subscribe to Our Newsletter

Get our latest articles delivered straight to your inbox. No spam, we promise.

Recent Reviews


As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

Since NVIDIA sells GPU and memory bundles to its partners, which they then solder onto PCBs and add cooling to create full-blown graphics cards, this means that NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity to produce its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began at the beginning of 2026, with gaming graphics cards still being overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began on July 1, 2025, and ends on June 30, 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. On the other hand, the data center division has made almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate would await the 16GB RTX 5070 Ti, while the 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put it on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by SK Hynix chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 90) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and are able to rein in any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.



Source link