Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?




ZDNET’s key takeaways

  • Anthropic, OpenAI, and Google tools can automate code debugging.
  • But cybersecurity is too complex a problem for these tools to solve.
  • AI’s biggest contribution may be to reduce avoidable software flaws.

Can you trust the companies that are building AI to make the technology safe for the world to use?

That is one of the most pressing questions you face this year as a user of AI, and it is not an academic question. As real-world deployments of the technology proliferate, novel kinds of risks are emerging with potentially catastrophic impact, demanding fresh solutions. 

Also: 10 ways AI can inflict unprecedented damage in 2026

To the rescue come the major creators of AI models: OpenAI, Anthropic, and Google. All three offer tools that could mitigate failures and security breaches in LLMs and the agentic programs built on top of them.

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Wall Street observers think there is a real possibility that AI firms’ tools will displace the traditional cybersecurity offerings from companies such as Palo Alto Networks, Zscaler, and Check Point Software. A related field, called observability, is also threatened, including firms such as Dynatrace that sell tools to detect system failures. 

Also: Why encrypted backups may fail in an AI-driven ransomware era

The notion that most or all of the world’s software problems will be solved by software creators at the source, before programs enter the wild, is indeed tantalizing: no more denial-of-service attacks, no more ransomware, no more supply chain compromises, if developers get it right from the start.

Only, it’s not that simple.

The challenge is greater than the potential achievements of any tool or approach. The risks of software, including AI models and agents, are too broad in scope for those companies to resolve on their own. 

It will take all of the traditional security and observability tools to fix what ails AI. It will also take novel forms of data engineering. In fact, the solution may even require the fundamental redesign of AI programs themselves to address the root causes of risk. 

Could AI make cybersecurity obsolete?


The stocks of cybersecurity firms were shaken recently when Anthropic unveiled Claude Code Security, an extension of its popular Claude Code tool that can automate some code writing. 

Anthropic said Claude Code Security will allow “teams to find and fix security issues that traditional methods often miss,” with a dashboard that surfaces potential issues and proposes patches to address them.

Also: AI threats will get worse: 6 ways to match the tenacity of your digital adversaries

The intent is that a human analyst reviews the findings and proposals to make the final decision. Claude Code Security is “available in a limited research preview.”

A terminal session with Anthropic’s Claude Code Security. (Anthropic)

The result of over a year of cybersecurity research, Claude Code Security does not merely police code made with Claude Code. Anthropic has used the tool to find hundreds of vulnerabilities “that had gone undetected for decades, despite years of expert review.”
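For a sense of the category, here is a textbook illustration (hypothetical, and not drawn from Anthropic’s findings) of the kind of flaw such scanners hunt for and the patch they might propose: a SQL query built by splicing user input into a string, versus its parameterized replacement.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input is spliced into the SQL string,
    # so an input like "x' OR '1'='1" matches every row in the table.
    cur = conn.execute(f"SELECT name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Proposed fix: a parameterized query treats the input as data, not SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that literal name
```

The unsafe variant is exactly the sort of pattern that can sit in a codebase for years, since it works correctly on benign input.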


Likewise, OpenAI in October unveiled Aardvark, what the firm calls an “agentic security researcher powered by GPT‑5.” In private beta at the moment, Aardvark undertakes the same kind of automatic code scanning as that promised by Anthropic. “Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, how they might be exploited, and proposing fixes,” said OpenAI.

How OpenAI’s Aardvark works. (OpenAI)

Three weeks before Aardvark’s launch, Google’s DeepMind research unit unveiled CodeMender, which the firm called “a new AI-powered agent that improves code security automatically.”

Like Anthropic’s tool, CodeMender is meant not simply to secure Google creations but to be a broad security tool. In six months of development, DeepMind noted, CodeMender had “already upstreamed 72 security fixes to open-source projects, including some as large as 4.5 million lines of code.”

Unlike Anthropic and OpenAI, DeepMind emphasizes not only proposing fixes but also applying them to code automatically. So far, the program is used only by DeepMind researchers, and the firm noted that “Currently, all patches generated by CodeMender are reviewed by human researchers before they’re submitted upstream.”

How Google DeepMind’s CodeMender works. (DeepMind)

All three offerings, most observers agree, immediately threaten the role of tools in categories such as ‘AppSec,’ ‘Software Composition Analysis,’ and ‘Static Application Security Testing.’ That territory covers companies and tools such as Snyk, JFrog, Mend, GitHub Dependabot, Semgrep, Sonatype, Checkmarx, and Veracode.

Claude Code Security’s introduction “drove renewed weakness across high-growth software names, particularly in observability and cloud security,” wrote William Power, a software analyst with investment firm R.W. Baird & Co.

Also: Why enterprise AI agents could become the ultimate insider threat

It’s reasonable to assume, as Anthropic, OpenAI, and DeepMind emphasize, that you will want security tools from the same vendors whose platforms generate the LLM-based software that is increasingly displacing traditional packaged applications.

The technology has the added appeal that it’s integrated into these companies’ coding platforms. Claude Code Security and Aardvark are already integrated, in preview form, into the Claude Code and OpenAI Codex tools. While CodeMender is still a research project, it’s clear that at some point it could be part of Google’s AI Studio development tool for Gemini, Imagen, and its other models.

A problem bigger than a single tool

However useful those tools prove themselves, cybersecurity is too broad a field, and the problem is too great in scope and too profound in its root causes, for code-scanning tools to make AI outputs safe. 

Even within the realm of scanning source code, analyzing issues, and patching or redesigning, the problem is larger than any single piece of source code. Modern software ships as what the field calls an “artifact”: a composition of numerous files from many sources. A given program includes libraries, frameworks, and other elements that must all perform reliably together.

In a recent blog post, JFrog’s CTO and co-founder, Yoav Landman, explained that, “Code is no longer the final product. It is an intermediate step. The real output — the thing that gets shipped, deployed, and executed — is a binary artifact: A container image. A package. A library. A compiled release.” 

Also: Rolling out AI? 5 security tactics your business can’t get wrong – and why

Within the broader realm of technology, scanning and fixing code is a small portion of what cybersecurity firms, such as Palo Alto, Zscaler, and Check Point, do, or what Dynatrace, Splunk, and Datadog do in observability. 

Firewalls operate at a more basic level than the applications they protect: their role is to secure the perimeter of a computer network and keep out bad actors before they can get near vulnerable code. So-called endpoint security tools similarly ensure that compromised host computers do not become launch pads for attack. Meanwhile, “Secure Access Service Edge” (SASE) tools are cloud-based software that identify and authenticate users on a network so that only the right parties interact with programs.

None of those issues is resolved by having less buggy source code. Tools such as “Security Information and Event Management” (SIEM) sit above the network and the apps. These tools tell a security professional what is happening across a computer fleet in real time. 

While it is nice to fix code before it ships, SIEM does things that code scanning never will: it surfaces events as they develop that demand urgent attention because they are already causing damage. A buggy source file can wait, and probably should. When something potentially catastrophic is happening across an entire computer network, time is of the essence.
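To make the distinction concrete, here is a minimal, hypothetical sketch (not any vendor’s product) of the kind of real-time correlation a SIEM performs and a code scanner does not: flagging a burst of failed logins on a host as it happens.

```python
from collections import defaultdict, deque

# Toy rule, purely illustrative: alert on any host with 3 or more
# failed logins inside a 60-second sliding window.
WINDOW, THRESHOLD = 60, 3

def detect_bursts(events):
    """events: iterable of (timestamp_sec, host, action) tuples.
    Yields (timestamp, host, failure_count) alerts as they occur."""
    recent = defaultdict(deque)  # per-host timestamps of recent failures
    for ts, host, action in events:
        if action != "login_failed":
            continue
        q = recent[host]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # drop failures outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            yield (ts, host, len(q))

events = [
    (0, "db01", "login_failed"),
    (10, "db01", "login_failed"),
    (20, "web01", "login_ok"),
    (30, "db01", "login_failed"),   # third failure within 60 s -> alert
    (500, "db01", "login_failed"),  # window has reset, no alert
]
alerts = list(detect_bursts(events))
print(alerts)  # [(30, 'db01', 3)]
```

A real SIEM correlates millions of such events across logs, networks, and identities; the point is that the signal exists only at runtime, where no amount of pre-ship code scanning can see it.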

Also: AI is quietly poisoning itself and pushing models toward collapse – but there’s a cure

The companies selling SIEM, such as Palo Alto and Zscaler, are employing AI to speed up the work that security professionals do. However, software won’t replace the “throat to choke” when things are going wrong. Security vendors exist because they have people who pick up the phone in the middle of the night and work against the clock to find and fix issues that are larger than a single piece of bad code. 

Anthropic and OpenAI are not generally known for picking up the phone, although Google’s Cloud unit can offer its own security operations as an additional hand. 

AI, heal thyself 

On a more profound level, recent research has shown that agentic systems, the frontier of AI, are themselves plagued with potentially catastrophic engineering and design faults.

Researchers at MIT last week explained that numerous commercially shipping AI agent systems lack such basic features as published security audits or a means to shut down rogue agents. 

Also: AI agents are fast, loose, and out of control, MIT study finds

Researchers led by Northeastern University recently revealed the results of extensive red-team efforts where multiple AI agents interoperate, mostly without a person in the loop. 

They found “chaos” ensued: bots trying to shut down other bots; bots that “shared” malicious code with one another to expand the “threat surface” of cyber risk; and bots that mutually reinforced bad security practices.

One way to deal with that chaos is to build new AI training data sets gathered in the wild. Software and services firm Innodata is one vendor helping the giants of AI to do that.

“The adversaries are extremely creative, and they’re coming up with things which the models that have been trained in lab environments have never seen before,” Jack Abuhoff, Innodata’s CEO, told ZDNET. “What do you do about that? You need high-quality, semantically diverse, scalable adversarial attacks with which to stress-test the agents.”

Also: Destroyed servers and DoS attacks: What can happen when OpenClaw AI agents interact

Because AI and agents have their own faults, one stock analyst at Barclays Bank who covers the cybersecurity vendors, Saket Kalia, mused recently, “If the code developer is offering the code security tool, is that like the fox guarding the hen house?” 

Using AI to improve code

AI will inevitably be used to help fix code. The biggest contribution that Claude Code Security, Aardvark, and CodeMender can offer is not to magically solve cybersecurity, but to reduce the incredible number of avoidable software failures. 

In a November article in IEEE Spectrum, titled “Trillions spent and big software projects are still failing,” long-time software chronicler Robert N. Charette pointed out that $5.6 trillion is spent annually on IT, yet “software success rates have not markedly improved in the past two decades.”

Even for AI, it’s a grand challenge. As Charette wrote, “there are hard limits on what AI can bring to the table” to solve software engineering. “As software practitioners know, IT projects suffer from enough management hallucinations and delusions without AI adding to them.”







Recent Reviews


Spotify aims to provide a consistent listening experience that uses minimal data. As a result, your audio quality might be less than ideal, especially if you’re using a pair of high-fidelity headphones or high-end speakers. Here’s how to fix that.

Switch audio streaming quality to Very high or Lossless

The default audio streaming quality in both the mobile and desktop Spotify apps is Automatic, which usually keeps playback at the Normal setting of just 96 Kbps. Even though Spotify uses the Ogg Vorbis codec, which is superior to MP3 at a given bitrate, OGG files at 96 Kbps exhibit slight (but noticeable) digital noise, poor bass detail, dull treble, and a narrow soundstage.

Even worse, Spotify is aggressive about adjusting the automatic bitrate. Even though 4G is more than fast enough to stream high-quality OGG files, even with a weak signal, Spotify may still drop the quality to Low, which has a bitrate of just 24 Kbps. That is a drop you will notice even on a pair of bottom-of-the-barrel headphones.

To rectify this, open the Spotify app, tap your user image, open “Settings and privacy,” and tap the “Media Quality” menu. Once there, set Wi-Fi streaming quality and cellular streaming quality to “Very high” or “Lossless.”

I recommend setting cellular streaming quality to Very high and reserving Lossless for Wi-Fi, since lossless streaming is very data-intensive. One hour of streaming lossless files can take up to 1GB of data, as well as a good chunk of your phone’s storage, because Spotify caches files you stream frequently. Besides, you’ll struggle to notice the difference unless you’re listening on a wired pair of high-end headphones or speakers; a Bluetooth connection simply doesn’t have the bandwidth to convey the full fidelity of Spotify’s lossless audio.

You might opt for High quality if you have a capped data plan, but I recommend doing so only if you stream hours upon hours’ worth of music every single day over a cellular network. For instance, I burn through about 8 GB of data per month on average while streaming about two hours of very high-quality music over a cellular network each day.
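Those figures hold up as back-of-the-envelope arithmetic. The sketch below assumes nominal bitrates (320 Kbps OGG for Spotify’s Very high tier; CD quality of 1,411 Kbps as a floor for lossless, which varies in practice):

```python
# Back-of-the-envelope check on the streaming data-usage figures above.
# Bitrates are nominal assumptions, not exact Spotify measurements.

def gb_per_hour(kbps):
    # kilobits/sec -> bytes/sec -> bytes/hour -> gigabytes/hour
    return kbps * 1000 / 8 * 3600 / 1e9

def monthly_gb(kbps, hours_per_day, days=30):
    return gb_per_hour(kbps) * hours_per_day * days

print(round(monthly_gb(320, 2), 1))  # 8.6 -- matches ~8 GB/month at 2 h/day
print(round(gb_per_hour(1411), 2))   # 0.63 -- CD-quality lossless per hour;
                                     # higher-resolution lossless nears 1 GB/h
```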


Set audio download quality to Very high or Lossless

If you tend to download songs and albums for offline listening, you should also set the audio download quality to “Very high” or “Lossless.” This setting is located just under the audio streaming quality section.

The audio download quality menu in Spotify's mobile app.

If you’ve got enough free storage on your phone, opt for Lossless; if you’d rather save storage space, set it to Very high. You’ll hardly hear the difference, but lossless files are about five times larger than the 320 Kbps OGG files Spotify serves at the Very high setting, and they can quickly fill up your phone’s storage.

Adjust video streaming quality at your discretion

The last section of the Media quality menu is Video streaming quality. This sets the quality of video podcasts and the music videos available for certain songs. Since I care about neither, I set it to “Very high” on Wi-Fi and “Normal” on cellular, but you should tweak the two options at your discretion, because music videos sound notably better at higher quality levels.

If you often watch videos over cellular and have unlimited data, feel free to set video quality to Very high.

Make sure Data Saver mode is disabled

Even if your audio quality is set to Very high or Lossless, Spotify will switch to low-quality streaming if the app’s Data saver mode is enabled. This option is located in the Data saving and offline menu. Open the menu, then set it to “Always off,” or choose “Automatic” to have Spotify’s Data Saver mode kick in alongside your phone’s Data Saver mode.

You can also enable volume normalization and play around with the built-in equalizer


Last but not least, there are two additional features you can play with to improve your listening experience. The first is volume normalization, which sets the same loudness for every track you’re listening to. This can be handy because different albums are mastered at different loudness levels, with newer music usually being louder.

Since I’m an album-oriented listener, I keep the option disabled. I can just play an album and set the audio volume accordingly, and I don’t really mind louder songs when listening to playlists, artists, or song radios.

But if you can’t stand one song being quiet and the next rattling the windows, visit the Playback menu, enable “Volume normalization,” and set it to “Quiet” or “Normal.” The “Loud” option can audibly compress a track’s dynamic range, and neither Spotify nor I recommend using it. “Quiet” and “Normal” also adjust the level of each song’s master recording, but their compression is much lighter and extremely hard to notice.

Before I end this, I should also mention that you can access the equalizer directly from the Spotify app, where you can fine-tune your music listening experience or pick one of the available equalizer presets. If your phone has a built-in equalizer, Spotify will open it; if it doesn’t, you can use Spotify’s. On my phone (a Samsung Galaxy S21 FE), I can only use One UI’s built-in equalizer.

To open the equalizer, open “Playback,” then hit the “Equalizer” button. Now you can equalize your audio to your heart’s content.


Adjusting just a few settings can have a drastic impact on your Spotify listening experience. If you aren’t satisfied with Spotify’s sound quality, make sure to adjust the audio before jumping ship. You should also check the sound quality settings from time to time, as Spotify can reset them during app updates.



