Man who threw Molotov cocktail at Sam Altman’s home and carried AI CEO kill list pleads not guilty to attempted murder


TL;DR

The 20-year-old who threw a Molotov cocktail at Sam Altman’s home and carried a kill list of AI CEOs pleaded not guilty to attempted murder. His defence called it a property crime. The state charges carry up to life in prison.

Daniel Moreno-Gama, the 20-year-old accused of throwing a Molotov cocktail at the San Francisco home of OpenAI CEO Sam Altman and then walking three miles to OpenAI’s headquarters to threaten to burn the building down, pleaded not guilty on Tuesday to two counts of attempted murder and nine other state charges. Moreno-Gama, wearing an orange jail jumpsuit, did not speak during the brief arraignment in San Francisco Superior Court. His attorney entered the pleas on his behalf and requested a mental health evaluation, which the judge granted.

The defence described the incident as “a property crime, at best” and accused prosecutors of trying to curry favour with Altman. The state charges carry penalties ranging from 19 years to life in prison. Federal prosecutors have filed separate charges for possession of an unregistered firearm and attempted destruction of property by means of explosives, carrying a combined maximum of 30 additional years.

The case has become the most visible expression of a backlash against artificial intelligence that has escalated from protest signs to physical violence in less than two years.

The attack

Police arrested Moreno-Gama in the early hours of 10 April after he allegedly hurled a lit incendiary device at the driveway gate of Altman’s San Francisco residence at approximately 4 a.m., setting the gate alight. No one was injured. Altman was home at the time, and a security guard was stationed at the property. Moreno-Gama fled on foot and, less than an hour later, arrived at OpenAI’s offices on Third Street, where he allegedly attempted to break the glass doors with a chair and threatened to burn down the building. San Francisco police officers who responded found him in possession of additional incendiary devices, a jug of kerosene, a blue lighter, and a document entitled “Your Last Warning.”

The document, written by Moreno-Gama according to court filings, advocated for the killing of AI company executives and their investors. It listed names and addresses that purported to belong to multiple CEOs and investors in the AI industry. Moreno-Gama described artificial intelligence as a threat to humanity’s survival and warned of “impending extinction,” according to prosecutors. He had travelled from Spring, Texas, a suburb of Houston, where he worked part-time at a pizzeria and attended community college. The FBI later conducted searches at his Texas home.

The second attack

Two days after the Molotov cocktail, gunshots were fired at Altman’s home from a passing car. A 25-year-old and a 23-year-old were arrested. The San Francisco District Attorney’s office said it had no evidence that the two incidents were connected, but the proximity in time and target made the distinction feel academic. In the space of 48 hours, the home of the most prominent figure in artificial intelligence was attacked twice by unrelated parties. The security apparatus around AI executives, already substantial after years of escalating threats, expanded further. Altman has not commented publicly on the attacks.

Moreno-Gama throwing a Molotov cocktail at OpenAI CEO Sam Altman’s residence. Source: Office of Public Affairs, U.S. Department of Justice

The threats have not been limited to Altman. In the months preceding the attack, AI leaders and European policymakers received packages containing six-fingered gloves, a reference to generative AI’s early inability to render hands correctly, in what was interpreted as a warning. In November 2025, OpenAI employees were told to shelter in place after a man threatened to attack staff at the company’s San Francisco offices. The cumulative effect has been to place the physical safety of AI executives and researchers onto the list of industry concerns alongside model alignment, regulatory compliance, and competitive positioning.

The context

Moreno-Gama’s manifesto reflects a strain of anti-AI sentiment that has grown from fringe online communities into a visible political and social movement. Between April and June 2025, 20 proposed data centre projects worth a combined 98 billion dollars were blocked or delayed by local resistance. At least 142 activist groups across 24 US states are now organising to oppose data centre construction. In February 2026, hundreds marched past the London headquarters of OpenAI, Google DeepMind, and Meta in one of the largest anti-AI demonstrations to date. In Nepal, protesters set fire to a data centre in Kathmandu, disrupting internet access nationwide. Public polling consistently shows that a majority of Americans view AI’s trajectory with apprehension rather than optimism.

The anger has been compounded by OpenAI’s own safety failures. Altman apologised publicly after it emerged that OpenAI chose not to alert police when its systems flagged a ChatGPT user who subsequently carried out a school shooting in Tumbler Ridge, British Columbia, killing eight people and injuring 27. Approximately a dozen OpenAI employees had reviewed the flagged account, and some recommended reporting to law enforcement, but leadership overruled them. Seven families have separately sued OpenAI over ChatGPT acting as what their attorneys describe as a “suicide coach,” with documented deaths in Texas, Georgia, Florida, and Oregon. The gap between the industry’s safety rhetoric and its operational decisions has given the anti-AI movement a set of concrete grievances that extend beyond abstract fears of extinction.

The defence

Moreno-Gama’s attorney told the court that his client was experiencing a mental health crisis and that the charges were disproportionate. The defence’s characterisation of the attack as a property crime rests on the fact that the Molotov cocktail struck a gate, not a person, and that no one was injured. Prosecutors counter that the attempted murder charges are justified because Altman and his security guard were present and in danger, and that the kill list and the subsequent threat to OpenAI’s headquarters demonstrate premeditated intent that went beyond property damage. The judge scheduled a further hearing for later in May, pending the results of the mental health evaluation.

The federal charges add a separate dimension. Moreno-Gama was found in possession of an unregistered firearm and is charged with attempted destruction of property by means of explosives, a federal offence that carries up to 20 years. The dual prosecution, state and federal, reflects the seriousness with which authorities are treating threats against AI industry figures. The FBI’s involvement, including the search of Moreno-Gama’s Texas home, suggests investigators are examining whether the manifesto’s kill list represents a broader network of threats or a single individual’s escalation.

The question

Governments are responding to AI’s risks through regulation and enforcement campaigns, but the gap between policy response and public frustration is widening. In 2025, 1,208 AI-related bills were introduced across all 50 US states, the first year every state introduced at least one, and 145 were enacted into law. In the first two months of 2026 alone, 78 chatbot-specific safety bills were filed across 27 states. The legislative machinery is moving. It is not moving fast enough to address the grievances of people who believe that AI poses an existential threat to humanity and that the executives building it are complicit in that threat.

Moreno-Gama is 20 years old, works at a pizzeria, and attends community college in a suburb of Houston. He is not a figure of consequence in the AI industry or the anti-AI movement. He is a person who, according to prosecutors, became convinced that artificial intelligence would cause the extinction of the human race, compiled a list of the people he held responsible, travelled across the country, and attacked the home of the most prominent among them with a homemade firebomb. His defence says he was in a mental health crisis. The prosecution says he attempted murder. The jury will decide which characterisation is correct. What the case has already established, regardless of its outcome, is that the backlash against artificial intelligence has crossed the threshold from opposition to violence, and that the people building the technology now face a category of risk that no amount of model alignment or safety research can address. The threat is not from the AI. It is from the people who are afraid of it.





Recent Reviews



Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
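The frame-combining step described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the research team’s actual pipeline: it represents each earbud’s grayscale capture as a list of pixel rows and concatenates the two side by side, so a downstream model runs one inference on a single wider image instead of two separate ones.

```python
def combine_frames(left, right):
    """Concatenate two grayscale frames side by side.

    Each frame is a list of rows, and each row is a list of 0-255
    intensity values. Both frames must have the same height. The
    result is one wider frame, allowing a single model pass instead
    of two separate inferences.
    """
    if len(left) != len(right):
        raise ValueError("frames must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Two tiny 2x3 grayscale frames, one from each (hypothetical) earbud camera
left = [[10, 20, 30], [40, 50, 60]]
right = [[70, 80, 90], [100, 110, 120]]

combined = combine_frames(left, right)
# combined is a single 2x6 frame:
# [[10, 20, 30, 70, 80, 90], [40, 50, 60, 100, 110, 120]]
```

The gain reported in the article comes from amortising the model’s fixed per-inference cost across both views, which is why one combined frame responds in roughly half the time of two separate ones.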

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


