Hannes Stärk, a fourth-year PhD student at CSAIL who built BoltzGen, says the model works because it actually learns—drawing inferences from the data it is trained with and then producing novel ideas inspired by that data. With machine learning, you want the model to generalize from the data you use to train it, says Stärk, who created BoltzGen over seven months, often working up to 12 hours a day. “Because otherwise,” he says, “your solution is already in your training data.” Stärk has also assembled a network of over 30 scientists both within and beyond MIT to explore the design and applications of molecular binders for use in drug development, metabolomics, and structural biology as well as in treating cancer, autoimmune diseases, and genetic diseases. “It’s nice to have one model that can do all of this,” he says. Training across all these areas also makes the model better at generalizing.

Beyond drug discovery

As labs working in drug development continue to reap benefits from AI, other researchers across the Institute are busy applying existing AI tools or, more often, developing their own models for use in myriad disciplines and applications. A cross-disciplinary group involving the Department of Electrical Engineering and Computer Science (EECS), CSAIL, and Mass General Hospital has launched MultiverSeg, a tool that quickly annotates areas of interest in medical images and could help scientists develop new treatments and map disease progression. MIT researchers are also designing and running AI-directed automated laboratories to accelerate and refine the process of discovering new components for sustainable materials and solar panels. And Ahmed’s MechE group is developing AI models to do such things as help automakers design high-performance vehicles or determine whether a large shipping vessel can be considered seaworthy. Ahmed also teaches a course titled AI and Machine Learning for Engineering Design. First offered in 2021, it attracts not only mechanical, civil, and environmental engineers but also students from aero-astro, Sloan, and more.


Meanwhile, Priya Donti, an assistant professor of EECS and a PI at the Laboratory for Information & Decision Systems (LIDS), has developed AI-enabled optimization approaches to help schedule power generation resources on power grids. The machine-learning tools her group builds will help utility operators respond to many inevitable grid issues. “The big challenge is that on a power grid, you need to maintain this exact balance between the amount of power you’re producing and putting into the grid and the amount that you’re taking out on the other side,” she explains. “When you have a lot of variation from solar, wind, and other sources of power whose output varies based on the weather, you have to coordinate the grid much more tightly in order to maintain that balance.” Information about the physics of how power grids work is embedded in Donti’s AI model, so it functions and reacts much as a real grid would.  
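Donti’s actual models aren’t spelled out here, but the core idea (embedding the power-balance constraint in the learning objective) can be sketched in a few lines of Python. Everything below, from the generator costs to the penalty weight and the function names, is illustrative rather than her group’s code; it shows one common way to make a model’s output respect the physics that supply must equal load.

```python
import numpy as np

# A minimal, hypothetical sketch of "physics in the objective": dispatch
# three generators so that total output tracks forecast demand. The
# power-balance constraint (supply must equal load) enters as a heavily
# weighted penalty term -- one simple way physical knowledge can be
# embedded in a grid model. All numbers and names are illustrative.

def optimize_schedule(demand, cost_per_mw, balance_weight=1e3,
                      steps=5000, lr=1e-4):
    gen = np.zeros_like(cost_per_mw)
    for _ in range(steps):
        imbalance = gen.sum() - demand                # physics: supply == load
        # Analytic gradient of sum(c * g^2) + w * imbalance^2
        grad = 2 * cost_per_mw * gen + 2 * balance_weight * imbalance
        gen = np.clip(gen - lr * grad, 0.0, None)     # output can't go negative
    return gen

cost_per_mw = np.array([0.8, 1.0, 1.5])   # cheaper plants get dispatched harder
schedule = optimize_schedule(demand=120.0, cost_per_mw=cost_per_mw)
print(schedule.round(1), schedule.sum().round(1))     # total stays near 120 MW
```

In practice, physics-informed grid models often enforce the balance constraint exactly, through a projection or completion step rather than a soft penalty, but the penalty form is the easiest version to see at a glance.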

MIT researchers are even applying AI tools to explore and analyze the natural world. Sara Beery, an assistant professor of EECS who specializes in AI and decision-making, develops AI methods that discover and dig into ecological data collected by a wide range of remote sensing technologies to analyze and predict how species and ecosystems are changing around the globe. These technologies enable Beery and her colleagues to gather data on a far greater number of endangered species than ever before, and at an unprecedented scale. Historically, most ecological research has focused on collecting incredibly rich data about single species in really small regions, she says, but “we’ve realized that’s not sufficient.” Information gleaned from, say, a small part of one river ecosystem will not help us understand or prevent what she calls “the exponential increase in species extinction rates that we’re currently facing.” Already, Beery says, “we’re using multimodal AI to enable experts to quickly search massive repositories of image data, to discover data points that were previously very difficult to find.” But she says the goal is to be able to readily tap into diverse types of raw data—from satellite and bioacoustic sensor data to camera images and DNA—and “actually turn that into some sort of scientific insight, something that helps us understand what is putting species at risk.”

Mens et manus in AI

While some MIT researchers have successfully used AI to help invent technologies ranging from novel cancer therapies to safer high-performance automobiles, others are also using machine learning and other AI tools to help determine whether these technologies perform as promised—or can be produced successfully and economically at scale. Connor Coley, SM ’16, PhD ’19, an associate professor of chemical engineering and EECS, designs new molecules—and recipes for making new molecules, primarily small organic molecules—for potential use by pharmaceutical, agricultural, and other chemical companies. Coley, a former MIT Technology Review Innovators Under 35 honoree, has developed a “genetic” algorithm that uses biologically inspired processes including selection and mutation. This tool encodes potential polymer blends drawn from a large database of polymers into what is effectively a digital chromosome, which the algorithm then improves to generate the most promising material combinations.
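The article doesn’t include Coley’s code, but the loop it describes (encode each blend as a chromosome, score it, keep the fittest, mutate the survivors) is the textbook genetic-algorithm pattern. Here is a minimal, hypothetical sketch; the fitness function is a toy stand-in for a real property predictor, and none of the names come from Coley’s lab.

```python
import random

# Toy genetic algorithm over polymer blends. A "chromosome" is a list of
# weight fractions, one per polymer in a small library; selection keeps
# the fittest blends and mutation perturbs them. The fitness function is
# a placeholder for a real property model.

N_POLYMERS = 8  # size of the toy polymer library

def random_blend():
    """A chromosome: normalized weight fractions over the library."""
    weights = [random.random() for _ in range(N_POLYMERS)]
    total = sum(weights)
    return [w / total for w in weights]

def fitness(blend):
    """Toy objective that rewards blends dominated by a few components."""
    return sum(w ** 2 for w in blend)

def mutate(blend, rate=0.2):
    """Perturb a few fractions, then renormalize so they still sum to 1."""
    child = [w + random.gauss(0, 0.1) if random.random() < rate else w
             for w in blend]
    child = [max(w, 0.0) for w in child]
    total = sum(child) or 1.0
    return [w / total for w in child]

def evolve(pop_size=40, generations=50, elite=10):
    population = [random_blend() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)    # selection
        parents = population[:elite]                  # keep the fittest
        children = [mutate(random.choice(parents))    # mutation
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print([round(w, 2) for w in best], round(fitness(best), 3))
```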

Working at the intersection of chemistry and computer science, Coley believes AI could one day help his lab discover polymer blends that would lead to improved battery electrolytes and tailored nanoparticles for safer drug delivery. He and his lab also work to develop machine-learning tools that streamline the discovery and production processes. “If you want AI to be the brain behind some of the science you’re doing, you need the hands as well,” says Coley, who was one of the first MIT faculty members hired into the MIT Schwarzman College of Computing. He and his group have coupled a robotic liquid-handling platform with an optimization algorithm. In the project designed to look for optimal polymer blends, the autonomous system not only chooses which polymer solutions to test but also performs the physical testing. The system, which can generate and test 700 new polymer blends in a day, has identified one that performed 18% better than any of its components.
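Again as an illustration rather than the lab’s implementation, the closed loop that couples an optimizer to a robotic tester can be sketched as a propose-test-update cycle. In the toy version below, run_robot_assay stands in for the physical measurement and the proposal step is a crude local search; real systems typically pick the next batch with something smarter, such as Bayesian optimization.

```python
import random

# Hypothetical propose-test-update loop: an optimizer suggests candidate
# blends, a robotic platform "measures" them (stubbed out here), and the
# results feed back into the next batch of proposals. The two-component
# blend and the assay function are toys, not the lab's actual setup.

def run_robot_assay(blend):
    """Stand-in for the physical test the liquid-handling robot performs."""
    x, y = blend
    return -(x - 0.3) ** 2 - (y - 0.7) ** 2 + random.gauss(0, 0.01)

def propose_candidates(center, n=8, spread=0.1):
    """Sample new blends near the current best (a crude local search)."""
    return [tuple(min(max(c + random.gauss(0, spread), 0.0), 1.0)
                  for c in center)
            for _ in range(n)]

best = (0.5, 0.5)
best_score = run_robot_assay(best)
for _ in range(25):                          # each pass = one robot batch
    for blend in propose_candidates(best):
        score = run_robot_assay(blend)       # the robot performs the test
        if score > best_score:               # feedback closes the loop
            best, best_score = blend, score

print(tuple(round(c, 2) for c in best), round(best_score, 3))
```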

Systems with a similar level of autonomy could also have a big impact on early-stage drug discovery. One effect, he observes, should be to reduce the time it takes to advance a drug from the lab into clinical trials. But the real question, he says, is “What might we be able to do that we just couldn’t do with any reasonable amount of resources previously?” 

Alexander Siemenn, PhD ’25, also uses AI both to search for new materials and to control robots that test the physical properties of those materials. For his doctoral thesis, Siemenn built from scratch a fully autonomous AI-driven robotic laboratory to discover and test sustainable high-performance materials for solar panels. The system incorporates computer vision, machine learning, and an optimization algorithm and runs 24 hours a day.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green now leads the Magnificent Seven of the tech world, having surpassed the other six in just a few short years.

The company reached these incredible heights through decades of smart planning and well-timed moves, the latest being its decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they don’t just affect RAM and storage products. They hit every company whose products contain memory or storage—including graphics card makers.

NVIDIA sells its partners bundles of GPU chips and memory, which the partners then solder onto PCBs and pair with cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants for a chunk of TSMC’s limited production capacity to make its GPU dies; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

A company as large as NVIDIA certainly has long-term contracts that lock in stable memory prices, but those contracts don’t last forever. Judging by the GPU price surge that began at the start of 2026, with gaming graphics cards still overpriced, the company has likely already had to sign new ones at far less favorable rates.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which, per NVIDIA’s fiscal calendar, began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total revenue so far. The data center division, on the other hand, has generated almost 90%. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in more than ten times the revenue of its gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming cards. It’s far more profitable to put every memory chip it can get its hands on into AI GPU racks and keep collecting mountains of cash from the AI behemoths buying them.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the 16GB RTX 5070 Ti was slated for the same 24GB treatment; and an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the refresh on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but the lineup missed the show, and it now looks like NVIDIA has delayed it indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?


The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But the chairman of SK Hynix (one of the world’s three largest memory manufacturers) recently warned that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and they could miss even that window if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 series) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but no one likes it right now, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will keep chewing through any game for the foreseeable future.


