Hannes Stärk, the fourth-year PhD student at CSAIL who built BoltzGen, says the model works because it actually learns—drawing inferences from the data it is trained with and then producing novel ideas inspired by that data. With machine learning, you want the model to generalize from the data you use to train it, says Stärk, who created BoltzGen over seven months, often working up to 12 hours a day. “Because otherwise,” he says, “your solution is already in your training data.” Stärk has also assembled a network of over 30 scientists both within and beyond MIT to explore the design and applications of molecular binders for use in drug development, metabolomics, and structural biology as well as in treating cancer, autoimmune diseases, and genetic diseases. “It’s nice to have one model that can do all of this,” he says. Training across all these areas also makes the model better at generalizing.

Beyond drug discovery

As labs working in drug development continue to reap benefits from AI, other researchers across the Institute are busy applying existing AI tools or, more often, developing their own models for use in myriad disciplines and applications. A cross-disciplinary group involving the Department of Electrical Engineering and Computer Science (EECS), CSAIL, and Mass General Hospital has launched MultiverSeg, a tool that quickly annotates areas of interest in medical images and could help scientists develop new treatments and map disease progression. MIT researchers are also designing and running AI-directed automated laboratories to accelerate and refine the process of discovering new components for sustainable materials and solar panels. And Ahmed’s MechE group is developing AI models to do such things as help automakers design high-performance vehicles or determine whether a large shipping vessel can be considered seaworthy. Ahmed also teaches a course titled AI and Machine Learning for Engineering Design. First offered in 2021, it attracts not only mechanical, civil, and environmental engineers but students from aero-astro, Sloan, and more.

Sara Beery

MIT TECHNOLOGY REVIEW

“The goal is to tap into diverse types of raw data and turn that into something that helps us understand what is putting species at risk.”

Sara Beery

Meanwhile, Priya Donti, an assistant professor of EECS and a PI at the Laboratory for Information & Decision Systems (LIDS), has developed AI-enabled optimization approaches to help schedule power generation resources on power grids. The machine-learning tools her group builds will help utility operators respond to many inevitable grid issues. “The big challenge is that on a power grid, you need to maintain this exact balance between the amount of power you’re producing and putting into the grid and the amount that you’re taking out on the other side,” she explains. “When you have a lot of variation from solar, wind, and other sources of power whose output varies based on the weather, you have to coordinate the grid much more tightly in order to maintain that balance.” Information about the physics of how power grids work is embedded in Donti’s AI model, so it functions and reacts much as a real grid would.  
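Donti’s production models are far more sophisticated than anything that fits here, but the hard constraint she describes can be illustrated with a toy sketch. All numbers and names below are invented for illustration: a model proposes per-plant output, and a projection step rescales it so total generation exactly matches demand.

```python
import numpy as np

def project_to_balance(raw_output_mw, demand_mw):
    """Scale a proposed dispatch so total generation equals total demand.

    A real grid model must respect many more constraints (line limits,
    generator ramp rates, reserves); this toy projection enforces only
    the supply-demand balance described in the text.
    """
    total = raw_output_mw.sum()
    if total <= 0:
        raise ValueError("proposed dispatch must have positive total output")
    return raw_output_mw * (demand_mw / total)

# A model proposes output for three plants; actual demand is 100 MW.
proposal = np.array([40.0, 35.0, 35.0])   # sums to 110 MW
dispatch = project_to_balance(proposal, demand_mw=100.0)
print(round(dispatch.sum(), 6))  # 100.0
```

Embedding a correction step like this inside the model itself is one way to make a learned predictor behave the way a physical grid must.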

MIT researchers are even applying AI tools to explore and analyze the natural world. Sara Beery, an assistant professor of EECS who specializes in AI and decision-making, develops AI methods that discover and dig into ecological data collected by a wide range of remote sensing technologies to analyze and predict how species and ecosystems are changing around the globe. These technologies enable Beery and her colleagues to gather data on a far greater number of endangered species than ever before, and at an unprecedented scale. Historically, most ecological research has focused on collecting incredibly rich data about single species in really small regions, she says, but “we’ve realized that’s not sufficient.” Information gleaned from, say, a small part of one river ecosystem will not help us understand or prevent what she calls “the exponential increase in species extinction rates that we’re currently facing.” Already, Beery says, “we’re using multimodal AI to enable experts to quickly search massive repositories of image data, to discover data points that were previously very difficult to find.” But she says the goal is to be able to readily tap into diverse types of raw data—from satellite and bioacoustic sensor data to camera images and DNA—and “actually turn that into some sort of scientific insight, something that helps us understand what is putting species at risk.”
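The article doesn’t describe Beery’s tooling in detail, but the kind of multimodal search she mentions is typically built on shared embeddings: a query and every image are mapped into one vector space, and the nearest neighbors by cosine similarity are returned. A minimal sketch with stand-in random vectors (no real encoder is used here; the repository size and dimensions are invented):

```python
import numpy as np

def cosine_similarity(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def search(query_vec, image_vecs, top_k=5):
    """Rank images by cosine similarity to the query embedding."""
    scores = np.array([cosine_similarity(query_vec, v) for v in image_vecs])
    return list(np.argsort(scores)[::-1][:top_k])

# Stand-in embeddings; in practice a multimodal encoder would produce
# these from camera-trap images, bioacoustic clips, or a text query.
rng = np.random.default_rng(0)
image_vecs = rng.normal(size=(1000, 64))
# Simulate a query that closely matches image 42.
query_vec = image_vecs[42] + 0.1 * rng.normal(size=64)
top_hits = search(query_vec, image_vecs)
print(top_hits[0])  # 42
```

The same ranking logic works whether the query embedding comes from text, audio, or another image, which is what makes the approach “multimodal.”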

Mens et manus in AI

While some MIT researchers have successfully used AI to help invent technologies ranging from novel cancer therapies to safer high-performance automobiles, others are also using machine learning and other AI tools to help determine whether these technologies perform as promised—or can be produced successfully and economically at scale. Connor Coley, SM ’16, PhD ’19, an associate professor of chemical engineering and EECS, designs new molecules—and recipes for making new molecules, primarily small organic molecules—for potential use by pharmaceutical, agricultural, and other chemical companies. Coley, a former MIT Technology Review Innovators Under 35 honoree, has developed a “genetic” algorithm that uses biologically inspired processes including selection and mutation. This tool encodes potential polymer blends drawn from a large database of polymers into what is effectively a digital chromosome, which the algorithm then improves to generate the most promising material combinations.
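The source doesn’t publish Coley’s algorithm, but the chromosome-plus-selection-and-mutation scheme it describes can be sketched generically. Everything below (the polymer names, the fitness function, the population sizes) is invented for illustration:

```python
import random

random.seed(0)

POLYMERS = ["PEO", "PVDF", "PMMA", "PAN"]  # illustrative candidates

def random_chromosome():
    """A 'chromosome': blend fractions for each polymer, summing to 1."""
    w = [random.random() for _ in POLYMERS]
    total = sum(w)
    return [x / total for x in w]

def fitness(chrom):
    # Placeholder property model; a real one would predict, e.g., ionic
    # conductivity of the blend. Here the optimum is a known target blend.
    target = [0.5, 0.3, 0.1, 0.1]
    return -sum((a - b) ** 2 for a, b in zip(chrom, target))

def mutate(chrom, sigma=0.05):
    """Perturb the fractions with Gaussian noise, then renormalize."""
    w = [max(1e-6, x + random.gauss(0, sigma)) for x in chrom]
    total = sum(w)
    return [x / total for x in w]

def evolve(generations=50, pop_size=30, keep=10):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # selection
        parents = pop[:keep]
        children = [mutate(random.choice(parents))     # mutation
                    for _ in range(pop_size - keep)]
        pop = parents + children
    return max(pop, key=fitness)

best_blend = evolve()
```

After a few dozen generations, the best surviving chromosome sits close to the target blend, which is the sense in which the algorithm “improves” its digital chromosomes.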

Working at the intersection of chemistry and computer science, Coley believes AI could one day help his lab discover polymer blends that would lead to improved battery electrolytes and tailored nanoparticles for safer drug delivery. He and his lab also work to develop machine-learning tools that streamline the discovery and production processes. “If you want AI to be the brain behind some of the science you’re doing, you need the hands as well,” says Coley, who was one of the first MIT faculty members hired into the MIT Schwarzman College of Computing. He and his group have coupled a robotic liquid-handling platform with an optimization algorithm. In the project designed to look for optimal polymer blends, the autonomous system not only chooses which polymer solutions to test but also performs the physical testing. The system, which can generate and test 700 new polymer blends in a day, has identified one that performed 18% better than any of its components.
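The platform’s actual optimizer isn’t specified in the article, but the propose-test-update loop such a system runs can be sketched as follows. A stand-in function plays the role of the robot’s physical measurement; the peak location and noise level are invented:

```python
import random

random.seed(1)

def robot_measure(ratio):
    """Stand-in for the physical test the robotic platform performs.

    Pretend the measured property peaks when the blend is 70% polymer A;
    real measurements are noisy, so a little noise is added.
    """
    return 1.0 - (ratio - 0.7) ** 2 + random.gauss(0, 0.005)

def closed_loop(n_experiments=25):
    """Propose a blend, have the 'robot' test it, keep the best result."""
    best_ratio = 0.5
    best_score = robot_measure(best_ratio)
    step = 0.2
    for _ in range(n_experiments):
        candidate = min(1.0, max(0.0, best_ratio + random.uniform(-step, step)))
        score = robot_measure(candidate)   # robot runs the physical test
        if score > best_score:             # optimizer keeps the improvement
            best_ratio, best_score = candidate, score
            step *= 0.9                    # narrow the search as it converges
    return best_ratio, best_score

ratio, score = closed_loop()
```

The key design point the paragraph describes is that the same system both chooses the next experiment and executes it, so the loop can run unattended at a throughput (hundreds of blends a day) no human team could match.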

Systems with a similar level of autonomy could also have a big impact on early-stage drug discovery. One effect, he observes, should be to reduce the time it takes to advance a drug from the lab into clinical trials. But the real question, he says, is “What might we be able to do that we just couldn’t do with any reasonable amount of resources previously?” 

Alexander Siemenn, PhD ’25, also uses AI both to search for new materials and to control robots that test the physical properties of those materials. For his doctoral thesis, Siemenn built from scratch a fully autonomous AI-driven robotic laboratory to discover and test sustainable high-performance materials for solar panels. The system incorporates computer vision, machine learning, and an optimization algorithm and runs 24 hours a day.






After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also lets you put apps in floating windows, which minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older notification feature of the same name, but now it works for full apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. In the menu, select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the Home gesture—you won’t actually go to the home screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the bubble entirely, simply drag the minimized bubble to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles let you use two apps at essentially full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your eligible Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.



