Intel has already started making chips for Apple, it seems, but not the most advanced kind


The Apple-Intel chip deal that everyone said would never happen is apparently happening, albeit with some important caveats that the breathless headlines have largely glossed over.

Analyst Ming-Chi Kuo suggests Apple has kicked off production of processors for lower-end iPhones, iPads, and Macs at Intel, running on its 18A-P process node with Foveros packaging. These are not the A-series chips powering the iPhone Pro or the M-series silicon inside a MacBook Pro. This is the legacy and mid-range stuff — the workhorses that ship in enormous volume but carry less prestige. The order mix is roughly 80% iPhone, which closely matches Apple’s device sales breakdown. That detail matters more than it might first appear.

This is really about TSMC

What Apple is doing here isn’t really about Intel. It’s about TSMC. For years, TSMC has been the single pipe through which virtually all of Apple’s silicon flows, and that pipe is getting increasingly crowded. AI and high-performance computing have become TSMC’s most lucrative markets, and advanced-node capacity — the foundry space where the most complex, most profitable chips are made — keeps tilting in that direction. Apple can see the writing on the wall. The company that once had TSMC’s undivided attention now has to share it with Nvidia, AMD, and a growing list of hyperscalers designing their own accelerators. Apple’s leverage is quietly eroding.

So Apple is doing what Apple does: planning several moves ahead. The Intel engagement reportedly began well before TSMC’s capacity crunch became acute, indicating it’s a methodically constructed hedge. By running three product lines at Intel simultaneously and allocating wafers that mirror its actual device mix, Apple isn’t just testing the waters. It’s essentially rehearsing what a full-scale Intel supply relationship would look like, stress-testing the collaboration across yield optimization, design feedback loops, and production adjustments. If Intel passes, Apple has a credible second source. If Intel stumbles, Apple has spent relatively little on the experiment.

A lifeline or a pressure cooker — Intel has to decide

For Intel, this is either a lifeline or a pressure cooker, depending on how you look at it. The strategic significance of landing Apple — even for mid-range chips — is hard to overstate. Apple’s manufacturing demands are notoriously exacting, its volumes are massive, and its products span enough of the market to give Intel’s foundry business something it desperately needs: a real, complex, high-stakes workout. The plan, as it stands, is small-scale testing through 2026, a ramp in 2027, continued growth into 2028, and then a natural decline in 2029 as the 18A-P generation ages out.

The catch is that Intel’s 2027 yield targets are set at just 50–60%. That is a starting point, not an achievement. And crucially, TSMC will still hold over 90% of Apple’s supply share even if everything goes smoothly. This is not Intel’s comeback story — not yet. Assemblers and supply chain partners have reportedly seen no shipment schedules, and sentiment within Intel about the Apple orders is described as mixed, a polite way of saying that not everyone inside the company is convinced this partnership is a net positive given the pressure it will bring.
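
To see why that yield range matters, the foundry arithmetic is straightforward: the cost of every usable chip scales inversely with yield. A minimal sketch, using hypothetical wafer prices and die counts (none of these figures are reported numbers):

```python
# Why a 50-60% yield target is a starting point, not an achievement:
# cost per good die scales inversely with yield. The wafer price and
# die count below are illustrative assumptions, not reported figures.

def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    """Wafer cost spread over the dies that actually work."""
    return wafer_cost / (dies_per_wafer * yield_rate)

WAFER_COST = 15_000   # hypothetical price per wafer, USD
DIES = 400            # hypothetical candidate dies per wafer

for y in (0.50, 0.60, 0.90):
    print(f"yield {y:.0%}: ${cost_per_good_die(WAFER_COST, DIES, y):,.2f} per good die")
# yield 50%: $75.00 | yield 60%: $62.50 | yield 90%: $41.67
```

At a mature-node yield around 90%, the same hypothetical wafer produces chips at nearly half the per-unit cost, which is the gap Intel would need to close before the economics favor a larger share of Apple’s orders.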

TSMC, meanwhile, sits in an unusual position: watching all of this from a position of strength while being structurally unable to do much about it. Its execution remains industry-leading, and for the next several years, the vast majority of advanced-node orders will stay exactly where they are. But the long-term picture is one where every major player in the ecosystem — governments, Apple, Samsung — is actively building alternatives or applying pressure. TSMC’s moat is real, but it’s being mapped with increasing precision by people who very much want to find a way across it. The story of Intel making Apple chips is a good one. The more interesting story is what it reveals about where the industry is quietly, deliberately heading.



Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include an indicator light that turns on while recording, and they allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.
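
The bandwidth arithmetic behind that trade-off is easy to sketch. A back-of-the-envelope example, assuming an illustrative 240x240 one-byte-per-pixel grayscale still and roughly 1.4 Mbps of practical BLE throughput (both figures are assumptions, not numbers from the paper):

```python
# Rough transfer-time arithmetic for shipping camera frames over Bluetooth.
# Resolution and link-speed figures are illustrative assumptions, not
# numbers from the VueBuds paper.

def transfer_time_ms(width: int, height: int, bytes_per_pixel: int, link_mbps: float) -> float:
    """Time to move one uncompressed frame across the radio link."""
    bits = width * height * bytes_per_pixel * 8
    return bits / (link_mbps * 1_000_000) * 1_000

# A small grayscale still vs. a 1080p color frame over ~1.4 Mbps of
# practical BLE throughput.
print(f"240x240 grayscale: {transfer_time_ms(240, 240, 1, 1.4):,.0f} ms")    # ~329 ms
print(f"1080p color:       {transfer_time_ms(1920, 1080, 3, 1.4):,.0f} ms")  # ~35,547 ms
```

Whatever the real sensor resolution, the principle holds: a small grayscale still fits comfortably inside a one-second interaction budget, while a full-color high-resolution frame does not.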

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
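
A minimal sketch of that combining step, assuming equal-sized grayscale arrays from each earbud (the actual VueBuds pipeline isn’t public, so the function and shapes here are hypothetical):

```python
import numpy as np

def combine_stereo_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch the two earbud stills side by side so the vision model runs
    one inference over both views instead of two separate calls."""
    if left.shape != right.shape:
        raise ValueError("expected matching frame sizes from both earbuds")
    return np.hstack([left, right])  # (H, W) + (H, W) -> (H, 2W)

# Example: two hypothetical 240x240 grayscale frames become one 240x480 input.
left = np.zeros((240, 240), dtype=np.uint8)
right = np.zeros((240, 240), dtype=np.uint8)
combined = combine_stereo_frames(left, right)
assert combined.shape == (240, 480)
```

The design choice is that one slightly larger model call amortizes per-inference overhead that two separate calls would pay twice, which is consistent with the reported one-second versus two-second response times.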

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still at an early stage. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


