Apple Silicon production testing begins at Intel


The chip-production agreement between Apple and Intel has reportedly begun with a test run of select older chipsets made on Intel's newest process, launching a testing roadmap that extends well into 2029.

The relationship between Apple and Intel goes back over 40 years. It seemed to have ended with the advent of Apple Silicon, but the political climate may have tilted things back into Intel’s favor.

According to a report from supply chain analyst and leaker Ming-Chi Kuo, Intel has begun the testing process for building Apple chips on its 18A-P process. This is seemingly equivalent to the process used by TSMC for modern Apple chipsets like the A18 Pro.

It will take some time for Intel to ramp up to full production. Kuo suggests 2026 will be the testing ramp, with 2027 the target for full production and shipment.

However, Intel's output will only be at 50% to 60% in 2027. It will continue to ramp through 2028, when it will hit peak production, before slowing through 2029.

That lifecycle fits Apple's needs for its older processors. It will need chips like the A18 Pro for some time for low-end iPhones and Macs. The chip also seems likely for an upgraded Apple TV set-top box.

Kuo says that roughly 80% of the order is for iPhone chips.

TSMC is still the chipset leader

Of course, all of this barely makes a dent in Apple’s overall chip needs. TSMC is still expected to supply over 90% of Apple’s processors, and that won’t change anytime soon.

There is some pressure to diversify from TSMC as 60% of its production takes place in Taiwan. Even as I type this, US President Trump is in talks with Chinese leader Xi Jinping regarding how the US views Taiwan.

Things don’t seem to be going Taiwan’s way.

Apple is under additional political pressure from forces in its home country. The US expects Apple and others to bring more manufacturing and assembly back stateside. The deal with Intel might satisfy the Trump administration, or it may not.

To complicate things further, the Trump administration took a 10% stake in Intel to help keep the company afloat when it appeared ready to dissolve. Since then, Trump and Intel's CEO have been going door to door asking American companies to invest directly in Intel.

For Apple, the choice is easy and clear. It won’t hurt to have some small percentage of older chips made by Intel in the US.

The move is an obvious political win that checks all of the boxes at once. Intel is a US company backed by the Trump administration, and Apple is placing orders with it.

Moves like these have kept Apple out of hot water with the controversial administration so far. If it wants to keep up business as usual, similar actions will have to be taken on a regular basis.






Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
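The stitching step described above can be sketched in a few lines. This is a hypothetical illustration, assuming NumPy arrays and made-up frame dimensions — the researchers' actual pipeline is not public. The idea is simply that placing both earbud frames side by side lets the model run one inference pass instead of two.

```python
import numpy as np

def combine_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch the two low-res grayscale earbud frames into one image
    so the on-device model processes a single frame per query.
    (Sketch only; shapes and names are assumptions, not VueBuds code.)"""
    assert left.shape == right.shape, "both earbud cameras share one resolution"
    return np.hstack([left, right])  # place frames side by side

# Example: two assumed 240x320 grayscale frames become one 240x640 frame.
left = np.zeros((240, 320), dtype=np.uint8)
right = np.full((240, 320), 255, dtype=np.uint8)
combined = combine_frames(left, right)
print(combined.shape)  # (240, 640)
```

One inference on the wider frame is cheaper than two separate passes because the model's fixed per-call overhead is paid once, which matches the roughly halved response time the researchers report.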

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study showed participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still at an early stage. The team plans to explore adding color sensors and developing specialized AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


