Apple leak claims juicy iPhone 18 Pro camera upgrades that I can’t wait to see in action


We’ve already heard from multiple sources about the possibility of a variable-aperture camera on the iPhone 18 Pro series. The latest rumor not only corroborates those reports, but also sheds some light on the camera upgrades that might arrive in future iPhone models. 

Renowned Chinese tipster Digital Chat Station (via Weibo) has recently posted a detailed breakdown of the four camera upgrades that the Cupertino giant is working on, and one of them could arrive as soon as the iPhone 18 Pro series this fall. 

Variable aperture camera on the iPhone 18 Pro

Variable aperture is the headline upgrade, and it’s coming to both the iPhone 18 Pro and the iPhone 18 Pro Max. In day-to-day use, variable aperture allows direct control over the amount of light entering the camera’s sensor, and, therefore, the depth of field. 

If the iPhone 18 Pro series actually gets the hardware upgrade, it will allow you to take better low-light pictures (with the lowest aperture value), increase the background blur in portraits, or reduce the amount of light when capturing wide landscapes in broad daylight. 

In simpler terms, the iPhone 18 Pro could get proper DSLR-like aperture control. Instead of turning a mechanical ring as you would on a dedicated camera, you’d probably adjust the iPhone 18 Pro’s aperture via on-screen controls. 
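To get a feel for why aperture control matters, here's the basic photography math behind it. This is a minimal illustrative sketch of standard f-number physics, not anything specific to Apple's implementation; the function name and the example f-stops are my own.

```python
# Illustrative photography math: how the f-number (aperture value)
# relates to how much light reaches the sensor.

def relative_light(f_stop_a: float, f_stop_b: float) -> float:
    """How many times more light aperture A gathers than aperture B.

    Light throughput scales with the aperture's area, which is
    proportional to 1 / f_number**2.
    """
    return (f_stop_b / f_stop_a) ** 2

# A wide f/1.4 aperture gathers 4x the light of f/2.8 --
# better in low light, at the cost of a shallower depth of field.
print(relative_light(1.4, 2.8))  # -> 4.0
```

That trade-off is exactly what a variable aperture would let you pick per shot: a low f-number for night shots and blurry portrait backgrounds, a high one for sharp, bright landscapes.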

What else is Apple working on for future iPhone models?

Beyond variable aperture, the tipster also mentioned three additional upgrades that are currently under development. First, the main camera on the iPhone could get a significantly larger 1/1.12-inch sensor, which would add more detail and further improve low-light performance. 

Second, the ultra-wide lens is set to gain optical image stabilization (OIS), which should help capture a wider field of view in Action mode, since less of the frame would need to be cropped away for stabilization. Currently, iPhones rely heavily on electronic image stabilization (EIS) with the ultra-wide camera in Action mode. 

Meanwhile, the telephoto camera is apparently heading toward 200MP resolution, a jump that would make Apple’s zoom camera one of the most capable on the market (competing with the Find X9 Ultra or the vivo X300 Ultra). Keep in mind that Apple hasn’t confirmed any of these updates or changes yet. 

While I am excited for the variable-aperture upgrade to arrive on the iPhone 18 Pro models, the other three rumors sound even more useful. However, even if they are true, they might not arrive on an actual iPhone for a couple of years. 




Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
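The article doesn't detail how the two frames are merged, but the idea of packing both views into one image so the model runs a single inference pass can be sketched simply. Assuming same-resolution grayscale frames and side-by-side concatenation (the frame size and `numpy` usage here are my own assumptions, not from the paper):

```python
import numpy as np

def combine_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch the two earbud frames into one image so a single
    model inference covers both views (one pass instead of two)."""
    if left.shape != right.shape:
        raise ValueError("earbud frames must have the same resolution")
    return np.hstack([left, right])  # side by side, same height

# Two hypothetical 240x320 grayscale frames, one from each earbud.
left = np.zeros((240, 320), dtype=np.uint8)
right = np.full((240, 320), 255, dtype=np.uint8)
combined = combine_frames(left, right)
print(combined.shape)  # -> (240, 640)
```

Running the model once over a combined frame instead of twice over separate frames is a common latency trick, which would line up with the roughly one-second versus two-second response times reported.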

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The results showed that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors. 

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.
