OpenAI’s new $100 ChatGPT Pro plan targets Claude Max with five times the Codex access


In short: OpenAI launched a new $100 per month Pro plan for ChatGPT on 9 April 2026, inserting a new tier between the existing $20 Plus plan and the $200 Pro plan and directly targeting Anthropic’s Claude Max, which is also priced at $100 per month. The new plan offers five times the Codex usage of Plus, access to the same model suite as the $200 tier, and a launch promotion that temporarily doubles that advantage: through 31 May 2026, subscribers get ten times the Codex usage of Plus. The move follows Codex crossing three million weekly users on 8 April, a growth rate the company describes as a 5x increase in three months.

What the $100 plan includes, and where it sits in ChatGPT’s pricing structure

The new plan adds a seventh tier to ChatGPT’s pricing structure, which now runs from a free account with advertising, through an $8 per month Go plan and the $20 per month Plus plan, to two versions of Pro at $100 and $200 per month, a $25 per user per month Business plan, and custom-priced Enterprise contracts. The $100 Pro plan sits directly between Plus and the existing $200 Pro tier, offering five times the Codex usage of Plus and targeting what OpenAI describes as “longer, high-effort Codex sessions,” the kind of workload on which Plus subscribers currently hit their ceiling. The $200 Pro plan, by comparison, provides 20 times the Codex usage of Plus, giving it four times the Codex allowance of the new $100 tier.

Despite the difference in usage limits, both Pro tiers give access to the same model suite: the exclusive GPT-5.4 Pro model, unlimited use of GPT-5.4 Instant and GPT-5.4 Thinking, and all other features available on the $200 plan. The differentiation between the two tiers is usage volume, not capability. As a launch promotion, subscribers to the new $100 plan will receive ten times the Codex usage of Plus through 31 May 2026; after that date, the standard five times limit applies. OpenAI also announced a rebalancing of the Plus plan’s Codex allocation alongside the new tier, shifting Plus towards steadier day-to-day usage rather than allowing the longer burst sessions that the $100 plan is intended to serve.

Codex demand: the numbers that prompted the new tier

On 8 April 2026, the day before the $100 plan was announced, Sam Altman posted on X that OpenAI was resetting Codex’s usage limits across all plans “to celebrate 3M weekly codex users,” and committed to repeating the reset for every additional million users until Codex reaches ten million weekly users. Thibault Sottiaux, who leads the Codex product, stated: “Three million people are now using Codex weekly, up from two million a little under a month ago.” OpenAI described the growth trajectory as a 5x increase in the preceding three months, with 70% month-over-month user growth.


The scale of that growth reflects a shift in how developers are using AI coding tools. OpenAI rolled out a dedicated Codex app for macOS in February 2026, designed to move beyond line-by-line code generation into what the company called agentic, multi-task coding workflows: orchestrating multiple agents in parallel, running background jobs, and handling instructions that span hours rather than seconds. That architecture, with its longer-running sessions and heavier compute demands, is precisely the usage pattern that the $100 plan is priced to capture. A Plus subscriber who uses Codex for extended autonomous engineering tasks hits usage limits well before their billing cycle ends; the $100 plan is designed to be the next logical tier rather than a jump to $200.

The Claude Max comparison

OpenAI made no attempt to obscure the competitive framing. The new plan is priced identically to Anthropic’s Claude Max 5x tier, which also costs $100 per month and includes elevated limits for Claude Code, Anthropic’s terminal-based agentic coding product. Claude Code has become the fastest-growing part of Anthropic’s commercial portfolio, with an estimated $2.5 billion in annualised revenue by early 2026, and Anthropic has been building a developer ecosystem around it: in March 2026 it launched a marketplace for Claude-powered enterprise software, with launch partners including Snowflake, Harvey, and Replit, connecting enterprise buyers with third-party applications built on Claude.

The competitive dynamic sharpened further in the week before OpenAI’s announcement. On 4 April 2026, Anthropic banned third-party agents from Claude Pro and Max subscriptions, preventing subscribers from routing their plan’s usage limits through external frameworks such as OpenClaw; users wanting to continue using those tools must now pay separately under a new per-session “extra usage” system. OpenAI’s announcement went in the opposite direction, increasing Codex availability at the $100 price point and doubling it temporarily to mark the launch. The contrast, at the identical price, was visible enough that most coverage described the new plan as a direct response to Anthropic’s developer subscriber base.

What OpenAI’s pricing move signals

The new tier arrives during a period of accelerating commercial momentum for OpenAI. OpenAI’s $122 billion raise at an $852 billion valuation, completed in March 2026, was led by SoftBank, NVIDIA, and Amazon, and included $3 billion from individual retail investors, a structure that many analysts read as groundwork for an IPO expected as early as the fourth quarter of 2026. The company is generating $2 billion in revenue per month and has more than 50 million paid subscribers across its plans. The $100 plan is part of a deliberate effort to fill the pricing gap between $20 and $200 that had, until now, left a large segment of heavy but not enterprise-grade users without a compelling upgrade path.

GPT-5.4, the model powering the Pro tiers, launched in March 2026 and introduced native computer use directly into Codex and the API. It is the clearest statement of where OpenAI sees the next phase of developer adoption going: not prompting, but autonomous agents operating software, navigating file systems, and running multi-step workflows across applications for hours at a time. The $100 plan is the pricing expression of that bet. Whether it moves enough developers at the $100 Claude Max price point to make a measurable dent in Anthropic’s subscriber base will be visible in both companies’ next quarterly metrics.






Google Maps has a long list of hidden (and sometimes just underrated) features that help you navigate seamlessly. But I was never a big fan of using Google Maps for walking, at least not until I started using the right set of features to navigate better.

Add layers to your map

See more information on the screen

Layers are an incredibly useful yet underrated feature that works across all modes of transport. They add more detail to your map beyond the default view, so you can plan your journey better.

To use layers, open the Google Maps app (Android, iPhone). Tap the layers icon on the upper right side (below your profile picture and the nearby-attractions options). You can switch your map type from default to satellite or terrain, and overlay details such as traffic, transit, biking, Street View (perfect for walking), and 3D buildings (called “raised buildings” on iPhone). To turn off a detail, go back to Layers and tap it again.

In particular, adding the Street View and 3D buildings layers can help you gauge the terrain and get more information about the landscape, so you can avoid tricky paths and discover shortcuts.

Set up Live View

Just hold up your phone

A feature that can set you up well for walks is Google Maps’ Live View. It uses augmented reality (AR) to overlay real-time directions on your camera view, supplementing the directions shown on your 2D map with instructions placed over what you actually see. This is especially useful for travel and unfamiliar areas, since it gives you walking guidance that goes beyond a flat map.

To use Live View, search for a location on Google Maps, then tap “Directions.” Once the route appears, tap “Walk,” then tap “Live View” in the navigation options. You will be prompted to point your camera at things like buildings, stores, and signs around you, so Google Maps can analyze your surroundings and give you accurate directions.

Download maps offline

Google Maps without an internet connection

Whether you’re on a hiking trip in a low-connectivity area or want offline maps for your favorite walking destinations, having specific map routes downloaded can be a great help. Google Maps lets you download maps to your device while you’re connected to Wi-Fi or mobile data, and use them when your device is offline.

For Android, open Google Maps and search for a specific place or location. In the place sheet, swipe right, then tap More > Download offline map > Download. For iPhone, search for a location on Google Maps, then, at the bottom of your screen, tap the name or address of the place. Tap More > Download offline map > Download.

After you download an area, use Google Maps as you normally would. If you go offline, your offline maps will guide you to your destination as long as the entire route is within the offline map.

Enable Detailed Voice Guidance

Get better instructions

Voice guidance is a basic yet powerful navigation tool that comes in handy during walks in unfamiliar locations and helps keep your journey on the right path. To make sure guidance audio is enabled, go to your Google Maps profile (upper right corner), then tap Settings > Navigation > Sound and Voice. Here, tap “Unmute” under “Guidance Audio.”

Apart from this, you can also use Google Assistant to help you along your journey, asking questions about your destination, nearby sights, detours, additional stops, etc. To use this feature on iPhone, map a walking route to a destination, then tap the mic icon in the upper-right corner. For Android, you can also say “Hey Google” after mapping your destination to activate the assistant.

Voice guidance is handy for both new and familiar places, like when you’re running errands and need to navigate hands-free.

Add multiple stops

Keep your trip going

If you walk regularly to run errands, Google Maps has a simple yet effective feature that can help you plan your route better. With Maps’ multiple-stop feature, you can add several stops between your current location and your final destination to minimize wasted time and unnecessary detours.

To add multiple stops on Google Maps, search for a destination, then tap “Directions.” Select the walking option, tap the three dots at the top (next to “Your Location”), and tap “Edit Stops.” You can now add a stop by searching for it and tapping “Add Stop,” and reorder the stops at your convenience. Repeat the process until your route is complete, then tap “Start” to begin your journey.

You can add up to ten stops in a single route on both mobile and desktop, and use multiple-stop routes for walking, driving, and cycling, though not for public transport or flights. I find this Google Maps feature essential when travelling to walkable cities, especially when I’m planning a route I am unfamiliar with.


More to discover

A new feature to keep an eye out for, especially if you use Google Maps for walking and cycling, is Google’s Gemini boost, which will allow you to navigate hands-free and get real-time information about your journey. The feature has been rolling out to both Android and iOS users.


