Energy Analyst Explains the Future of California’s Grid



Every evening between 5 and 6 PM across California, the sun starts to set, office workers head home, ovens click on, and the electricity grid enters one of its most demanding periods of the day. Electricity prices rise, natural gas plants ramp up quickly, and utilities work to balance supply and demand in real time.

For everyday consumers, this evening window increasingly affects how and when they use connected devices, from smart thermostats adjusting temperature settings to electric vehicles scheduling their charging cycles to avoid peak pricing.

Energy researchers often describe this moment as a daily stress test for a power system that now relies heavily on renewable generation.

California’s grid operates under a unique mix of technologies and market rules. Solar power now provides a large share of daytime electricity, while natural gas plants still play a role in balancing supply when demand spikes after sunset.

According to energy analysts who study power markets, the behavior of the grid is often shaped as much by economics as by energy policy. These dynamics are becoming more visible at the household level as utility pricing models and consumer-facing apps give users greater insight into when electricity is cheapest or most expensive to use.

Why Solar Is a Double-Edged Asset

Unlike New England’s grid, California’s does not rely on coal or oil. The fuel mix is cleaner but not simpler.

Solar power enters the grid at extremely low marginal cost, which means it is typically dispatched first whenever it is available. Natural gas plants then fill in the remaining demand as needed.

Energy market researcher Neel Somani reports that this structure creates an unusual pricing dynamic: the cost of electricity is often determined by whichever generator must be activated last to meet demand.

“So there’s renewables, and there’s natural gas units, but in California, you don’t have any of that other junk, like coal or other dirtier units,” says Somani. “When you have some amount of demand, you basically first meet it with renewables, which are basically $0 marginal cost, and then you turn on less and less efficient natural gas units until you’ve met all of the demand.”

This structure, managed by the California Independent System Operator (CAISO), means that electricity pricing on any given day is largely determined by one question: how inefficient does a natural gas plant need to be before the grid can no longer meet demand without it? When solar is abundant, the answer is very inefficient, meaning prices stay low. When the sun goes down, the answer changes quickly.
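The merit-order logic Somani describes can be sketched as a toy dispatch model. Everything below — generator names, capacities, and marginal costs — is an illustrative assumption, not actual CAISO data:

```python
# Toy merit-order dispatch: meet demand with the cheapest units first.
# Generator capacities (MW) and marginal costs ($/MWh) are made-up
# illustrative numbers, not actual CAISO data.
generators = [
    ("solar",             12000,  0),   # near-zero marginal cost, dispatched first
    ("gas_combined_cycle", 8000, 35),   # efficient but slow to start
    ("gas_simple_cycle",   6000, 90),   # fast-starting but inefficient
]

def clearing_price(demand_mw):
    """Return the marginal cost of the last generator needed to meet demand."""
    remaining = demand_mw
    for _name, capacity_mw, cost in sorted(generators, key=lambda g: g[2]):
        remaining -= capacity_mw
        if remaining <= 0:
            return cost  # the last unit switched on sets the price
    raise ValueError("demand exceeds total capacity")

print(clearing_price(10000))  # midday: solar covers everything -> 0
print(clearing_price(18000))  # early evening: efficient gas sets the price -> 35
print(clearing_price(24000))  # peak: inefficient fast units set the price -> 90
```

The key point the sketch captures is that the whole market clears at the cost of the last, most expensive unit dispatched, which is why abundant solar keeps prices low and a post-sunset scramble sends them up.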

Solar generation peaks in the middle of the day, flooding the grid with cheap electricity when residential and commercial demand is at its lowest. Then, as the sun sets, that flood of $0 marginal cost power disappears almost entirely, right as people return home and flip on every device in their households. The result is what grid operators call the “duck curve,” a visual representation of net electricity demand that dips sharply at midday and then arches dramatically upward in the evening.
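The duck curve itself is simply net load: total demand minus solar output, hour by hour. A minimal sketch with stylized hourly values (illustrative numbers in gigawatts, not measured CAISO data):

```python
# The "duck curve" is net load: total demand minus solar output, by hour.
# Hourly profiles below are stylized illustrative values in gigawatts,
# not measured CAISO data.
demand = [22, 21, 20, 20, 21, 23, 26, 28, 28, 27, 26, 26,
          26, 26, 27, 28, 30, 33, 35, 34, 32, 29, 26, 24]
solar  = [ 0,  0,  0,  0,  0,  0,  1,  4,  8, 11, 13, 14,
          14, 13, 11,  8,  4,  1,  0,  0,  0,  0,  0,  0]

net_load = [d - s for d, s in zip(demand, solar)]

belly = min(net_load)  # the midday "belly" of the duck
ramp = max(net_load[h] - net_load[h - 3] for h in range(3, 24))  # steepest 3-hour climb
print(f"midday minimum net load: {belly} GW")
print(f"steepest 3-hour evening ramp: {ramp} GW")
```

Even in this toy version, the evening ramp — the gigawatts conventional generators must add within a few hours — dwarfs anything else in the day.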

For consumers, this pattern is increasingly reflected in time-of-use pricing, where running appliances like dishwashers, laundry machines, or home charging systems during midday hours can result in noticeably lower energy costs.

The 5 PM Problem

“When people get home at 5 PM, they turn on their lights, their TVs, their ovens, all at once, so it creates a demand spike at 5 PM. So if you look at the power price chart, you’ll see that around 5 PM there’s always a spike, and then it settles down in the late evening hours,” Somani explains.

That spike is made worse, not better, by solar. The issue is the system design. To meet the sudden surge in demand that follows sunset, grid operators must dispatch gas turbines. But the fastest gas turbines available, called simple cycle gas turbines, are also the least efficient. Combined cycle gas turbines are more efficient but take longer to bring online.

“There are basically two types of gas turbines,” says Somani. “There are combined cycle gas turbines and simple cycle gas turbines, and the ones that turn on really fast are the simple cycle gas turbines, but they’re also less efficient. So as a result, we end up with an even higher evening price than we’d have without renewables.”

The irony is real. The same solar buildout that has made California a national leader in clean energy has, in some respects, made its evening prices more volatile. The more solar floods the system during the day, the steeper the ramp that conventional generators must cover when it disappears.

The U.S. Energy Information Administration has tracked this trend carefully, noting that as California’s solar capacity grows, the midday dip in net load continues to fall, creating a steeper climb back to evening demand levels. Grid operators face a ramp that can span 10 to 17 gigawatts within a three-hour window, a feat requiring precise coordination across dozens of generating assets.

The Geography of the Problem: NP15 and SP15

California’s grid challenges are not evenly distributed. The state’s transmission infrastructure divides it into two major pricing zones: Northern California, referred to as NP15 (North Path 15), and Southern California, known as SP15 (South Path 15). The two zones are connected by a transmission corridor called Path 15, and when that line becomes congested, wholesale prices in the two regions diverge.

As Somani explains, “Northern California is pretty much always an importer. It imports as much as possible from the Pacific Northwest, because they produce a lot of hydro power, and it will import from Southern California.” Southern California, by contrast, shifts between exporting and importing depending on seasonal conditions and daily demand patterns.

This regional topology matters enormously for grid operators and for energy traders. A price spike in Southern California does not automatically translate to relief in the north if transmission capacity is constrained. Managing these bottlenecks is part of what makes CAISO one of the most complex grid operators in the world.

Batteries: The Answer Hiding in Plain Sight

Battery storage has increasingly become one of the main tools for addressing the evening demand surge. “Energy arbitrage is the most common answer,” says Somani. “Batteries buy that cheap solar power during the daytime, they dispatch it in the evening, and they make that difference as profit.”

Similar principles are now being applied at the residential level, where home battery systems paired with rooftop solar allow households to store cheaper daytime energy and use it later, reducing reliance on higher-priced evening electricity.
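The arbitrage trade reduces to a spread calculation: charge at the midday price, discharge at the evening price, and lose a little energy in between. The battery size, prices, and round-trip efficiency below are illustrative assumptions, not market data:

```python
# Toy battery energy-arbitrage calculation. Battery size, prices, and
# round-trip efficiency are illustrative assumptions, not market data.
capacity_mwh   = 100    # usable storage
round_trip_eff = 0.88   # fraction of charged energy recovered on discharge
midday_price   = 20.0   # $/MWh during solar-heavy hours
evening_price  = 95.0   # $/MWh during the post-sunset peak

charge_cost  = capacity_mwh * midday_price                    # buy low at midday
revenue      = capacity_mwh * round_trip_eff * evening_price  # sell high, minus losses
daily_profit = revenue - charge_cost

print(f"charge cost:  ${charge_cost:,.0f}")
print(f"revenue:      ${revenue:,.0f}")
print(f"daily profit: ${daily_profit:,.0f}")
```

The wider the midday-to-evening price spread, the more a battery earns — which is exactly why the profit motive and grid stability point in the same direction here.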

The economics of this trade are straightforward, and they have attracted substantial private capital. California reached a major milestone in 2025, becoming the first state to deploy 10 gigawatts of battery storage capacity. According to data from the Atlantic Council, battery capacity as a share of solar generation capacity in CAISO rose to 41 percent by late 2023, and the buildout has continued since. By mid-2024, batteries were supplying an average of 6 gigawatts of power during the 8 to 9 PM hour, double the level from the year prior.

The practical effect has been a visible flattening of the duck curve’s steepest section. At the SoCal Citygate gas hub, average daily natural gas prices fell from nearly $9 per MMBtu in April 2023 to approximately $4 per MMBtu in 2024, in part because batteries were displacing natural gas generation, which had previously been the only tool available to fill the evening gap. Solar curtailment, which once rose steadily as generation capacity expanded, has also fallen in relative terms, as more of the midday surplus is stored rather than wasted.

What This Means for Governance and the Grid

The story of California’s grid highlights how market design, infrastructure, and technology interact in complex energy systems. The evening spike is not a natural phenomenon. It is the emergent result of design choices: how generation assets are compensated, how transmission infrastructure is built, and how incentives are aligned across a decentralized system of producers, grid operators, and consumers.

Somani has argued consistently that well-designed competitive structures outperform single-gatekeeper systems in domains characterized by complexity and rapid change. The same philosophy applies to grid governance. California’s shift toward time-of-use pricing, where utilities like Pacific Gas and Electric charge consumers more during peak hours, is one example of aligning individual incentives with system-wide needs. When consumers pay more at 6 PM than at noon, they have a direct financial reason to run their dishwashers earlier in the day or charge their electric vehicles overnight.
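The incentive is easy to quantify. Here is a toy comparison for a shiftable load such as EV charging; the rates are illustrative assumptions, not any actual utility tariff:

```python
# Toy time-of-use comparison for a shiftable load such as EV charging.
# The rates below are illustrative assumptions, not an actual utility tariff.
peak_rate    = 0.50   # $/kWh, roughly the 4-9 PM window
offpeak_rate = 0.25   # $/kWh, overnight and midday
session_kwh  = 40     # energy drawn in one EV charging session

cost_at_peak   = session_kwh * peak_rate
cost_overnight = session_kwh * offpeak_rate
print(f"charging at 6 PM:   ${cost_at_peak:.2f}")
print(f"charging overnight: ${cost_overnight:.2f}")
print(f"saved per session:  ${cost_at_peak - cost_overnight:.2f}")
```

A two-to-one peak-to-off-peak ratio halves the cost of every session a household manages to shift, which is the whole point of the tariff design.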

The California Public Utilities Commission (CPUC) estimates that nearly 3 gigawatts of combined behind-the-meter solar and storage systems are active across California, with battery-equipped solar installations accounting for more than 30 percent of new residential installations in the wake of NEM 3.0 policy changes.

The Road Ahead

California’s grid problem is not solved. It is, however, improving in ways that would have seemed ambitious just five years ago. The duck curve is being flattened by the very financial incentives that Neel Somani describes: actors buying cheap daytime power and selling it back at the evening premium, capturing profit while simultaneously stabilizing the system.

California’s energy resilience is distributed across millions of rooftop systems, grid-scale batteries, and demand-response participants. This distributed grid presents both challenges and opportunities.

As more households adopt connected energy technologies, from EVs to smart panels and battery systems, the relationship between grid performance and everyday consumer behavior is becoming increasingly interconnected. 

The 5 PM spike remains a daily challenge. But it is, for the first time, a challenge that markets are beginning to solve.

Digital Trends partners with external contributors. All contributor content is reviewed by the Digital Trends editorial staff.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells GPU-and-memory bundles to its board partners, who solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began at the beginning of 2026, with gaming graphics cards still being overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. The data center division, by contrast, has generated almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate awaited the 16GB RTX 5070 Ti, while an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?


The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix, one of the world’s three largest memory manufacturers, warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 9000) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but it has few fans right now, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


