Here’s how to figure out which ESP32 model you should buy


The ESP32 is a hobbyist’s dream that comes in a broad range of configurations. It can be hard to know where to begin, but a few simple rules will help you match your intended use to the classic ESP32, the C3, the C6, the S3, or another variant.

ESP32 variants explained

The “stock” ESP32 was released in 2016 as a follow-up to the ESP8266. It’s a microcontroller onto which firmware can be flashed to perform a variety of tasks. These boards are commonly found in smart home sensors (both DIY and retail), attached to small displays and E-Ink panels as dashboards, used to power retro handheld consoles, and in other projects where size and power efficiency are more important than raw performance.

This original model features Wi-Fi, Bluetooth 4.2, 520KiB of SRAM, 34 GPIO pins, a temperature sensor, and 448KiB of on-board storage. It was something of a revolution when it arrived, but the ESP32 family has since seen a flurry of new arrivals, with a few models that really stand out.

“KiB” refers to “kibibyte,” which translates to 1,024 bytes.
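If you already have an unlabeled board in a drawer and aren’t sure which variant it carries, you don’t need to squint at the chip markings. Here’s a minimal Arduino sketch that reports the model over serial using the ESP helper class from the standard arduino-esp32 core (the exact strings it prints depend on your core version):

```cpp
// Flash this sketch and open the serial monitor at 115200 baud to see
// which ESP32 variant you're holding.
#include <Arduino.h>

void setup() {
  Serial.begin(115200);
  delay(1000);  // give the serial monitor a moment to attach

  Serial.printf("Chip model:    %s\n", ESP.getChipModel());  // e.g. "ESP32-C3"
  Serial.printf("Chip revision: %d\n", ESP.getChipRevision());
  Serial.printf("CPU cores:     %d\n", ESP.getChipCores());
  Serial.printf("CPU frequency: %d MHz\n", (int)ESP.getCpuFreqMHz());
  Serial.printf("Flash size:    %u bytes\n", ESP.getFlashChipSize());
  Serial.printf("Free heap:     %u bytes\n", ESP.getFreeHeap());
}

void loop() {}
```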

Consider the C3, S3, and C6 first

The ESP32-C3 stands out for its ultra-low-power design. It has only a single-core processor that runs at 160MHz (down from 240MHz on the original) and fewer GPIO pins than the base model, but its efficiency makes it perfect for battery-powered projects, like sensors that won’t be plugged into the wall or mobile handheld devices.
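That efficiency mostly comes from how aggressively the chip can sleep between readings. Below is a rough sketch of the duty-cycle pattern a battery-powered sensor typically follows: wake, measure, sleep. The ADC pin and the five-minute interval are arbitrary assumptions for illustration, not specifics from any one project:

```cpp
#include <Arduino.h>
#include "esp_sleep.h"

// Wake every five minutes; tune this for your battery budget.
constexpr uint64_t kSleepMicros = 5ULL * 60ULL * 1000000ULL;

void setup() {
  Serial.begin(115200);

  // Placeholder measurement: GPIO0 is one of the C3's ADC-capable pins,
  // but the pin and whatever it's wired to are assumptions for your board.
  int raw = analogRead(0);
  Serial.printf("Raw ADC reading: %d\n", raw);

  // Arm the timer wake-up source and enter deep sleep. In deep sleep the
  // C3 draws only a few microamps, which is what makes weeks or months of
  // battery life possible for duty-cycled sensors.
  esp_sleep_enable_timer_wakeup(kSleepMicros);
  esp_deep_sleep_start();
}

void loop() {
  // Never reached: waking from deep sleep resets the chip, so execution
  // starts over in setup().
}
```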

Seeed Studio XIAO ESP32C6 with blurred out microelectronics in the background. Credit: Seeed Studio

By contrast, the ESP32-S3 features a dual-core processor, a whopping 45 GPIO pins, camera support, and enough muscle for more intensive workloads like machine learning and AI projects. It’s one of the most feature-packed versions of the chip, but it gives up the C3’s unbeatable power efficiency, and it’s a little more expensive than its siblings.

For smart home use, consider the ESP32-C6. This board entered mass production in 2023 and introduces Zigbee and Thread support, perfect for building sensors and other devices that communicate over low-power mesh networks. The C5, C61, and H2 also feature Zigbee and Thread support (with the H2 being a real outlier that lacks Wi-Fi altogether).

C2 and S2 offer cost savings

The C2 and S2 variants are stripped-down, lower-cost versions of the C3 and S3 boards.

The C2 has a slower clock speed (120MHz), less SRAM and ROM (272KiB and 128KiB, respectively), only 20 GPIO pins, and no temperature sensor. The S2 drops the S3 down to a single-core 240MHz CPU and 320KiB of SRAM with 128KiB of ROM, and it lacks Bluetooth altogether.

Predictably, these boards cost even less than the already affordable C3 and S3. They’re worth considering if you’re building a whole fleet of devices that suit their specs and want to keep costs as low as possible.

The P4 is a powerful outlier

If you need as much power as an ESP32 can offer, the P4 is the one to go for. It packs a 400MHz dual-core processor, more SRAM and ROM than most other variants, improved audio capabilities, and support for Ethernet, though no built-in Wi-Fi. It’s often favored in projects that use Power-over-Ethernet.

Many of these chips end up in pricier embedded offerings, like the Seeed Studio reTerminal D1001, which includes a separate ESP32-C6 for wireless connectivity.

Picking official or third-party boards

You might be tempted to stick to “official” ESP32 offerings from the company that designs the chip, Espressif Systems. Espressif produces reference boards that pair the chip with a circuit board and a USB connector, but it also releases the schematics and other materials that other companies need to copy that work and build upon it.

Espressif then sells the core ESP32 chips directly to those companies, which is a win for everyone. When you choose a third-party ESP32 development board, you’re not buying a knockoff. In fact, you’ll usually pay a bit more than you would for an “official” development board, but you’ll get some nice bonus features along the way.

The XIAO ESP32-S3 on the reSpeaker Lite development board. Credit: Adam Davidson / How-To Geek

These improvements include USB-C ports (instead of microUSB), better antennas for improved wireless performance, charging circuitry, and smaller, often purpose-built form factors. For example, Seeed Studio’s XIAO lineup is popular among How-To Geek writers since it’s smaller than the reference design and ditches microUSB for the superior USB-C.

Confused? Match the board with the project

If you’re buying a board for a specific project, stick to the brief. Though the S3 is something of a jack-of-all-trades, for scaling up projects (like deploying a house full of Bluetooth proxies for presence detection), the savings can really add up if you opt for a cheaper chip.

The same logic applies to third-party boards. The cost soon adds up when you start spending a bit more on “nicer” development boards, but it’s often worth it, particularly if you want a specific form factor, like an ESP32 and a touchscreen display in one.

  • MakerHawk Heltec V3 LoRa board with battery

    Brand: MakerHawk

    Operating System: Meshtastic

    This ESP32 kit includes everything you need to connect to your local Meshtastic network, or any other LoRa-based tech project. There’s an OLED display, a 1,100mAh battery, and multiple antennas.



Alternatively, buy the lot

The fact of the matter is that ESP32 boards are cheap, and there’s usually only a couple of dollars separating the C2 and S2 from the C6 and S3. The outlier here is the P4, which is often found in much pricier configurations.

If you’re keen to experiment, why not buy a few of each and see where the wind takes you? You can start with some ESP32 projects that take less than an hour.





The battle between AMD and NVIDIA rages on eternally, it seems, though it’s a rather one-sided battle in the desktop PC market, where NVIDIA holds something like 95%, with AMD taking most of what’s left apart from Intel’s (almost) 1%.

But as dominant and popular as NVIDIA is, AMD proponents could always raise the value argument. On a per-dollar basis, you get more from an AMD card, and even better, you have the benefit of AMD “FineWine,” which promises that your card will get even better with time.

What “FineWine” meant—and why it mattered

FineWine was something that AMD fans began to notice during the GCN (Graphics Core Next) era. Incidentally, the last AMD dedicated GPU I bought was the R9 390, which was of that lineage. Since then, all my AMD GPUs have been embedded in consoles or handheld PCs, but I digress.

The R9 390 is actually a good example of FineWine. Launched in 2015, it had a rough start, like many AMD cards of the era. I eventually sold mine for a stopgap card, an RTX 2060, because I wanted to play Cyberpunk 2077 on PC, where it wasn’t broken the way it was on consoles. On paper, the RTX 2060 wasn’t much more powerful than a 390, but the AMD card was a stuttery mess on my (then) 1080p monitor, whereas everything suddenly ran great the minute the AMD GPU was expunged from the system.

But, a decade later, that same game is perfectly playable on this card, as you can see in this TechLabUK video.

A lot of that is because the developers have kept patching and improving the game, but you see this across the board for AMD cards in various games. This is FineWine: years later, with continued driver updates from AMD, the cards go from being a little worse than their NVIDIA equivalents at launch to being as good or even a little better in the long run.

Of course, that’s not super helpful to customers who buy hardware at launch, but it has given some AMD users computers with longer lifespans than you’d think, and made many used AMD cards an even better bargain.

Why AMD’s FineWine era worked

A bit of smoke and mirrors

The PULSE AMD Radeon RX 6800 XT next to an AMD RX 6600 XT Phantom Gaming D. Credit: Ismar Hrnjicevic / How-To Geek

FineWine wasn’t magic, of course. The phenomenon was the result of a mix of factors. AMD’s architectures were in some cases a little too forward-thinking for the APIs of the day. Massively parallel with a focus on compute, they’d only come into their own with DirectX 12 and more modern games. NVIDIA’s cards at the time were better optimized to run current games well. Over time, NVIDIA cards would make similar architectural changes, but with better timing.

The other reason FineWine was a thing came down to driver maturity. As a much smaller company with fewer resources, AMD seemed to have trouble launching cards with fully optimized drivers. So, as the drivers matured over time, a card would gradually start performing as intended.

In both cases, you could frame FineWine not as the card getting better, but rather getting “less worse” over time. If you set the bar low at launch, the only way is up. However, there’s a third factor to take into account as well. AMD dominates console gaming. The two major home console series have now run on AMD GPUs for two generations, and so games are developed with that hardware in mind. This also gives newer titles a bit of a leg up, though it’s hard to know exactly by how much.

How AMD moved on from FineWine

It seems worse, but it’s actually better

An AMD RX 9070 XT Gigabyte gaming graphics card. Credit: Ismar Hrnjicevic / How-To Geek

With the shift to RDNA architecture, AMD made a deliberate change in philosophy. Modern Radeon GPUs are designed to perform well right out of the gate. Reviews on day one are much closer to what you could expect years later. There are still decent gains to be had on RDNA cards with game-specific optimizations (Spider-Man on PC is a great example), but the golden age of FineWine seems to be in the past now.

That’s a good thing! Products should put their best foot forward on day one, so let’s not shed a tear for FineWine in that regard. It’s not that AMD doesn’t care about improving the performance and stability of older cards over the years; it’s that the company is now better at its job, so there’s less room for improvement.

Sapphire NITRO+ AMD Radeon RX 9070 XT GPU

Cooling Method: Air

GPU Speed: 2520MHz

The AMD Radeon RX 9070 XT from Sapphire features 16GB of GDDR6 memory, two HDMI ports and two DisplayPorts, and an overengineered cooling setup that will keep the card cool and whisper-quiet no matter the workload.


NVIDIA kept the idea—but changed the formula

It’s all about AI

It’s funny, but these days I think of NVIDIA cards as the ones with major longevity. Take the venerable GTX 1080 and 1080 Ti. These cards only lost Game Ready driver support in 2025, which doesn’t immediately make them useless; it just means no more game-specific optimizations for those chips. What an incredible run, getting nearly a decade of relevant game performance from a GPU!

But that’s not really NVIDIA’s take on FineWine. Instead, the company has taken to adding new and better features to its cards long after launch. Starting with the 20-series, NVIDIA’s GPUs have included machine-learning hardware, which means that as the AI algorithms behind technologies like DLSS improve, these cards gain performance and image quality over time.

While NVIDIA has made some features of its AI technology exclusive to each generation, so far every RTX GPU from the 20-series onward benefits from each new generation of DLSS. Compare that to AMD, which not only offers inferior versions of this upscaling technology, but has also locked the better, more usable versions to later cards, as is the case with FSR Redstone.


FineWine is an ethos, not a brand

In the case of my humble RTX 4060 laptop, the release of DLSS 4.5 has opened new possibilities, notably the ability to target a 4K output resolution, which was certainly not on the table when I first took this computer out of the box. We might not call it “FineWine,” but it sure smells like it to me!


