This hidden Linux feature makes Windows look embarrassing for developers


For years, we have watched Microsoft pour enormous resources into the Windows Subsystem for Linux. It was positioned as the great equalizer, the bridge that would finally make Windows a first-class citizen for those of us who have long preferred Linux.

WSL is undeniably impressive. Having a Linux kernel running alongside Windows with this level of integration is a feat of engineering. Yet, there is a feature so fundamental to Linux, so deeply woven into its architecture, that even the most sophisticated virtualization layers cannot replicate its elegance.

It is not a flashy UI or a trendy framework but native, granular, and transparent control over process resources through cgroups, exposed via a simple filesystem interface. This capability is the foundation of modern containerization, and it represents a level of systemic transparency that makes the Windows approach to resource management look not just different, but genuinely embarrassing by comparison.

The cgroup filesystem

Control resources through simple files

Cgroups allow you to allocate, limit, and monitor system resources such as CPU, memory, and I/O across groups of processes (all the things you usually care about). That alone is not unusual, as most operating systems provide some mechanism for resource control.

What distinguishes Linux is how this control is exposed. Cgroups appear as a filesystem, typically mounted at /sys/fs/cgroup. Managing resources becomes an interaction with files and directories, and to create a constrained environment, you create a directory, write values into control files, and assign processes to that directory.

You can limit a process to a fixed CPU quota and memory ceiling with a handful of shell commands without involving any compilation, API calls, or scaffolding. The system responds immediately and predictably (which is rarer than it should be). This is not just convenient but also changes how you think about the system. Resource management becomes something you can experiment with directly, not something hidden behind layers of tooling.
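As a minimal sketch (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name `demo` is made up for illustration), limiting a shell to half a CPU core and 256 MiB of memory looks like this:

```shell
# Create a new cgroup; the kernel populates it with control files.
mkdir /sys/fs/cgroup/demo

# Allow 50000 us of CPU time per 100000 us period (i.e. half a core).
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max

# Cap memory at 256 MiB; allocations beyond this trigger reclaim or OOM.
echo "268435456" > /sys/fs/cgroup/demo/memory.max

# Move the current shell (and everything it spawns) into the group.
echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```

Any process started from that shell inherits the limits, and once the group is empty, a plain `rmdir` on the directory tears the whole thing down.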

On Windows, the closest equivalent is job objects (the part most people vaguely remember exists). They allow grouping processes and applying limits, but the interface is entirely different. Interaction happens through the Windows API, requiring code in C, C++, or .NET. Functions such as CreateJobObject and SetInformationJobObject must be called, handles managed, and errors handled explicitly.

Even simple constraints require nontrivial setup. Command-line usage is indirect, usually wrapped through PowerShell or custom utilities. As a result, most developers never engage with these primitives directly. They rely on higher-level tools that obscure the underlying mechanisms.

The foundation of containers

Why containers feel native on Linux

Cgroups are not an isolated feature. Along with namespaces, they form the basis of containers. When a container runs on Linux, there is no extra abstraction layer enforcing limits (no extra box inside a box). The container runtime creates a cgroup, writes constraints, and places processes inside it, and the kernel does the rest.
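You can see this directly on a running container. The exact path depends on the runtime and its cgroup driver; with Docker's systemd driver it typically sits under `system.slice` (the `<id>` placeholder below stands for the full container ID):

```shell
# Path is runtime-dependent; with Docker's systemd cgroup driver it is
# usually /sys/fs/cgroup/system.slice/docker-<full-container-id>.scope
CG=/sys/fs/cgroup/system.slice/docker-<id>.scope

cat "$CG/memory.max"      # the container's memory ceiling, in bytes
cat "$CG/memory.current"  # its current memory usage
cat "$CG/cpu.max"         # CPU quota and period, e.g. "50000 100000"
cat "$CG/cgroup.procs"    # PIDs running inside the container
```

The runtime wrote those files when the container started; nothing else stands between them and the kernel.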

On Windows, containerization follows a different path. Many deployments rely on Hyper-V isolation, which introduces a virtual machine layer even when the interface suggests something lightweight.

This provides isolation but adds complexity and overhead. Even in process isolation mode, Windows relies on a combination of job objects and other subsystems that were not designed as a unified interface. The pieces exist, but they do not present a coherent model. A developer cannot navigate a single directory and observe resource limits in real time. Instead, information is scattered across APIs and administrative tools (spread thin).

This difference becomes obvious when debugging. On Linux, resource constraints are visible and editable through the filesystem. On Windows, understanding those constraints requires navigating tooling that was never designed for simple inspection.
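When a constrained process is sluggish or keeps dying, two control files usually tell the story (again assuming cgroup v2 and a group named `demo`):

```shell
# Has the group hit its memory ceiling? The "oom_kill" counter records
# how many of its processes the kernel has OOM-killed.
cat /sys/fs/cgroup/demo/memory.events

# Is the CPU quota throttling it? "nr_throttled" and "throttled_usec"
# show how often and for how long the group was held back.
cat /sys/fs/cgroup/demo/cpu.stat
```

One `cat` answers a question that would otherwise mean attaching a profiler or trawling event logs.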

Transparency and system design

Different philosophies shape the experience

Linux tends to expose kernel functionality through simple, consistent interfaces. The filesystem abstraction is used repeatedly because it is composable and familiar. This lowers the barrier to entry, and a developer who understands basic shell commands can experiment with resource limits quickly. Windows has historically favored abstraction, and complexity is hidden behind APIs and managed interfaces. This produces a polished surface but limits direct control.

The job object system is powerful, but it demands real commitment to understand (and no small amount of patience). Performance data is available, but often through fragmented systems such as performance counters and WMI. These pieces were developed independently and do not present a unified model.

The result is a system where capabilities exist but are not easily discoverable or composable, and developers interact with tools rather than the system itself. When you are on Linux and a process acts up, you can immediately peek into its cgroup to see exactly what’s hitting a limit. On Windows, that same investigation feels like a chore, forcing you to navigate several different tools just to find the same answers.

The WSL paradox

Linux inside Windows proves the point

WSL attempts to bridge this gap by embedding Linux inside Windows. It succeeds in providing access to Linux tooling, but it also highlights the underlying limitation. When you run containers inside WSL, you are not using Windows resource management. You are using Linux cgroups inside a Linux kernel running in a virtualized environment.


The Windows host remains separate, and its native mechanisms are not part of that workflow. To provide the environment developers expect, Windows imports Linux rather than extending its own model. Docker Desktop reflects the same pattern. Containers run inside a Linux virtual machine. The experience feels native, but the underlying functionality is not provided by Windows itself.



Practical consequences

Where this difference actually shows up

These differences show up for me in everyday development. When you are on Linux, running a local Kubernetes cluster is straightforward because tools like kind or Minikube use the host kernel directly. Your resource limits behave exactly as they will in production, and you can debug everything using standard system tools. On Windows, that same setup usually ends up tucked inside a virtual machine, and you are constantly forced to account for that extra layer between your workload and the hardware, which inevitably mediates how resources actually behave.


When something fails, you can’t just look at the container; you have to worry about the whole orchestration system and the virtualization environment simultaneously. You can see the same pattern in CI systems. On Linux, you can enforce limits via cgroups with almost zero overhead and manage the configuration with simple scripts.

Windows runners, by contrast, always seem to require more setup. Whether it’s specialized APIs, extra scripting layers, or full virtualization, the system is capable but never quite as direct. Over time, that friction adds up, which is why simpler systems are so much easier to maintain and reason about.
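On a systemd-based Linux runner, that directness is one command away. A hedged sketch using `systemd-run` (the `make -j4 test` build step is a stand-in for whatever your pipeline runs):

```shell
# Run a build step as a transient systemd scope with cgroup-enforced limits.
# systemd creates the cgroup, writes memory.max and cpu.max on your behalf,
# and removes the group when the command exits.
# CPUQuota=150% corresponds to one and a half cores.
systemd-run --scope -p MemoryMax=2G -p CPUQuota=150% -- make -j4 test
```

There is no agent to install and no API to call; the limits land in the same files you could have written by hand.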

A structural difference

Why this gap is hard to close

What makes the cgroup feature particularly embarrassing for Windows is that it reveals something fundamental about the trajectory of operating system design in the era of cloud computing and containerization.

Linux was not designed from the outset with containers in mind (contrary to popular belief). The cgroup functionality emerged incrementally, added by kernel developers who recognized the value of providing granular resource control through simple interfaces. Yet the feature fits so naturally within the Linux philosophy that it feels as though it has always been there.


The filesystem interface, the text-based control files, the ability to compose functionality with simple scripts: all of these characteristics align perfectly with the Unix traditions that Linux inherited and extended. Windows lacks this coherence when it comes to resource management and containerization. The features exist, in some form, scattered across the system, but they lack the unified vision and consistent interface that make Linux cgroups so powerful and so accessible.


A practical reality you cannot ignore

Microsoft has invested enormous resources in developing Windows containers, in improving Docker integration, and in building WSL, yet these efforts cannot overcome the fundamental architectural decisions made decades ago (history has momentum). The company is essentially trying to retrofit modern containerization capabilities onto a system designed for a different era, whereas Linux evolved alongside the containerization movement, adding the capabilities developers needed in a natural and coherent way.

I don’t expect Microsoft to rewrite the NT kernel to mirror the Unix philosophy; the momentum of decades is a difficult thing to pivot out of. As long as my primary interaction with a system involves navigating layers of abstraction just to see why a process is hitting a wall, the “first-class” label for Windows development feels more like a marketing goal than a technical reality. WSL is a brilliant bridge, but it’s ultimately a confession that the host’s own primitives weren’t built for the way we work now.


