3 things I automate with local AI that I’d never trust ChatGPT with


Cloud AI is powerful but not private. Local AI is private but less powerful. That trade-off is real, and trying to pick one over the other is the wrong framing. A better use of your time is to find tasks that require privacy but not much model intelligence, and then have local AI models automate them for you. Here are three such tasks that I’ve automated using on-device LLMs.

What is the local AI setup I’m using?

The hardware and software stack behind all three workflows

I’m using LM Studio as the main interface. It’s a simple graphical app that lets you download and run language models locally without touching a terminal. The model I’m running is Qwen 3.5 9B at 4-bit quantization, and I’m using it because it supports both vision (so it can analyze images) and tool calling (so it can actually do things, like write to files or talk to apps).

My machine is a Ryzen 5 5600G with 32GB of RAM and an RTX 3060 with 12GB of VRAM. If yours is in the same ballpark, these workflows should run fine. If you have a smaller GPU, you can try a smaller model. Qwen comes in different sizes and most of these workflows work even at lower parameter counts.

On top of LM Studio, I’ve also set up MCP (Model Context Protocol) servers. These are what give the model access to different tools, like your computer’s filesystem, or external apps like Notion and Asana. Without MCPs, the model can only talk to you—but with MCPs, it can actually do things for you.
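
For reference, MCP servers are declared in a small JSON config; LM Studio uses the same `mcpServers` format popularized by Claude Desktop. A minimal, hypothetical entry for the official filesystem server might look like this (the path is a placeholder you'd swap for your own folder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/home/you/Documents"
      ]
    }
  }
}
```

Once a server like this is registered, the model can request tool calls (read, write, list) against the allowed directory, and LM Studio asks you to approve them.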

Finally, I have an AI layer for voice processing and transcriptions. For this, I’m using Whisper with the RealtimeSTT Python library. It’s terminal-based, which can sound intimidating, but it’s fast and reliable. I used Claude to vibe code a Python script that lets me either drop in an audio file and get a transcription, or speak in real-time and have it transcribed when I’m done talking. That said, if you don’t want to deal with coding or the terminal, you can try OpenWhispr. It’s a bit slower in my experience, but completely graphical, and extremely user-friendly.



Log all your receipts into a budgeting CSV without typing a single thing

Screenshots in—spreadsheets out

Receipt photos and handwritten notes processed by LM Studio into a LibreOffice Calc budgeting CSV with Date, Merchant, Amount, and Category columns.

Traditional budget tracking involves sitting with all your receipts at the end of the day—or at the end of the week—and jotting down all your spending in a notebook or a spreadsheet. While some people genuinely love doing this and even find it meditative, for others this is an absolute chore—and a bore. If you feel negatively about punching numbers into a cell, but would prefer a comprehensive overview of your spending and finances, then you can use LLMs to help you out.

The first step is having a list of everywhere you spent money. Most payments leave some kind of trail. If you paid on your phone, the transactions should be logged inside your Apple Pay or Google Pay. Simply grab a screenshot of the payment confirmation. If you paid with cash, you should have a paper receipt. You can snap a picture of that.

Next, drop all of those screenshots and photos into LM Studio with Qwen 3.5 loaded. With an instructive prompt, the LLM can scan through those images one by one, read the relevant information—merchant, date, amount, category—and write that data directly to a CSV file using the filesystem MCP server. If the CSV already exists, it appends new rows—if it doesn’t, it creates one.

Here’s the prompt I use for this:

You have access to the filesystem. In this path I keep all my finances: [full file path]

I'm attaching a set of receipt images or payment screenshots. For each one, extract the following: 
- merchant name
- date
- total amount
- category (e.g. food, transport, utilities, entertainment).

Once you've extracted the data from all images, append it to the budgeting CSV file in the provided path in the format: Date, Merchant, Amount, Category. If the file doesn't exist, create it with those column headers first.

Don't ask for confirmation. Just process each image and write the data.

One thing worth knowing: crumpled receipts or ones with handwritten totals can occasionally be misread. I give the output a quick scan before closing the file—takes maybe 30 seconds but ensures there are no errors.
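
Under the hood, the append-or-create behavior the prompt asks for is plain file handling. If you ever want to post-process or sanity-check entries yourself, here's a minimal Python sketch of the same logic (the file name and sample rows are illustrative, not output from the model):

```python
import csv
import os

HEADERS = ["Date", "Merchant", "Amount", "Category"]

def append_expenses(csv_path, rows):
    """Append extracted receipt rows to the budgeting CSV,
    creating it with headers first if it doesn't exist yet."""
    new_file = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(HEADERS)
        writer.writerows(rows)

# Example rows as the model might extract them from two receipts
append_expenses("budget.csv", [
    ("2025-06-01", "Corner Cafe", "4.50", "food"),
    ("2025-06-01", "Metro", "2.75", "transport"),
])
```

This mirrors what the filesystem MCP server does on the model's behalf: check for the file, write headers once, append everything else.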

Turn unstructured voice recordings into structured written notes

Give your messy transcription a Zettelkasten makeover

Voice transcription about sleep and screen stimulation processed by LM Studio into structured Atomic Notes markdown saved to a local folder.

I prefer talking to typing when I’m working through a big idea. It’s faster and less strenuous on my wrists. The problem is that my voice dumps tend to be extremely unstructured, filled with filler words, and terrible for storage and retrieval. If you can relate to this problem, then this workflow is for you.

First, record your voice using your phone or dedicated voice recorder, whatever you prefer. Then transcribe it using Whisper, which will leave you with your messy thought dump written in text. Finally, push that messy thought dump through an LLM to structure it.

Depending on the content, especially if it was a really long thought dump, you can instruct the LLM to break it up into multiple Zettelkasten-style atomic notes—small, self-contained notes that each cover one idea. That format works well if you’re building a knowledge base rather than just capturing a one-off thought.

From there, the model can either save the notes directly to my computer as markdown files using the filesystem MCP server, or push them into Notion using the Notion MCP server. If you use Obsidian, pointing the filesystem MCP at your vault folder means your notes land there automatically, ready to link and build on.
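
The save step the prompt delegates to the filesystem MCP server boils down to slugifying a title into a filename and writing a markdown file. A sketch of that logic, in case you want to run it yourself (the folder, note title, and body are made up for the example):

```python
import re
from pathlib import Path

def save_note(folder, title, body):
    """Write one atomic note as a markdown file, deriving a
    filesystem-safe filename from the note's title."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    path = Path(folder) / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"# {title}\n\n{body}\n")
    return path

save_note(
    "notes",
    "Screens Before Bed",
    "Bright screens late in the evening seem to push back my sleep onset.",
)
```

Point `folder` at your Obsidian vault and the notes land there, ready to link.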

Here’s the prompt I use:

Below is a raw voice transcription. It's unstructured and may be repetitive or rambling—that's expected.

Your job is to reorganize this into clear, structured notes. Break it into logical sections with headers. Under each header, use bullet points for the key ideas.

If the content contains distinct self-contained ideas, also produce a set of atomic notes at the end—each one a single idea with a short title, written in 2-4 sentences.

Save the structured notes as a markdown file at [YOUR FOLDER PATH]/notes/[auto-generate a descriptive filename based on the content].md

Transcription:
[PASTE TRANSCRIPTION HERE]

The result isn’t always perfect, but it’s consistently useful. Even if I need to edit 20% of what comes out, I’m still spending far less time than I would typing out these notes.



Use local AI as your personal task router

Stop manually triaging tasks across apps—let the model do it

LM Studio chat using the Notion MCP to add a game entry to a Notion Wishlist database, with the new page highlighted in Notion.

If you’re anything like me, you probably use multiple productivity apps—Notion for project planning, Asana for work tasks, Todoist for quick personal to-dos, and Google Calendar for anything time-sensitive. Each of these apps is better at something than the others. There’s no app that is just objectively better at everything. In fact, I’d argue most people stick to using just one app not because they want to, but because maintaining multiple apps is just too much work.

If you share my sentiments, then you’d be happy to know that a local LLM can work as a task router.

The idea is straightforward. You dump your tasks—in whatever form they’re in, rough or structured—into the model. With the right prompt and the MCP servers connected, it distributes those tasks across your apps automatically. Professional tasks go to Asana, personal projects go to Notion, and deadlines go to Google Calendar. You describe your preferences once, and it handles the sorting from there.

The way I use this ties directly into the previous workflow. My voice recordings, once processed into structured notes, get saved to my Obsidian vault. That vault acts as my source of truth—the place where everything lands before it goes anywhere else. From there, the LLM reads the new notes, identifies anything that’s an actionable task, and routes it to the right app based on my preferences.
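
The model does the actual classification, but the routing rule you describe in your prompt amounts to mapping task text to a destination app. A keyword-based sketch of the idea (the apps and keywords here are placeholders for whatever preferences you give the model):

```python
# Hypothetical routing table: keyword in the task -> destination app.
ROUTES = {
    "deadline": "Google Calendar",
    "meeting": "Google Calendar",
    "client": "Asana",
    "report": "Asana",
    "groceries": "Todoist",
}

def route_task(task, default="Notion"):
    """Pick a destination app based on keywords in the task text,
    falling back to a default app for everything else."""
    text = task.lower()
    for keyword, app in ROUTES.items():
        if keyword in text:
            return app
    return default

print(route_task("Send the client report by Friday"))  # Asana
```

An LLM generalizes far beyond literal keyword matching, of course—this just makes the "describe your preferences once" contract concrete.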

If the apps you use have MCP servers available—and a lot of them do—connecting them to LM Studio takes a few minutes. If an app doesn’t have an official MCP server but does expose an API, you can build a custom one. Vibe coding an MCP server is more approachable than it sounds, and Claude is particularly good at helping with it, especially since Anthropic, Claude’s developer, created the MCP standard in the first place.


We shouldn’t be dependent on ChatGPT for everything

All three of these workflows have the same thing in common—they involve working with data I wouldn’t want to feed into a third-party AI service. I don’t want ChatGPT or Gemini to know where I’m spending my money, or about my thoughts and projects. Running a local model means I get intelligent processing on that data without it leaving my machine.



The Windows Insider Program is about to get much easier

Ed Bott / Elyse Betters Picaro / ZDNET



ZDNET’s key takeaways

  • Microsoft is making the Insider Program less complicated.
  • Beta channel will be a more reliable preview of the next retail release.
  • Other changes will allow testers to quickly enable/disable new features.

Last month, Microsoft took official notice of its customers’ many complaints about Windows 11. Pavan Davuluri, the executive vice president who runs the Windows and Devices group, promised sweeping changes to Windows 11. Today, the company announced the first of those changes in a post authored by Alec Oot, who’s been the principal group product manager for the Windows Insider Program since January 2024.

Those changes will streamline the Insider program, which has lost sight of its original goals in the past few years. (For a brief history of the program and what had gone wrong, see my post from last November: “The Windows Insider Program is a confusing mess.”)

Also: If Microsoft really wants to fix Windows 11, it should do these four things ASAP

If you’re currently participating in the Windows Insider Program, these are meaningful changes. Here’s what you can expect.

Simplifying the Insider channel lineup

Throughout the Windows 11 era, signing up for the Insider program has required choosing one of four channels using a dialog in Windows Settings. Here’s what those options look like today on one of my test PCs.

The current Insider channel lineup is confusing, to say the least. (Screenshot by Ed Bott/ZDNET)

Which channel should you choose? As the company admitted in today’s post, “the channel structure became confusing. It was not clear what channel to pick based on what you wanted to get out of the program.”

The new lineup consists of two primary channels: Experimental and Beta. The Release Preview channel will still be available, primarily for the benefit of corporate customers who want early access to production builds a few days before their official release. That option will be available under the Advanced Options section.

This simplified lineup is easier to follow. Beta is the upcoming retail release; Experimental is for the adventurous. (Screenshot courtesy of Microsoft)

Here’s Microsoft’s official description of what’s in each channel now, with the company’s emphasis retained:

  • Experimental replaces what were previously the Dev and Canary channels. The name is deliberate: you’re getting early access to features under active development, with the understanding that what you see may change, get delayed, or not ship at all. We’ve heard your feedback that you want to access and contribute to features early in development and this is the channel to do that.
  • Beta is a refresh of the previous Beta Channel and previews what we plan to ship in the coming weeks. The big change: we’re ending gradual feature rollouts in Beta. When we announce a feature in a Beta update and you take that update, you will have that feature. You may occasionally see small differences within a feature as we test variations, but the feature itself will always be on your device.

These changes will apply to the Windows Insider Program for Business as well.

Offering a choice of platforms

For those testers who want to tinker with the bleeding edge of Windows development, a few additional options will be available in the Experimental channel. These advanced options will allow you to choose a platform aligned with a currently supported retail build. Currently, that’s Windows 11 version 25H2 or 26H1, with the latter being exclusively for new hardware arriving soon with Snapdragon X2 Arm chips.

Also: Microsoft account vs. local account: How to choose

There will also be a Future Platforms option, which represents a preview build that is not aligned to a retail version of Windows. According to today’s announcement, this option is “aimed at users who are looking to be at the forefront of platform development. Insiders looking for the earliest access to features should remain on a version aligned to a retail build.”

The Future Platforms option is the equivalent of the current Canary channel. (Screenshot courtesy of Microsoft)

Minimizing the chaos of Controlled Feature Rollout

Last month, I urged Microsoft to stop using its Controlled Feature Rollout technology, especially for builds in the Beta channel. Apparently, someone in Redmond was listening.

One of the most common questions we receive from Insiders is “why don’t I have access to a feature that’s been announced in a WIP blog?” This is usually due to a technology called Controlled Feature Rollout (CFR), a gradual process of rolling out new features to ensure quality before releasing to wider audiences. These gradual rollouts are an industry standard that help us measure impact before releasing more broadly. But they also make your experience unpredictable and often mean you don’t get the new features that motivated many of you to join the Insider program to begin with.

Moving forward, Insider builds in the Beta channel will no longer suffer from this gradual rollout of features. Meanwhile, the company says, “Insiders in the Experimental channel will have a new ability to enable or disable specific features via the new Feature Flags page on the Windows Insider Program settings page.”

Builds in the Experimental channel will include the option to turn new features on or off. (Screenshot courtesy of Microsoft)

Not every feature will be available from this list, but the intent is to add those flags for “visible new features” that are announced as part of a new Insider build.

Making it easier to change channels

The final change announced today is one I didn’t see coming. Historically, leaving the Windows Insider Program or downgrading a channel (from Dev to Beta, for example) has required a full wipe and reinstall. That’s a major hurdle and a big impediment to anyone who doesn’t have the time or technical skills to do that sort of migration.

Also: Why Microsoft is forcing Windows 11 25H2 update on all eligible PCs

Beginning with the new channel lineup, it should be easier to change channels or leave the program without jumping through a bunch of hoops.

To make this a more streamlined and consistent experience, we’re making some behind the scenes changes to enable Insider builds to use an in-place upgrade (IPU) to hop between versions. This will allow in most cases Insiders to move between Experimental, Beta, and Release Preview on the same Windows core version, or leave the program without a clean install. An IPU takes a bit more time than your normal update but migrates your apps, settings, and data in-place.

If you’ve chosen one of the future platforms from the Experimental channel, those options don’t apply. To move back to a supported retail platform, you’ll need to do a clean install.

Also: Apple, Google, and Microsoft join Anthropic’s Project Glasswing to defend world’s most critical software

The upshot of all these changes is that things should be a lot clearer for anyone trying to figure out what’s coming in the next big feature update. Beta channel builds, for example, should offer a more accurate preview of the upcoming release, so over the next month or two we should get a better picture of the 26H2 release, due in October.

When can we start to see those changes rolling out to the general public? Stay tuned.




