3 things I wish I knew before vibe coding my first app with GitHub Copilot


I’ve mostly tried vibe coding for fun. But I realized that, done right, it can produce quality software, especially if you’re a vibe coder with programming knowledge. That’s why I’ve been testing different techniques and found some that work really well.

Plan first, then implement

The planning phase is where you can discuss the requirements

A robot dictates computer code to a human being seated at a computer. Credit: Sydney Louw Butler / How-To Geek / Midjourney

One of the most common mistakes I used to make when vibe coding was jumping straight into implementation. In other words, I'd start coding the project right away, the idea being to get it done as quickly as I could. The result? A half-baked product with broken features. Not creating a plan beforehand caused quite a few issues in the long run.

First, I didn’t know how the AI would approach a particular feature implementation, and it could go the wrong way. For example, using N+1 queries instead of a SQL JOIN (yes, I’ve run into this quite a lot). In the planning phase, you can tell the model up front the most efficient way to handle these.
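To make the N+1 point concrete, here's a minimal sketch using an in-memory SQLite database. The table and column names are invented for illustration; the point is that the loop issues one query per row, while the JOIN fetches everything in a single query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Linus');
    INSERT INTO posts VALUES (1, 1, 'Plans'), (2, 2, 'Kernels');
""")

# N+1 pattern: one query for the list, then one more query per row.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
n_plus_one = []
for author_id, name in authors:
    titles = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()
    n_plus_one.append((name, titles[0][0]))

# Same result with a single JOIN, which is what I ask for in the plan.
joined = conn.execute("""
    SELECT a.name, p.title
    FROM authors a
    JOIN posts p ON p.author_id = a.id
    ORDER BY a.id
""").fetchall()
```

With two authors the difference is invisible, but with thousands of rows, the loop version turns into thousands of round trips to the database.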

Second, I can discuss any changes I want in the planning phase. After reading the plan for how the AI will tackle the request, I can ask it to change anything before implementation. This saves quite a bit of time, because changing something after the code has already been written is a hassle.

Third, and this might be the most important: in my experience, if you ask the AI to produce a detailed plan for a feature and then ask it to follow that plan when writing the code, the output is significantly better. This might be because the AI has something concrete to guide its decisions. Jumping straight to implementation, on the other hand, keeps things a bit blurry.

GitHub Copilot has a Plan mode for this. But in my case, I mostly use the Ask mode for planning and refining that plan.


Choose your models depending on what you want to do

Best models also cost the most tokens

ChatGPT, Claude, and Gemini logos over a code editor with some blurred code written. Credit: Zunaid Ali / How-To Geek

You may feel tempted to always use heavy, thinking models such as Claude Opus. But remember that heavy models will also drain your tokens faster than you can even finish writing the next prompt. That’s why I’ve grouped different AI models depending on the task at hand.

For simple discussions and questions, I use something like Claude Haiku. For planning and architecting, Claude Opus does a great job, just like a senior engineer. For implementing small to medium features, Claude Sonnet or OpenAI Codex is enough. For UI work, I sometimes use Gemini.

Through much trial and error, I’ve found the best use cases for each model. This allows me to save and plan my token usage so that I don’t overuse any for small tasks and don’t end up using a less powerful model for a heavy task.
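My grouping boils down to a simple routing table. This is a sketch of my own convention, not anything Copilot exposes; the model names are the ones mentioned above, and the task categories are assumptions:

```python
# Hypothetical task-to-model mapping, reflecting the groupings described
# in the text. Unknown task types fall back to the cheapest model to
# avoid burning tokens by accident.
MODEL_FOR_TASK = {
    "discussion": "Claude Haiku",
    "planning": "Claude Opus",
    "implementation": "Claude Sonnet",
    "ui": "Gemini",
}

def pick_model(task: str) -> str:
    """Return the model tier for a task, defaulting to the cheapest."""
    return MODEL_FOR_TASK.get(task, "Claude Haiku")
```

Writing the mapping down, even informally, keeps you from reflexively reaching for the most expensive model on every prompt.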

The good thing about GitHub Copilot is that it has an Auto mode. By selecting it, you let the IDE decide which model to use based on the nature of your request. For heavy tasks, it will automatically switch to a more powerful model.


Add images, documents, and other supplementary resources

One image can describe a feature without you writing anything

Illustration of a computer monitor showing a JavaScript symbol

Writing detailed prompts is fine. However, sometimes, it’s difficult to explain all the nitty-gritty details of the feature you’d like to add to your app. This is especially true for vibe coding the UI and the frontend. That’s where adding attachments, such as images, comes in handy.

If you have a Figma or UI design for your product, you can simply take a screenshot and add it with your prompt. In case you don’t have a design, you can use an existing product’s screenshot as a baseline and adjust your prompt to tell the AI model what will differ. For example, if you want to vibe code a productivity app, you can check out other productivity apps for inspiration.

Images aren’t the only things you can add. If you have documentation as PDF files or web links to documentation, you can add them as well. This is useful when you start a new project or create a new chat window. Repeating the same explanation is cumbersome. Having an explanation or requirement document allows you to attach it to the initial prompts so that the AI gets a rough idea of your project.


One thing I do often is ask the AI to create documentation files for every feature I implement. In the long run, this helps quite a lot to keep track of what’s done, how it works, and what’s left. The AI also updates its context and knowledge accordingly.

You do have to keep your token management in mind, though, since adding large files can cost a lot of tokens. Do some trial and error to find the sweet spot between attaching files and writing detailed prompts so you can get the best of both.


A little change in strategy can make a big difference

The latest AI tools, such as Claude and Gemini, have made software development extremely accessible to non-coders. With a few changes in how you give prompts and instructions, you can turn toy projects into more real-world apps.






After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also allows for putting apps in floating windows, and they minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older feature of the exact same name, but now it can be used for apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. Select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot in the menu.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the home gesture—you won’t actually go to the home screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the entire bubble completely, simply drag it to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles enables you to essentially use two apps in full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your qualified Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.



