
From 60 to 25 Hours: Practical Tips for Effective AI-Assisted Development

8 min read

TL;DR

  • Cut development time by 60% (from 60 to 25 hours) using AI tools effectively
  • Choose your tech stack yourself - don’t let AI make these decisions
  • Set up your project structure manually - AI tends to use outdated packages and weird folder structures
  • Break down tasks into small, specific requests for AI - don’t ask it to build entire features at once
  • Provide relevant documentation snippets to help AI understand framework-specific requirements
  • Keep context minimal - attach only relevant files, not entire codebases
  • Start fresh sessions for new features to avoid recurring patterns
  • Always refactor AI-generated code - it tends to be messy and needs cleanup
  • Try different AI models to find what works best for you (Claude, Gemini-2.5, etc.)

Bottom line: Use AI to handle tedious tasks while maintaining control over architecture and design decisions


I was working on a project with Copilot when I noticed I'd finished way faster than I'd ever finished a similar project in the past: 25 vs 60 hours. AMAZING, no?! 🔥

I was turbo-charged! I was able to write code as fast as my ADHD mind was thinking about different aspects of it: layout, styling, state management, comms with the back-end API, DB schema and relationships, error handling…

All of this I was able to keep in my head AND be productive! Which is usually not the case, and I end up with a bunch of TODO comments and no code (at first).

And then refactoring all that multiple times using a different design pattern!

So this blog post is a guide to making yourself productive with an AI copilot/Cursor/whatever, instead of fighting with the LLM, swearing at it, and rejecting all of its work.

You don’t need AI to select the stack

Maybe a little bit of DeepSearch.

When starting you should already know what tech stack and patterns you’re going to use.

The only reason to use AI at the start of a project is if you want to compare frameworks for your particular use case, e.g. zustand vs jotai.

In my case I knew I had to use Astro, because it was a front-end project with only some back-end required. It was perfect.

This was a “live app”, i.e. all users needed to be notified of the changes, so obviously I had to use WebSockets for that.

And then data was of predictable shape, had lots of relationships, so it meant SQL, and PostgreSQL in particular. LOVE it! I’m even subscribed to Postgres Weekly! 🐘

I also wanted Drizzle ORM to manage all the migrations and CRUD operations.

I knew that multiple components in React would need to interact with the same state, so I went with effector. It's my favourite because of sample: you can do some crazy logic with it, e.g.

sample({
  // Watch for the submitForm event; when it gets triggered ...
  clock: submitForm,
  // ... take the $userName store's state ...
  source: $userName,
  // ... and combine that state (`name`) with submitForm's payload (`password`)
  // into the parameters that signInFx accepts ...
  fn: (name, password) => ({ name, password }),
  // ... and then call signInFx({ name, password })
  target: signInFx,
});

Isn’t that awesome?! AND it all lives outside of your UI logic. SO clean!
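If you haven't used effector: here's a toy model of the data flow that sample sets up, in plain TypeScript. This is not effector's real API, just the idea, and all the names mirror the snippet above for illustration.

```typescript
// Toy model of what sample wires together: when the clock fires,
// read the source, run fn over both values, and call the target.
function toySample<S, C, R>(opts: {
  source: () => S;                 // reads current store state (like $userName)
  fn: (source: S, clock: C) => R;  // combines store state with the event payload
  target: (payload: R) => void;    // receives the result (like signInFx)
}): (clockPayload: C) => void {
  // The returned function plays the role of the clock event (submitForm)
  return (clockPayload) => opts.target(opts.fn(opts.source(), clockPayload));
}

// Wiring that mirrors the snippet above (names are illustrative)
const calls: Array<{ name: string; password: string }> = [];
const submitForm = toySample({
  source: () => "alice",                               // $userName's current value
  fn: (name, password: string) => ({ name, password }),
  target: (p) => calls.push(p),                        // stands in for signInFx
});

submitForm("s3cret");
// calls is now [{ name: "alice", password: "s3cret" }]
```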

I could've also gone with nanostores, which Astro promotes.

And, obviously, shadcn for UI components. Is there any other option?

Ok, I like HeroUI as well.

The rest feel raw or just weird.

To sum up:

Don’t let AI choose your tech stack for you.

Init the project

Don't ask AI to do it. It's just gonna do a shit job, use outdated packages, and set up a weird-ass folder structure.

Set it up yourself. The way YOU like. The way YOU are used to working.

So I did. I basically just followed the Astro setup guide on shadcn:

npx create-astro@latest astro-app --template with-tailwindcss --install --add react --git

I needed a dashboard. So I just took blocks from shadcn and adjusted items to just the ones I need.

[Image: shadcn dashboard blocks]

No AI. You're gonna spend more time explaining to AI what you want.

Same with things like a Dockerfile. There are templates you can just copy and tweak. Do it yourself, it's quicker.
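If you want a starting point, a typical multi-stage Dockerfile for an Astro SSR app looks something like this. It assumes the @astrojs/node adapter in standalone mode; adjust paths, ports, and package manager to your setup.

```dockerfile
# Build stage: install deps and build the Astro app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only production deps and the built output
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 4321
# Entry path assumes @astrojs/node in standalone mode
CMD ["node", "./dist/server/entry.mjs"]
```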

Release the Kraken AI!

Once you have the setup ready, it’s time to outsource some work.

The only way I felt productive and content with what AI was giving me was when I described, step by step, how I'd do it myself.

So for the form I went:

Create an AddItem form React component using Tanstack Form.
It should have 3 inputs: customer (text, required), order number (text, required), vehicle (select field with Van and Truck as options, required)
It should have 2 buttons add (submit) and reset (nullify all the values in all the fields).
onSubmit create a function with empty body. We'll implement it later.
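For reference, that prompt pins down a data model and validation rules that look roughly like this. This is a framework-free sketch (Tanstack Form would wrap this state for you), and the type and function names are made up for illustration.

```typescript
// Data model implied by the prompt: 3 fields, all required,
// vehicle limited to two options.
type Vehicle = "Van" | "Truck";

interface AddItemValues {
  customer: string;
  orderNumber: string;
  vehicle: Vehicle | ""; // "" = not selected yet
}

// What the reset button restores
const emptyValues: AddItemValues = { customer: "", orderNumber: "", vehicle: "" };

// Required-field validation for all three inputs
function validate(values: AddItemValues): string[] {
  const errors: string[] = [];
  if (!values.customer.trim()) errors.push("customer is required");
  if (!values.orderNumber.trim()) errors.push("order number is required");
  if (!values.vehicle) errors.push("vehicle is required");
  return errors;
}
```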

Keep it short

Notice how I didn't let it go further. If I hadn't, it might've gone ahead and created some weird function and an /api/form route; it would've made all the decisions for me.

And I didn’t want it to. I wanted it to make it the way I think is right.

So that’s one of the key secrets to make AI productive.

Stop Copilot from running away. Don't ask it to do the entire project. Just ask it to do a component.

Then an endpoint. Then a util function. Then ask it to put it all together. AI is like a child: ask it too many things at once and it will ignore the important details you gave it.

It’s never OpenAI

For programming, I noticed, I never use any models released by OpenAI.

It’s always Claude, either Sonnet 3.5 or 3.7 or, now, Gemini-2.5.

Gemini-2.5 is actually considered the best as of today.

Although GPT-4.1, they said, should be good at coding. And vibing…? Or is that GPT-4.5?

Anyway, switch some models up, see which ones vibe the best with you.

It’s all about vibes these days, isn’t it?

Help it see what you see

I've noticed that Cursor does web search, but Copilot doesn't yet. So the model won't know everything, especially if the tech is new or rarely used.

Like Astro. All models are really shit at Astro. Try asking one to create an action with a certain input and output. Any of them is gonna do such a shit job that you'll start doubting the rest of its abilities.

Insert snippets from the Get Started or Example pages of the docs. This context really helps the AI generate the code you have in your head, and not random shit from the internet.

Or you can set up an MCP server that does the docs search for you.
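For example, a docs-search MCP server like Context7 can be wired into Cursor with a config along these lines. This is a sketch of the mcpServers format; check the server's own docs for the exact command and the config file location for your editor.

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```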

I haven’t done that for myself yet tho.

Don’t overdo the context

You know how most (?) AI code editors let you attach files for context? This is an awesome feature!

They also let you attach entire codebases. This is dumb.

Remember, LLMs are like children: you give them too much, they’ll forget most of it, you’ll get garbage.

Attach only the relevant files for the ask.

Don’t let it see all of it. In fact…

Close the session once you're done with the feature/ask. Let it start over.

That way it doesn't get stuck in a recurring shit-generating pattern. Let it be a goldfish with a memory limited to one feature.

Refactor

Once the feature is done, you won't be able to look at it: it's gonna be heinous.

ABR - Always Be Refactoring.

The generated code will be all over the place, it’s gonna have weird if/else logic, triple and quadruple ternary operators, repeated code… Complete unreadable mess!

So you need to go through it with a fresh eye, move some functions out, change the logic… Refactor!

Notice your wins

If you don't notice any significant wins when working with AI, then either a) you've been working with AI for ages now and are used to it (not the case today, I'd say) or b) it didn't work for you: fix it.

For me I noticed these wins:

  • I switched from React provider pattern (I forgot I was using Astro) to state management in “vanilla JS” way in under 2 mins;
  • I quickly scaffolded a gigantic switch statement for handling events for WebSockets on the server and associated actions it needs to take, e.g. call to DB, filter and map things out, handle errors, etc.;
  • I created effector stores and events and connected all of them in multiple samples in under 5 mins with minor refactors, a thing I was always dreading to do even considering my admiration for sample;
  • I went from using react-use-websocket hook, to using socket.io after realising that islands architecture wouldn’t work with the hook, to native implementation of WebSockets, after learning that socket.io has a compatibility issue with my server;
  • I scaffolded (and later refactored into a decent code) forms with validation and styling with a UI and form libraries of my choice in 5 mins. And I hate doing forms. This is the best win of all.
  • I then refactored, cleaned and made all of my components, util functions, API endpoints, etc. small, testable, humanly readable, and DRY in less than 10 mins.
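That WebSocket event switch from the second bullet looked roughly like this in shape. The event names, payloads, and handler bodies here are made up for illustration; the point is the discriminated union plus exhaustive switch that AI is quick to scaffold.

```typescript
// Hypothetical server-side WebSocket events as a discriminated union
type WsEvent =
  | { type: "item:add"; payload: { customer: string; orderNumber: string } }
  | { type: "item:remove"; payload: { id: number } }
  | { type: "ping" };

function handleEvent(event: WsEvent): string {
  switch (event.type) {
    case "item:add":
      // e.g. insert into the DB, then broadcast to the other clients
      return `added order ${event.payload.orderNumber}`;
    case "item:remove":
      // e.g. delete from the DB, handle not-found errors
      return `removed item ${event.payload.id}`;
    case "ping":
      return "pong";
    default: {
      // Exhaustiveness check: the compiler errors here if a case is missed
      const _exhaustive: never = event;
      return "unknown";
    }
  }
}
```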

The most important win is that I didn't lose myself: the coder, the programmer who likes to write code, create, and make shit work.

I just outsourced the annoying and tedious bits that made my job soulless and energy draining.

That’s what AI is good for.