Programming

Juan Pablo Balarini • 03 JUL 2025

Vibe Coding: How LLMs are reshaping the way we build software


A new trend is emerging in the world of software development, often referred to as vibe coding. This term, introduced by Andrej Karpathy (former Tesla AI lead), describes a workflow where you don’t code line by line—you guide.

Instead of writing every function from scratch, you describe the general idea or vibe of what you want. A Large Language Model (LLM) like GPT-4o, Claude, or Gemini handles the rest. You iterate, refine, and build with the model, not just using it as a tool but treating it like a collaborator.

Let’s explore how vibe coding works, how we use it at Eagerworks, and what it means for the future of software engineering.

What is vibe coding?

At its core, vibe coding means communicating intent rather than implementation. You give the LLM a broad goal or prompt, and it suggests the code. Through iterations—clarifying, rejecting, tweaking—you and the model refine the solution together.

Instead of writing:

class MessagesController < ApplicationController
  def create
    user = params[:user]
    ...
  end

  ...
end

…you might say:

“I want to create a messaging feature like in Slack, with public channels and no notifications.”

The model then generates the necessary files, components, and logic. It's coding by writing plain English, and it works because of the powerful LLMs behind the scenes.
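To make that concrete, here is a plain-Ruby sketch of the kind of domain logic a model might scaffold from that prompt. The `Channel` and `Message` names and methods are illustrative assumptions, not what any particular model will actually emit:

```ruby
# Plain-Ruby sketch of the domain an LLM might scaffold for
# "a messaging feature like in Slack, with public channels".
# Class and method names are illustrative.

Message = Struct.new(:author, :body, :sent_at)

class Channel
  attr_reader :name, :messages

  def initialize(name)
    @name = name # e.g. "#general"
    @messages = []
  end

  # Append a message; no notifications, per the prompt.
  def post(author, body)
    message = Message.new(author, body, Time.now)
    @messages << message
    message
  end
end

general = Channel.new("#general")
general.post("alice", "Hello, world!")
puts general.messages.length # => 1
```

In a real Rails app the model would generate ActiveRecord models, controllers, and views instead, but the point is the same: you described intent, and the structure appeared.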

How LLMs power Vibe Coding tools

Tools like Cursor are specifically designed for vibe coding. They integrate LLMs into your IDE and unlock workflows like:

Context-aware code generation

LLMs generate or modify code in-place without copy-pasting between tools like ChatGPT and your editor.

Agent-like capabilities

In “agent mode,” LLMs can run terminal commands, create/delete files, and apply suggested changes autonomously.

Custom project rules

You can define how you want to work via project config files (e.g., .cursor/rules), which help LLMs follow specific conventions and architectural styles from the start.

Step-by-step implementation planning

Complex ideas are broken into smaller tasks with detailed execution plans. Tools like Repo Prompt let you give the LLM full access to your codebase for deeper context.

Iterative refinement

You don’t expect perfection on the first try. The LLM learns from your feedback, improving outputs in real-time and streamlining debugging.

 

How we do Vibe Coding at Eagerworks

Here’s a real example of our current workflow for building a Slack-like messaging app from scratch:

1. Project setup: create a new project

Install all the necessary packages. If you are using a framework like Rails, you can do that with:

rails new messaging_app --css tailwind --database=postgresql
cd messaging_app

2. Bring your own .cursor/rules

You can define specific rules, for example, a rails.mdc that tells Cursor which versions and libraries to use when generating code, or more general rules, like what kind of project we are building and how you like to work. More on this below.

mkdir -p .cursor/rules && cp ../template_project/.cursor/rules/* .cursor/rules/
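As an illustration, a `rails.mdc` rule file might look something like this. The exact contents are up to you; this is a hypothetical sketch, not our actual file:

```
---
description: Rails conventions for this project
globs: ["**/*.rb"]
---

- Use Rails 8 with Hotwire (Turbo + Stimulus); no React
- Style everything with Tailwind CSS utility classes
- Prefer service objects over fat controllers
- Write system tests with Minitest
```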

3. We use a docs/templates folder that contains two files: idea_request_prompt.md and planner_prompt.md

The first one allows the LLM to generate high-level features (e.g., notifications, file sharing, user management, real-time) from a high-level goal: “Build a Slack-like web app.” The second one takes the output from the first and builds a detailed step-by-step implementation plan.

mkdir -p docs/templates
cp ../template_project/docs/templates/idea_request_prompt.md docs/templates
cp ../template_project/docs/templates/planner_prompt.md docs/templates
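The idea request template can be as simple as a prompt with a placeholder. A hypothetical sketch of what idea_request_prompt.md might contain (ours differs, but the shape is similar):

```markdown
You are a senior product engineer. I want to build: {{IDEA}}

Produce a feature list for an MVP:
- Group features into high-level areas (auth, channels, messaging, ...)
- Give each feature a one-line description
- Flag anything that adds significant complexity
- Ask me clarifying questions before finalizing the list
```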

4. Open a new chat in Cursor in Agent mode and include the idea request prompt with @idea_request_prompt.md.

Select the best model available to you (currently Gemini or Claude). Then, paste the following prompt:

Use this as a template. The {{IDEA}} is a Slack-like web app. Help me come up with a feature list here.

5. Iterate the output as much as you like.

Be specific about what you want and what you don’t. For example:

I don't want:
- real-time functionality
- native apps or push notifications
- workspaces/organizations
- file sharing
- saved items
- integrations
- accessibility
- customization options
- omniauth (for now)

Generate some pre-defined channels: #random, #general. Prioritize having a mock layout in the first steps.

Once you are confident with the output, ask Cursor to save it in docs/project_description.md. We now have a comprehensive document outlining the requirements and a set of high-level features we aim to develop.

6. We will now use project_description.md as input for the next step, which involves writing an implementation plan with detailed tasks and their dependencies.

For this step, it's a good idea to be explicit and paste the entire codebase so far (it should still be small) as context. We recommend using Repo Prompt for this. Open a new chat, reference the project description with @project_description.md, and paste the following prompt:

Use @project_description.md as {{PROJECT_REQUEST}} to create a plan.
@rails.mdc contains the technical information of my project

This is my codebase so far:
<current_codebase>
PASTE CODEBASE HERE FROM REPO PROMPT
</current_codebase>

Save as project_steps.md

We will end with a file in docs/project_steps.md that outlines a step-by-step to-do list for building what we want.
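The resulting plan is essentially a checklist the LLM can mark off as it works. A hypothetical excerpt of what docs/project_steps.md might look like:

```markdown
## Implementation plan

- [ ] Step 1: Generate User model and authentication (depends on: none)
- [ ] Step 2: Generate Channel model, seed #general and #random (depends on: Step 1)
- [ ] Step 3: Mock channel layout with Tailwind, no real data yet (depends on: Step 2)
- [ ] Step 4: Message model with create/list actions (depends on: Steps 2-3)
```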

(Bonus: you can define a .repo_ignore file to tell Repo Prompt which folders to ignore)
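Assuming .repo_ignore follows .gitignore-style patterns, it might look like this for a Rails app:

```
# Hypothetical .repo_ignore: folders Repo Prompt can skip
node_modules/
tmp/
log/
vendor/
public/assets/
```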

7. We want to ensure Cursor is aware of what we are building, where our implementation file is located, and how we will proceed from now on.

For that, we create a general rule in .cursor/rules/general.mdc with the following:

- We are building a Slack-like messaging web app
- Find the steps in @project_steps.md
- Work on whatever step(s) I tell you to, and only those steps
- Once those steps are done, mark as completed
- Then stop, I will give you further instructions

We are now ready to start building our app! Go ahead and ask Cursor to start working on your features. Start a new chat and ask it to work on the first steps:

Work on steps 1-2

8. Cursor will implement steps 1 and 2 and mark them as completed in the project_steps.md file.

If it encounters any errors, it will attempt to correct them. If you notice anything that doesn't make sense or could be improved, just ask Cursor to update it.

That’s it! You can ask Cursor to keep implementing the remaining steps until the process is complete.

 

How LLMs work (and what they don’t do)

To vibe code effectively, it helps to understand how LLMs actually function.

What they are

LLMs are massive neural networks trained on huge datasets (think: large chunks of the internet). They're designed to predict the next word in a sequence. That’s it.
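You can build intuition for next-word prediction with a toy bigram model: count which word follows which in a corpus, then always pick the most frequent successor. Real LLMs use massive neural networks over tokens, but the objective has the same shape. This is an illustrative sketch, not how any production model works:

```ruby
# Toy bigram "language model": predicts the next word as the
# most frequent follower of the current word in the training text.
corpus = "the model writes the code and the model fixes the bug"

# Count follower frequencies, e.g. counts["the"]["model"] => 2
counts = Hash.new { |h, k| h[k] = Hash.new(0) }
corpus.split.each_cons(2) { |a, b| counts[a][b] += 1 }

def predict(counts, word)
  followers = counts[word]
  return nil if followers.empty?
  followers.max_by { |_, n| n }.first # most frequent next word
end

puts predict(counts, "the") # => "model" (follows "the" twice; "code" and "bug" once)
```

Scale the corpus up to much of the internet and swap counting for a neural network, and you have the core of what an LLM does.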

What they aren’t

LLMs don’t think or reason like humans. They don’t understand code in the traditional sense. They generate likely-sounding outputs based on statistical patterns. If a solution wasn’t in their training data, they might miss the mark.

Context limitations

LLMs can only consider a limited number of tokens (text pieces) at once. Most tools don’t send your whole codebase—just relevant parts via @file_reference. To overcome this, tools like Repo Prompt let you provide full-context snapshots.
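A common rough heuristic for English text is about four characters per token, which lets you sanity-check whether a prompt will fit in a context window. Real tokenizers (e.g. BPE) vary, so treat this as an assumption-laden estimate, not an exact count:

```ruby
# Rough token estimate using the ~4 characters/token heuristic
# for English text. Real tokenizers give exact counts; this is
# only a ballpark sanity check.
CHARS_PER_TOKEN = 4.0

def estimate_tokens(text)
  (text.length / CHARS_PER_TOKEN).ceil
end

def fits_in_context?(text, context_window_tokens)
  estimate_tokens(text) <= context_window_tokens
end

prompt = "Build a Slack-like messaging app with public channels." # 54 chars
puts estimate_tokens(prompt)           # => 14
puts fits_in_context?(prompt, 128_000) # => true
```

This is why full-codebase snapshots matter: a large repo can blow past even a generous context window, and the model simply cannot see what was truncated.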

 

Is Vibe Coding the future?

In many ways, yes.

Vibe coding:

✅ Lowers the barrier to building prototypes
✅ Speeds up boilerplate-heavy workflows
✅ Helps non-experts build functioning apps

But it also requires:

⚠️ Strong oversight to prevent low-quality or insecure code
⚠️ A deep understanding of prompting and code architecture
⚠️ Continuous iteration as LLMs and tools evolve

Vibe coding doesn’t eliminate developers—it elevates them into AI collaborators, architects, and product thinkers. By understanding the "how" and "why" behind these tools, developers can better leverage them while also recognizing the importance of human oversight and the foundational knowledge that still drives effective software creation.

It marks a major shift in software development, fueled by the rise of LLMs as creative coding partners. At Eagerworks, we’ve found it powerful, flexible, and—above all—fun.

The key? Stay curious. Stay in control. And never forget the human behind the prompt.

To dive deeper into AI-driven development, explore firsthand trends, tools, and workflows on our blog.

