How to effectively use Cursor as an Engineer to 10x your work

By Talha Tahir (LinkedIn: @thetalhatahir)
I'll be honest - I was skeptical about AI coding tools. Another hyped product that promises to make developers 10x more productive? Sure.
But after using Cursor for the past few months, I have to admit I was wrong. Not because it writes perfect code (it doesn't), but because it fundamentally changed how I approach development. Let me share what actually works and what doesn't.
If you're just getting started with AI tools, you might also find my guide on Using ChatGPT effectively as a Programmer helpful.
My Cursor workflow that actually works
After months of trial and error, here's what I've learned about making Cursor genuinely useful:
1) Give context, not just commands
The biggest mistake I made initially was treating Cursor like a fancy autocomplete. I'd ask it to "add a login form" and wonder why the result felt disconnected from my app.
Here's what works better - I give it the full picture:
Goal: Add unit tests for components under components/ and fix flaky behaviors.
Context: This is a Next.js app with TypeScript. We use Jest + React Testing Library.
Files to look at: components/*.tsx, app/*, any hooks in hooks/*
Constraints: No breaking changes; keep public props stable.
Deliverable: Test files + minimal refactors + how to run locally.
Then I say: "Show me the plan first, then we'll implement it."
This approach saved me from countless rewrites. Cursor understands what you're building, not just what you're asking for.
2) Small edits beat big rewrites
Early on, I'd let Cursor rewrite entire files. Bad idea. The code looked impressive but often broke subtle integrations I'd forgotten about.
Now I ask for specific changes: "Update the UserCard component to show the avatar on the left" instead of "rewrite this component." If the file is long, I make Cursor quote the exact lines it's changing.
This keeps me in control and makes reviews actually manageable.
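For the avatar change above, the edit I'd accept looks something like this (hypothetical UserCard, trimmed to just the relevant markup):

// components/UserCard.tsx - the one targeted change: avatar rendered before the name
export function UserCard({ name, avatarUrl }: { name: string; avatarUrl: string }) {
  return (
    <div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
      <img src={avatarUrl} alt={name} width={32} height={32} />
      <span>{name}</span>
    </div>
  )
}

A change this small is easy to diff, easy to revert, and impossible to hide a regression in.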
3) Make it fail fast (then fix it)
This technique transformed how I use Cursor. Instead of trying to write perfect code upfront, I:
- Ask Cursor to write minimal tests first
- Run them locally and watch them fail
- Paste the failing output back into chat
- Let it iterate until everything passes
It's like pair programming with someone who never gets tired of fixing bugs. This approach catches integration issues early and gives me confidence the code actually works.
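A concrete example of the loop: I'll ask for a test against a helper that doesn't exist yet (hypothetical slugify util, just to show the shape), run it, and paste the failure back into chat:

// __tests__/slugify.test.ts - written before lib/slugify exists, so the first run fails
import { slugify } from '../lib/slugify'

test('lowercases, trims, and replaces spaces with dashes', () => {
  expect(slugify('  Hello World  ')).toBe('hello-world')
})

The red-then-green cycle keeps Cursor honest: it has to satisfy a real assertion, not just produce plausible-looking code.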
4) Use patterns that ship real code
I developed some prompting patterns that consistently produce usable results:
- "Make it production-ready": Forces it to include error handling, proper imports, and config
- "Keep it minimal": Prevents over-engineering and focuses on the core requirement
- "Name things clearly": Results in readable code I can maintain later
- "Handle edge cases": Adds proper validation and early returns
These patterns work well with the productivity principles I shared in How to boost productivity as a programmer.
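As an example of what "Handle edge cases" buys you, here's the shape of output it tends to produce (a sketch with a hypothetical parsePage helper):

// lib/parsePage.ts - validation plus early returns instead of a happy-path one-liner
export function parsePage(raw: string | null): number {
  if (raw === null) return 1 // missing query param: default to the first page
  const page = Number.parseInt(raw, 10)
  if (Number.isNaN(page) || page < 1) return 1 // non-numeric or out-of-range input
  return page
}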
5) Navigate codebases faster
One area where Cursor genuinely shines is understanding large codebases. I use it like a smart search:
- "Where does the blog post rendering happen?"
- "How do we handle RSS feed generation?"
- "Show me all the API routes for newsletter functionality"
It opens relevant files, explains the relationships, and then I can ask for specific changes. This beats grepping through code when you're working on unfamiliar parts of a system.
6) Automate the boring stuff (carefully)
Cursor is great at generating scripts for repetitive tasks - image optimization, updating configs, batch file operations. But I always review the diffs before running anything.
For longer scripts, I have it run in the background and stream logs so I can monitor what's happening. And I never let it handle secrets directly - everything goes through environment variables.
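As a sketch of the kind of script I mean (hypothetical paths; note the endpoint comes from an environment variable, and I read the diff before committing anything it changes):

// scripts/update-config.ts - batch-update a field across JSON config files
import { readFileSync, writeFileSync, readdirSync } from 'node:fs'
import { join } from 'node:path'

const apiUrl = process.env.API_URL // secrets and endpoints come from the environment
if (!apiUrl) throw new Error('API_URL is not set')

const dir = 'config'
for (const file of readdirSync(dir).filter((f) => f.endsWith('.json'))) {
  const path = join(dir, file)
  const config = JSON.parse(readFileSync(path, 'utf8'))
  config.apiUrl = apiUrl
  writeFileSync(path, JSON.stringify(config, null, 2) + '\n')
  console.log(`updated ${path}`) // log each write so the run is easy to audit
}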
7) Know when to take back control
Here's the thing - Cursor isn't magic. When I'm dealing with complex architecture decisions or changes that span multiple files, I use it for scaffolding and let it handle the boring parts. But I write the core logic myself.
It's excellent for generating boilerplate, writing tests, and handling edge cases. It's not great at making nuanced design decisions or understanding business requirements.
Agent vs Chat mode: when to use what
Cursor has two modes, and I use them for different types of work:
Chat mode is my go-to for:
- Quick questions about code
- Getting explanations for complex logic
- Small refactors and code snippets
- Brainstorming approaches
I stay in control, copy what I need, and keep the scope small.
Agent mode is where things get interesting:
- Multi-step tasks that span several files
- Running commands, tests, and iterating on failures
- Changes I can validate with a build or test suite
The key with Agent mode is giving it tasks that have clear success criteria. "Make the tests pass" works great. "Improve the user experience" doesn't.
Real example: Agent mode for testing
Last week I needed tests for a Counter component. Here's how I approached it:
// components/Counter.tsx - the component I needed to test
import { useState } from 'react'

export function Counter({ initial = 0, step = 1 }: { initial?: number; step?: number }) {
  const [count, setCount] = useState(initial)
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount((c) => c + step)}>Increment</button>
      <button onClick={() => setCount((c) => c - step)}>Decrement</button>
      <button onClick={() => setCount(initial)}>Reset</button>
    </div>
  )
}
Instead of asking for "unit tests," I gave Agent mode a clear brief:
Goal: Create comprehensive tests for components/Counter.tsx
Context: Next.js app using React Testing Library + Jest
Constraints: Test increment, decrement, reset, and custom props. Make tests reliable.
Deliverable: Test file + run tests to prove they work
What I got back was solid:
// __tests__/Counter.test.tsx
import '@testing-library/jest-dom'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Counter } from '../components/Counter'

test('handles increment, decrement and reset with custom step', async () => {
  render(<Counter initial={2} step={2} />)
  const user = userEvent.setup()

  await user.click(screen.getByText('Increment'))
  expect(screen.getByText('Count: 4')).toBeInTheDocument()

  await user.click(screen.getByText('Decrement'))
  expect(screen.getByText('Count: 2')).toBeInTheDocument()

  // Move off the initial value first, so the reset assertion actually proves a reset happened
  await user.click(screen.getByText('Increment'))
  await user.click(screen.getByText('Reset'))
  expect(screen.getByText('Count: 2')).toBeInTheDocument()
})
The best part? It ran the tests, found missing dependencies, added them, and confirmed everything passed. This saved me the usual back-and-forth of debugging test setup.
Chat mode for API endpoints
When I needed a quick feedback API endpoint, I used Chat mode since it was a self-contained task:
My prompt: "Create a POST endpoint at app/api/feedback/route.ts that validates email and message using Zod, and returns proper errors."
What I got:
// app/api/feedback/route.ts
import { NextResponse } from 'next/server'
import { z } from 'zod'

const FeedbackSchema = z.object({
  email: z.string().email(),
  message: z.string().min(5).max(2000),
})

export async function POST(req: Request) {
  const json = await req.json().catch(() => null)
  const result = FeedbackSchema.safeParse(json)
  if (!result.success) {
    return NextResponse.json(
      { error: 'Invalid payload', issues: result.error.flatten() },
      { status: 400 }
    )
  }
  const { email, message } = result.data
  // TODO: persist or forward to a service
  return NextResponse.json({ ok: true, received: { email, message } }, { status: 200 })
}
Then I asked for a client utility:
// lib/feedback.ts
export async function sendFeedback(payload: { email: string; message: string }) {
  const res = await fetch('/api/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  })
  if (!res.ok) throw new Error('Feedback submission failed')
  return res.json()
}
Perfect for isolated, single-purpose code. No need for Agent mode here.
What I learned about Cursor's limitations
Let me be honest about where Cursor falls short, because understanding these limitations is crucial:
It's not great at big picture decisions
When I was refactoring a component hierarchy to remove prop drilling, I thought Cursor would nail it. The task seemed straightforward - convert manual prop passing to React Context.
Before:
// Lots of prop drilling through Header -> UserBadge
export function UserBadge({ user }: { user: { name: string } }) {
  return <span>{user.name}</span>
}
What I wanted:
// Clean context usage
export function UserBadge() {
  const user = useUser()
  return <span>{user.name}</span>
}
Cursor created the Context setup correctly, but it made some questionable architectural decisions about where to place the provider and how to handle error boundaries. I ended up keeping its implementation but rethinking the overall structure myself.
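For reference, a minimal version of that Context setup behind useUser() looks like this (a sketch; the part that needed rethinking was where the provider lives, not this code):

// context/UserContext.tsx - minimal context plus the useUser() hook
import { createContext, useContext, type ReactNode } from 'react'

type User = { name: string }
const UserContext = createContext<User | null>(null)

export function UserProvider({ user, children }: { user: User; children: ReactNode }) {
  return <UserContext.Provider value={user}>{children}</UserContext.Provider>
}

export function useUser() {
  const user = useContext(UserContext)
  if (!user) throw new Error('useUser must be used inside a UserProvider')
  return user
}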
Lesson: Use Cursor for implementation details, but make the architectural decisions yourself.
It overdoes performance optimizations
I asked Cursor to optimize a slow list component, and it went wild with memoization:
// What Cursor gave me - probably overkill
import { memo, useMemo, useCallback } from 'react'

const List = memo(function List({ items, onSelect }: { items: string[]; onSelect: (v: string) => void }) {
  const count = items.length
  const upper = useMemo(() => items.map((i) => i.toUpperCase()), [items])
  const handleClick = useCallback((i: string) => () => onSelect(i), [onSelect])
  return (
    <div>
      <p>Total: {count}</p>
      {upper.map((i) => (
        <button key={i} onClick={handleClick(i)}>{i}</button>
      ))}
    </div>
  )
})
It technically works, but the memoization is probably unnecessary for most use cases. I had to ask it to explain when each optimization actually helps and when to remove them.
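For contrast, here's a stripped-down version without the memoization (a sketch of what "remove them" looks like; for a list this simple, re-renders are cheap enough that the memo layer adds noise, not speed):

// The simpler List - no memo, no useMemo, no useCallback
export function List({ items, onSelect }: { items: string[]; onSelect: (v: string) => void }) {
  return (
    <div>
      <p>Total: {items.length}</p>
      {items.map((i) => (
        <button key={i} onClick={() => onSelect(i)}>
          {i.toUpperCase()}
        </button>
      ))}
    </div>
  )
}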
Lesson: Cursor defaults to "more is better" with optimizations. Question its choices.
My honest take
After months of daily use, here's what I think: Cursor won't replace good developers, but it makes good developers much more productive.
The key is treating it like a really fast junior developer. Great at implementation, needs guidance on architecture, and sometimes gets carried away with "best practices."
I'm shipping features faster, spending less time on boilerplate, and catching more bugs early. But I'm still making the important decisions about what to build and how to structure it.
If you're considering Cursor, start small. Use it for tests, refactoring, and generating utilities. Once you develop good prompting habits, gradually expand to bigger tasks.
And remember - the goal isn't to let AI write all your code. It's to spend more time on the problems that actually matter.
For more, check out React 19: What it brings to the table or my guide on Using ChatGPT effectively as a Programmer.