
Ask HN: Do you have any evidence that agentic coding works?

I've been trying to get agentic coding to work, but the dissonance between what I'm seeing online and what I'm able to achieve is doing my head in.

Is there real evidence, beyond hype, that agentic coding produces net-positive results? If any of you have actually got it to work, could you share (in detail) how you did it?

By "getting it to work" I mean: * creating more value than technical debt, and * producing code that’s structurally sound enough for someone responsible for the architecture to sign off on.

Lately I’ve seen a push toward minimal or nonexistent code review, with the claim that we should move from “validating architecture” to “validating behavior.” In practice, this seems to mean: don’t look at the code; if tests and CI pass, ship it. I can’t see how this holds up long-term. My expectation is that you end up with "spaghetti" code that works on the happy path but accumulates subtle, hard-to-debug failures over time.

When I tried using Codex on my existing codebases, with or without guardrails, half of my time went into fixing the subtle mistakes it made or the duplication it introduced.

Last weekend I tried building an iOS app for pet feeding reminders from scratch. I instructed Codex to research and propose an architectural blueprint for SwiftUI first. Then, I worked with it to write a spec describing what should be implemented and how.

The first implementation pass was surprisingly good, although it had a number of bugs. Things went downhill fast, however. I spent the rest of my weekend getting Codex to make things work, fix bugs without introducing new ones, and research best practices instead of making stuff up. Although I made it record new guidelines and guardrails as I found them, things didn't improve. In the end I just gave up.

I personally can't accept shipping unreviewed code. It feels wrong. The product has to work, but the code must also be high-quality.

Bear in mind that there is a lot of money riding on LLMs leading to cost savings, and development (seen as expensive and a common bottleneck) is a huge opportunity. There are paid (micro) influencer campaigns going on and whatnot.

Also bear in mind that a lot of folks want to be seen as being on the bleeding edge, including famous people. They make money from people booking them for courses and consulting, and from people buying their books and products; a "personal brand" can have a lot of value, and they can't afford to be seen as obsolete. They're likely to talk about what could or will be, more than about what currently is. Money isn't always the motive, for sure; people also want to be considered useful, and they genuinely want to play around and see where things are going.

All that said, I think your approach is fine. If you don't inspect what the agent is doing, you're down to faith. Is it the fastest way to get _something_ working? Probably not. Is it the best way to build an understanding of the capabilities and pitfalls? I'd say so.

This stuff is relatively new; I don't think anyone has truly figured out how best to approach LLM-assisted development yet. A lot of folks are on it, usually not exactly following the scientific method. We'll get evidence eventually.

23 minutes ago | fhd2

I've been programming for 20 years, and I've always over-estimated how long things will take (no, not pressured by anyone to give firm estimates; just talking informally when prioritizing work order together).

The other day I gave an estimate to my co-worker and he said "but how long is it really going to take, because you always finish a lot quicker than you say, you say two weeks and then it takes two days".

The LLMs just make me finish things a lot faster, and my gut-feel estimates still don't take that into account.

(And before people bring up typing speed: no, that isn't it at all. I've always been the fastest typist and fastest human developer among my close co-workers.)

Yes, I need to review the code and interact with the agent. But it's doing a lot better than a lot of developers I've worked with over the years, and if I don't like the style of the code, it takes very few words for the LLM to "get it" and improve it.

Some commenters are comparing the LLM to a junior. In some sense that is right, in that the work relationship may be the same as towards a (blazingly fast) junior; but the communication style, the knowledge area, and how few words I can use to describe something feel more like talking to a senior.

(I think it may help that for the last 10 years of my career, a big part of my job was reviewing other people's code, delegating tasks, and being the one who knew the code base best and helped others into it. So I'm used to delegating, not just coding. Recently I switched jobs and am now coding alone with AI.)

16 minutes ago | dagss

I think one fatal flaw is letting the agent build the app from scratch. I've had huge success with agents, but only on existing apps that were architected by humans and have established conventions and guardrails. Agents are really bad at architecture, but quite good at following suit.

Other things that seem to contribute to success with agents are:

- Static type systems (not tacked-on like TypeScript)

- A test suite where the tests cover large swaths of code (i.e. not just unit testing individual functions; you want e2e-style tests, but not the flaky browser kind)

With all the above boxes ticked, I can get away with doing only "sampled" reviews. I.e., I don't review every single change, but I do review some of them. And if I find anything weird that I had missed from a previous change, I tell it to fix it and give the fix a full review. For architectural changes, I plan the change myself, start working on it, then tell the agent to finish.

an hour ago | resonious

A principal engineer at Google posted on Twitter that Claude Code did in an hour what the team couldn’t do in a year.

Two days later, after people freaked out, context was added. The team had built multiple versions in that year, each with its own trade-offs. All that context was given to the AI, and it was able to produce a "toy" version. I can only assume it had similar trade-offs.

https://xcancel.com/rakyll/status/2007659740126761033#m

My experience has been similar to yours, and I think a lot of the hype is from people like this Google engineer who play into the hype and leave out the context. This sets expectations way out of line from reality and leads to frustration and disappointment.

7 hours ago | al_borland

> A principal engineer at Google posted on Twitter that Claude Code did in an hour what the team couldn’t do in a year.

I’ll bring the tar if you bring the feathers.

That sounds hyperbolic, but how can someone say something so outrageously false?

39 minutes ago | keybored

As someone who worked at the company, I understood the meaning behind the tweet without the additional clarification. I think she assumed too much shared context when making the tweet.

8 minutes ago | thornewolf

A principal engineer at Google made a public post on the World Wide Web and assumed some shared Google/Claude-context. Do you hear yourself?

a minute ago | keybored

May I ask about your level of experience and which AI you tried to use? I have a strong suspicion these two factors are rarely mentioned, which leads to miscommunication. For example, in my experience, up until recently you could get amazing results, but only if you had let's say 5+ years of experience AND were willing to pay at least $100/month for Claude Code AND followed some fairly trivial usage policies (e.g., using the "ultrathink" keyword, planning mode etc) AND didn't feel lazy actually reading the output. Quite often people wouldn't meet one of those criteria and would call out the AI hype bubble.

an hour ago | dysleixc

This discussion is a request for positive examples to demonstrate any of the recent grandiose claims about ai assisted development. Attempting to switch instead to attacking the credentials of posters only seems to supply evidence that there are no positive examples, only hype. It doesn't seem to add to the conversation.

12 minutes ago | amoss

From the very beginning everyone tells us "you are using the wrong model". Fast forward a year: the free models have become as good as last year's premium models, the results are still bad, and you still hear the same message, "you are not using the latest model"... I just stopped trying the new shiny model each month and simply re-evaluate the state of the art once a year, for my sanity. Or maybe my expectations are simply too high for these tools.

36 minutes ago | cocoto

> would call out the AI hype bubble

Which is what it is: a tool that demands thousands of dollars and years of time in learning fees while being described as "replaces devs" in an instant. It is a tool, and when used sparingly by well-trained people, it works. To the extent that any large statistical text predictor would.

36 minutes ago | consp

Yeah, that was bullshit (like most AI-related crap... lies, damn lies, statistics, AI benchmarks). Like saying my 5-year-old said words that would solve the Greenland issue in an hour. But the words weren't put to the test, just put on a screen, and everyone says woah!!! AI can't ship. That still needs humans.

an hour ago | hahahahhaah

Humans regularly design entire Uber, google, youtube, twitter, whatsapp etc in 45 mins in system design interviews. So AI designing some toy version is meh.

3 hours ago | srcport56445

You're choosing to focus on specific hype posts (which were actually just misunderstandings of the original confusingly-worded Twitter post).

While ignoring the many, many cases of well-known and talented developers who give more context and say that agentic coding does give them a significant speedup (like Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, Simon Willison).

3 hours ago | edanm

Why not, in that case, provide an example to rebut and contribute, as opposed to knocking someone else's example, even if it was against the use of agentic coding.

2 hours ago | NoPicklez

Serious question - what kind of example would help at this point?

Here are a sample of (IMO) extremely talented and well-known developers who have expressed that agentic coding helps them: Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, Simon Willison. This is just randomly off the top of my head; you can find many more. None of them claim that agentic coding does a year's worth of work for them in an hour, of course.

In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some form of speed up, most of them significant. The "AI doesn't help me" crowd is, as far as I can tell, an online-only phenomenon. In real life, everyone has used it to at least some degree and finds it very valuable.

an hour ago | edanm

Nit: s/Reddit/Redis/

Though it is fun to imagine using Reddit as a key-value store :)

21 minutes ago | akoboldfrying

Citation needed. Talk, especially in the 'agentic age', is cheap.

2 hours ago | NBJack

I use Augment with Claude Opus 4.5 every day at my job. I barely ever write code by hand anymore. I don't blindly accept the code that it writes; I iterate with it. We review code at my work. I have absolutely found a lot of benefit from my tools.

I've implemented several medium-scale projects that I anticipate would have taken 1-2 weeks manually, and took a day or so using agentic tools.

A few very concrete advantages I've found:

* I can spin up several agents in parallel and cycle between them. Reviewing the output of one while the others crank away.

* It's greatly improved my ability in languages I'm not expert in. For example, I wrote a Chrome extension which I've maintained for a decade or so. I'm quite weak in Javascript. I pointed Antigravity at it and gave it a very open-ended prompt (basically, "improve this extension"), and in about five minutes it vastly improved the quality of the extension (better UI, performance, removed dependencies). The improvements may have been easy for someone expert in JS, but I'm not.

Here's the approach I follow that works pretty well:

1. Tell the agent your spec, as clearly as possible. Tell the agent to analyze the code and make a plan based on your spec. Tell the agent to not make any changes without consulting you.

2. Iterate on the plan with the agent until you think it's a good idea.

3. Have the agent implement your plan step by step. Tell the agent to pause and get your input between each step.

4. Between each step, look at what the agent did and tell it to make any corrections or modifications to the plan you notice. (I find that it helps to remind them what the overall plan is because sometimes they forget...).

5. Once the code is completed (or even between each step), I like to run a code-cleanup subagent that maintains the logic but improves style (factors out magic constants, helper functions, etc.)

This works quite well for me. Since these are text-based interfaces, I find that clarity of prose makes a big difference. Being very careful and explicit about the spec you provide to the agent is crucial.

6 hours ago | defatigable

> I've implemented several medium-scale projects that I anticipate would have taken 1-2 weeks manually

A 1-week project is a medium-scale project?! That's tiny, dude. A medium project for me is like 3 months of 12h days.

an hour ago | jesse__

You are welcome to use whatever definition of "small/medium/large" you like. Like you, 1-2 weeks is also far from the largest project I've worked on. I don't think that's particularly relevant to the point of my post.

The point that I'm trying to emphasize is that I've had success with it on projects of some scale, where you are implementing (e.g.) multiple related PRs in different services. I'm not just using it on very tightly scoped tasks like "implement this function".

an hour ago | defatigable

> I'm quite weak in Javascript.

> I use Augment with Claud Opus 4.5 every day at my job.

Your story checks out.

2 hours ago | solaris2007

That's a very good point.

The OP is "quite weak at JavaScript" but their AI "vastly improved the quality of the extension." Like, my dude, how can you tell? Does the code look polished, it looks smart, the tests pass, or what?! How can you come forward and be the judge of something you're not an expert in?

I mean, at this point, I'm beginning to be skeptical about half the content posted online. Anybody can come up with any damn story and make it credible. Just the other day I found out about reddit engagement bots, and I've seen some in the wild myself.

I'm waiting for the internet bubble to burst already so we can all go back to our normal lives, where we've left it 20 years or so ago.

an hour ago | molteanu

How can I tell? Yes, the code looks quite a bit more polished. I'm not expert enough in JS to, e.g., know the cleanest method to inspect and modify the DOM, but I can look at code that does and tell if the approach it's using is sensible or not. Surely you've had the experience of a domain where you can evaluate the quality of the end product, even if you can't create a high quality product on your own?

Concretely in this case, I'd implemented an approach that used jQuery listeners to listen for DOM updates. Antigravity rewrote it to an approach that avoided the jQuery dependency entirely, using native MutationObservers. The code is sensible. It's noticeably more performant than the approach I crafted by hand. Antigravity allowed me to easily add a number of new features to my extension that I would have found tricky to add by hand. The UI looks quite a bit nicer than before I used AI tools to update it. Would these enhancements have been hard for an expert in Chrome extensions to implement? Probably not. But I'm not that expert, and AI coding tools allowed me to do them.

That was not actually the main thrust of my post, it's just a nice side benefit I've experienced. In the main domain where I use coding tools, at work, I work in languages where I'm quite a bit more proficient (Golang/Python). There, the quality of code that the AI tools generate is not better than I write by hand. The initial revisions are generally worse. But they're quite a bit faster than I write by hand, and if I iterate with the coding tools I can get to implementations that are as good as I would write by hand, and a lot faster.

I understand the bias towards skepticism. I have no particular dog in this fight, it doesn't bother me if you don't use these tools. But OP asked for peoples' experiences so I thought I'd share.

an hour ago | defatigable

JavaScript isn't the only programming language around. I'm not the strongest around with JS either but I can figure it out as necessary -- knowing C/C++/Java/whatever means you can still grok "this looks better than that" for most cases.

an hour ago | achierius

Yep. I have plenty of experience in languages that use C-style syntax, enough to easily understand code written in other languages that occur nearby in the syntactical family tree. I'm not steeped in JS enough to know the weird gotchas of the type system, or know the standard library well, etc. But I can read the code fine.

If I'd asked an AI coding tool to write something up for me in Haskell, I would have no idea if it had done a good job.

an hour ago | defatigable

I've never had a job where writing Javascript has been the primary language (so far it's been C++/Java/Golang). The JS Chrome Extension is a fun side project. Using Augment in a work context, I'm primarily using it for Golang and Python code, languages where I'm pretty proficient but AI tools give me a decent efficiency boost.

I understand the emotional satisfaction of letting loose an easy snarky comment, of course, but you missed the mark I'm afraid.

an hour ago | defatigable

Great advice.

> Tell the agent your spec, as clearly as possible.

I have recently added a step before that when beginning a project with Claude Code: invoke the AskUserQuestionTool and have it ask me questions about what I want to do and what approaches I prefer. It helps to clarify my thinking, and the specs it then produces are much better than if I had written them myself.

I should note, though, that I am a pure vibe coder. I don't understand any programming language well enough to identify problems in code by looking at it. When I want to check whether working code produced by Claude might still contain bugs, I have Gemini and Codex check it as well. They always find problems, which I then ask Claude to fix.

None of what I produce this way is mission-critical or for commercial use. My current hobby project, still in progress, is a Japanese-English dictionary:

https://github.com/tkgally/je-dict-1

https://www.tkgje.jp/

5 hours ago | tkgally

Great idea! That's actually the very next improvement I was planning to make to my coding flow: building a subagent that is purely designed to study the codebase and create a structured implementation plan. Every large project I work on has the same basic initial steps (study the codebase, discuss the plan with me, etc.), so it makes sense to formalize this in an agent I specialize for the purpose.

an hour ago | defatigable

[dead]

3 hours ago | TechDebtDevin

I've found it useful for getting features started and fixing bugs, but it depends on the feature. I use Claude Sonnet 4.5, and it usually does a pretty good job on well-known problems like setting up web sockets and drag-and-drop UIs, which would take me much longer to do by hand. It also seems to follow examples of existing patterns in my codebase well, like router/service/repository implementations.

I've struggled to get it to work well for messy, complicated problems like parsing text into structured objects that have thousands of edge cases, where the complexity gets out of hand very quickly if you're not careful. In these cases I write almost all the code by hand.

I also use it for writing ad-hoc scripts I need to run once and that are not safety-critical, in which case I use its code as-is after a cursory review that it is correct. Sometimes I build features I would otherwise be too intimidated to try by hand. I also use it to write tests, but I usually don't like its style and tend to simplify them a lot. I'm sure my usage will change over time as I refine what works and what doesn't for me.

37 minutes ago | mvanzoest

You fundamentally misunderstand AI-assisted coding if you think it does the work for you, or that it gets it right, or that it can be trusted to complete a job.

It is an assistant, not a teammate.

If you think that getting it wrong, or bugs, or misunderstandings, or lost code, or misdirections, are AI "failing", then yes you will fail to understand or see the value.

The point is that a good AI assisted developer steers through these things and has the skill to make great software from the chaotic ingredients that AI brings to the table.

And this is why articles like this one "just don't get it": they expect the AI to do their job for them and hold it to the standards of a teammate. It does not work that way.

6 hours ago | wewewedxfgdf

That’s not what I meant. What I’m asking is whether there’s any evidence that the latest “techniques” (such as Ralph) can actually lead to high quality results both in terms of code and end product, and if so, how.

2 hours ago | terabytest

I don't understand what kind of evidence you expect to receive.

There are plenty of examples from talented individuals, like Antirez or Simonw, and an ocean of examples from random individuals online.

I can say to you that some tasks that would take me a day to complete are done in 2h of agentic coding and 1h of code review, with the additional benefit that during the 2h of agentic coding I can do something else. Is this the kind of evidence you are looking for?

an hour ago | gbalduzzi

"You're holding it wrong"

an hour ago | xg15

[dead]

3 hours ago | TechDebtDevin

"I bought a subscription to Claude and it didn't write a perfectly coded application for me while I watched a game of baseball. AI sucks!"

3 hours ago | wewewedxfgdf

Given the claims that AI is replacing jobs left and right, that there’s no more need for software developers or computer science education, then it had jolly well better be able to code a perfect application while I watch baseball.

3 hours ago | tjr

As long as it makes a senior engineer working alone as quick as that same senior working in a team with 3 juniors, it can lead to replacing jobs even without producing code that needs no review.

an hour ago | dagss

The only approach I've tried that seems to work reasonably well, and consistently, was the following:

Make a commit.

Give Claude a task that's not particularly open-ended; the closer the task is to pure "monkey work" boilerplate nonsense, the better (which is also the sort of code I don't want to deal with myself).

Preferably it should be something that only touches a file or two in the codebase, unless it is a trivial refactor (like changing the same method call all over the place).

Make sure it is set to planning mode and let it come up with a plan.

Review the plan.

Let it implement the plan.

If it works, great, move on to review. I've seen it one-shot some pretty annoying tasks like porting code from one platform to another.

If there are obvious mistakes (program doesn't build, tests don't pass, etc.) then a few more iterations usually fix the issue.

If there are subtle mistakes, make a branch and have it try again. If it fails, then this is beyond what it can do, abort the branch and solve the issue myself.

Review and clean up the code it wrote; it's usually a lot messier than it needs to be. This also allows me to take ownership of the code: I now know what it does and how it works.

I don't bother giving it guidelines or guardrails or anything of the sort, it can't follow them reliably. Even something as simple as "This project uses CMake, build it like this" was repeatedly ignored as it kept trying to invoke the makefile directly and in the wrong folder.

This doesn't save me all that much time, since the review and cleanup can take a while, but it serves as a great unblocker.

I also use it as a rubber duck that can talk back, and as a documentation source. It's pretty good for that.

This idea of having an army of agents all working together on the codebase is hilarious to me. Replace "agents" with "juniors I hired on fiverr with anterograde amnesia" and it's about how well it goes.

17 hours ago | sirwhinesalot

+1 for the Rubber duck, and as an unblocker.

My personal use is very much one function at a time. I know what I need something to do, so I get it to write the function which I then piece together.

It can even come back with alternatives I may not have considered.

I might give it some context, but I'm mainly offloading a bunch of typing. I usually debug and fix its code myself rather than trying to get it to do better.

3 hours ago | dwd

TBH I think the greatest benefit is on the documentation/analysis side. The "write the code" part is fine when it sits in the envelope of things that are 100% conventional boilerplate. Like, as a frontend to ffmpeg, you can get a ton of value out of LLMs. As soon as things go open-ended and design-centric, brace yourself.

I get the sense that the application of armies of agents is actually a scaled-up Lisp curse - Gas Town's entire premise is coding wizardry, the emphasis on abstract goals and values, complete with cute, impenetrable naming schemes. There's some corollary with "programs are for humans to read and computers to incidentally execute" here. Ultimately the program has to be a person addressing another person, or nature, and as such it has to evolve within the whole.

13 hours ago | crq-yml

That's the way.

16 hours ago | laylower

I have the same experience despite using Claude every day. A funny anecdote:

Someone I know wrote the code and the unit tests for a new feature with an agent. The code was subtly wrong (fine, it happens), but worse, the 30 or so tests they added put 10 minutes on the test run time, and they all essentially amounted to `expect(true).to.be(true)` because the LLM had worked around the code not working in the tests.

17 hours ago | edude03

There was an article on HN last week (?) which described this exact behaviour in the newer models.

Older, less "capable", models would fail to accomplish a task. Newer models would cheat, and provide a worthless but apparently functional solution.

Hopefully someone with a larger context window than myself can recall the article in question.

17 hours ago | monooso

I think that article was basically wrong. They asked the agent not to provide any commentary, then gave an unsolvable task, and wanted the agent to state that the task was impossible. So they were basically testing which instructions the agent would refuse to follow.

Purely anecdotally, I've found agents have gotten much better at asking clarifying questions, stating that two requirements are incompatible and asking which one to change, and so on.

https://spectrum.ieee.org/ai-coding-degrades

16 hours ago | SatvikBeri

From my experience: TDD helps here - write (or have AI write) tests first, review them as the spec, then let it implement.

But when I use Claude code, I also supervise it somewhat closely. I don't let it go wild, and if it starts to make changes to existing tests it better have a damn good reason or it gets the hose again.

The failure mode here is letting the AI manage both the implementation and the testing. May as well ask high schoolers to grade their own exams. Everyone got an A+, how surprising!

16 hours ago | sReinwald

> TDD helps here - write (or have AI write) tests first, review them as the spec

I agree, although I think the problem usually comes in writing the spec in the first place. If you can write detailed enough specs, the agent will usually give you exactly what you asked for. If your spec is vague, it's hard to eyeball whether the tests, or even the implementation of the tests, match what you're looking for.

15 hours ago | edude03

This happens to me every time I try to get Claude to write tests. I've given up on it. Instead I will write the tests myself if I really care enough to have tests.

16 hours ago | jermaustin1

> they all essentially amounted to `expect(true).to.be(true)` because the LLM had worked around the code not working in the tests

A very human solution

17 hours ago | antonvs

I wonder if Volkswagen would've blamed AI if they got caught with Dieselgate nowadays...

In PR-lese: "To improve quality and reduce costs, we used AI to program some test code. Unfortunately the test code the AI generated fell below our standards, and it was missed during QA.".

Then again, they got their supplier Bosch to program the "defeat device" and lied to them: "Oh don't worry, it's just for testing, we won't deploy it to production". (The "device", probably just an algorithm, detects whether the steering wheel is being moved as the throttle is pushed; if not, it assumes the car is undergoing emissions testing and runs the engine in the environmentally friendlier mode.)

8 hours ago | netsharc

Learning how to drive the models is a legit skill, and I don't mean "prompt engineering". There are absolutely techniques that help, and because things are moving fast there is little established practice to draw from. But it's also been interesting seeing experienced coders struggle. I've found my time as a manager has been more help to me than my time as a coder: how to keep people on task and focused is very similar to managing humans. I suspect much of the next 5 years will be people rediscovering existing human and project management techniques and rebranding them as AI-something.

Some techniques I've found useful recently:

- If the agent struggled on something, once it's done I'll ask it: "you were struggling here; think about what happened and whether there is anything you learned. Put this into a learnings document and reference it in agents.md so we don't get stuck next time"

- Plans are a must. Chat to the agent back and forth to build up a common understanding of the problem you want solved. Make sure to say "ask me any follow-up questions you think are necessary". This chat is often the longest part of the project; don't skimp on it. You are building the requirements, and if you've ever done any dev work you understand how important good requirements are to the success of the work. Then ask the model to write up the plan into an implementation document with steps. Review this thoroughly. Then use a new agent to start work on it: "Implement steps 1-2 of this doc". Having the work broken down into steps makes it possible to do the work in more pieces (new context windows). This part is the more mindless part, and where you get to catch up on reading HN :)

- The GitHub Copilot chat agent is great. I don't get the TUI folks at all. The Pro+ plan is a reasonable price and you can do a lot with it (Sonnet, Codex, etc. are all available). Being able to see the diffs as it works is helpful (but not necessary) to catch problems earlier.

8 hours ago | everfrustrated

+1 for generating plans and then clearing context. I typically have a skill and an agent. I use the skill to generate an initial plan for an atomic unit of work, clear context and then use the agent to review said plan. Finally clear context and use the skill to implement the plan phase by phase, ensuring to review each phase for consistency with the next phase and the overall plan. I've had moderate success with this.

6 hours ago | marwamc

Another important thing to do is to instruct the agent to keep a <plan-name>-NOTES.md file where it tracks its progress and keeps implementation notes. The notes are usually short with Opus 4.5 but very helpful, especially when you need to reset mid-phase and restart it with a fresh context.

If you keep the notes around in repo, you can instruct future plan writers to review implementation notes from relevant plans to keep continuity.

3 hours ago | throwup238

I am with you on this, although I was able to ship with Aider before, as it uses a less autonomous approach than the current wave of agentic tools.

I don't even care about abstract code quality. To me code quality means maintainability. If the agents are able to maintain the mess they are spewing out, that's quality code to me. We are decidedly not there yet though.

5 minutes ago | entropyneur

I used Claude Opus 4.5 inside Cursor to write RISC-V Vector/SIMD code. Specifically Depthwise Convolution and normal Convolution layers for a CNN.

I started out by letting it write a naive C version without intrinsics, and validated it against the PyTorch version.

Then I asked it (and two other models, Gemini 3.0 and GPT 5.1) to come up with some ideas on how to make it faster using SIMD vector instructions and write those down as markdown files.

Finally, I started the agent loop by giving Cursor those three markdown files, the naive C code, and some more information on how to compile the code, plus an SSH command it could use to upload the program and test it.

It then tested a few different variants, ran them on the target (RISC-V SBC, OrangePi RV2) to check if they improved runtime, and continued from there. It did this 10 times, until it arrived at the final version.

The final code is very readable, and faster than any other library or compiler that I have found so far. I think the clear guardrails (the output has to exactly match the reference output from PyTorch, and performance must be better than before) made this work very well.
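
The validation loop described here is easy to sketch. Below is a minimal, dependency-free Python stand-in (not the poster's code, which was C) for the naive reference implementation that every optimized SIMD variant must match exactly:

```python
# Sketch of a naive depthwise convolution reference, in the spirit of the
# naive C version described above: each input channel is convolved with its
# own 2-D kernel (valid padding, stride 1). Pure Python for illustration.
def depthwise_conv2d(x, w):
    """x: [C][H][W] input, w: [C][kh][kw] per-channel kernels."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    kh, kw = len(w[0]), len(w[0][0])
    out = []
    for c in range(C):
        plane = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                # Dot product of the kernel with the window at (i, j).
                row.append(sum(x[c][i + di][j + dj] * w[c][di][dj]
                               for di in range(kh) for dj in range(kw)))
            plane.append(row)
        out.append(plane)
    return out
```

The guardrail is then a single exact-equality check of any optimized variant's output against this reference, which is what makes the agent loop safe to run unattended.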

17 hours ago | fotcorn

I am really surprised by this. While I know it can generate correct SIMD code, getting a performant version is non-trivial, especially for RVV, where the instruction choices and the underlying microarchitecture significantly impact performance.

IIRC, depthwise is memory-bound so the bar might be lower. Perhaps you can try something with higher compute intensity, like a matrix multiply. I have observed that it trips up with the columnar accesses for SIMD.

16 hours ago | sifar

can you share the code?

16 hours ago | camel-cdr

My colleague coded a feature with Claude Code in a day. The code looks good and seemingly works. The code was reviewed and pushed out to production.

The problem: there is no way he verified the code in any way. The business logic behind the feature would probably take a few days to check for correctness. But if it looks good -> done. Let the customer check it. Of course, he claims "he reviewed it".

It feels to me like we just skip doing half the things proper senior devs did, and claim we're faster.

an hour ago | mdavid626

For me, the only metric that matters is wall-time between initial idea and when it's solid enough that you don't have to think about it.

Agentic coding is very similar to frameworks in this regard:

1. If the alignment is right, you have saved time.

2. If it's not right, it might take longer.

3. You won't have clear evidence of which of these cases applies until changing course becomes too expensive.

4. Except, in some cases, this doesn't apply and it's obvious

an hour ago | kristopolous

I have started to use it to write small throwaway things. Like a standalone debug shader that can display all this state on top of this image in real time. Not in a million years would I have spent the time to mess with fonts in a shading language or bring in an immediate-mode GUI framework or such. Codex could one-shot that kind of thing, and the blast radius is one file that is not part of the project. Or a separate Python program that implements this core logic and double-checks my thinking. I am not a professional programmer though.

an hour ago | plastic3169

Hang in there. Yes it is possible; I do it every day. I also do iOS and my current setup is: Cursor + Claude Opus 4.5.

You still need to think about how you would solve the problem as an engineer and break down the task into a right-sized chunk of work. i.e. If 4 things need to change, start with the most fundamental change which has no other dependencies.

Also it is important to manage the context window. For a new task, start a new "chat" (new agent). Stay on topic. You'll be limited to about five back-and-forths before performance starts to suffer. (Cursor shows a visual indicator of this in the form of the circle/wheel icon.)

For larger tasks, tap the Plan button first, and guide it to the correct architecture you are looking for. Then hit build. Review what it did. If a section of code isn't high-quality, tell Claude how to change it. If it fails, then reject the change.

It's a tool that can make you 2 - 10x more productive if you learn to use it well.

16 hours ago | cwoolfe

My experience has been that it does pretty well at writing a "rough draft" with sufficiently good instructions (in particular, telling it the general direction of how to implement something, rather than just telling it what the end goal is). Then maybe do one or two passes at having the agent improve on that draft, then fix the rest by hand.

2 hours ago | thayne

I had a fairly big custom Python 2 static website generator ( github.com/csplib/csplib ), which I'd about given up on transferring to Python 3 after a couple of aborted attempts. My main issue was that the libraries I was using didn't have Python 3 versions.

An AI managed to do basically the whole transfer. One big help was saying "The website output of the current version should be identical", so I had an easy way to test for correctness (assuming it didn't try cheating by saving the website, of course, but that's easy for me to check for).
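
The "output must be identical" check can be a simple recursive directory comparison; a minimal sketch (not from the original project) using only the standard library:

```python
import filecmp
import os

# Sketch: compare the site generated by the Python 3 port against the
# Python 2 output, directory tree against directory tree.
def trees_identical(a: str, b: str) -> bool:
    cmp = filecmp.dircmp(a, b)
    # Any extra, missing, or differing entries means the ports disagree.
    if cmp.left_only or cmp.right_only or cmp.diff_files or cmp.funny_files:
        return False
    # Recurse into subdirectories present in both trees.
    return all(trees_identical(os.path.join(a, d), os.path.join(b, d))
               for d in cmp.common_dirs)
```

One caveat: filecmp's default comparison treats files with identical type, size, and mtime as equal without reading them; for a strict byte-for-byte check, compare file pairs with filecmp.cmp(f1, f2, shallow=False).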

an hour ago | CJefferson

Sure, here are my own examples:

* I came up with a list of 9 performance improvement ideas for an expensive pipeline. Most of these were really boring and tedious to implement (basically a lot of special cases) and I wasn't sure which would work, so I had Claude try them all. It made prototypes that had bad code quality but tested the core ideas. One approach cut the time down by 50%, I rewrote it with better code and it's saved about $6,000/month for my company.

* My wife and I had a really complicated spreadsheet for tracking how much we owed our babysitter – it was just complex enough to not really fit into a spreadsheet easily. I vibecoded a command line tool that's made it a lot easier.

* When AWS RDS costs spiked one month, I set Claude Code to investigate and it found the reason was a misconfigured backup setting

* I'll use Claude to throw together a bunch of visualizations for some data to help me investigate

* I'll often give Claude the type signature for a function, and ask it to write the function. It generally gets this about 85% right
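
As a concrete (hypothetical, not from the thread) example of that last point, the prompt is essentially just the signature and docstring, and the agent fills in the body:

```python
# Hypothetical signature-first prompt: the human writes the def line and
# docstring; the body below is the kind of thing the agent comes back with.
def running_mean(xs: list[float], window: int) -> list[float]:
    """Mean of each consecutive window-length slice of xs."""
    if window <= 0 or window > len(xs):
        return []
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]
```

The signature constrains the problem enough that the occasional miss in the remaining 15% is usually obvious on a quick read.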

16 hours ago | SatvikBeri

>My wife and I had a really complicated spreadsheet for tracking how much we owed our babysitter – it was just complex enough to not really fit into a spreadsheet easily. I vibecoded a command line tool that's made it a lot easier.

Ok, please help me understand. Or is this more of a nanny?

12 hours ago | sauwan

Not technically a nanny, but not dissimilar. In this case, they do several types of work (house cleaning, watching 1-3 kids, daytime and overnights, taking kids out.) They are very competent – by far the best we've found in 3 years – and charge different rates for the different types of work. We also need to track mileage etc. for reimbursement.

They had a spreadsheet for tracking but I found it moderately annoying – it was taking 5-10 minutes a week, so normally I wouldn't have bothered to write a different tool, but with vibe coding it was fairly trivial.

7 hours ago | SatvikBeri

How did you give Claude access to AWS?

6 hours ago | abrookewood

It does OK using the AWS CLI

6 hours ago | mickeyr

Just awscli

5 hours ago | SatvikBeri

Are you serious?

“Most of these were really boring and tedious to implement (basically a lot of special cases) and I wasn't sure which would work, so I had Claude try them all.”

I doubt you verified the boring edge cases.

an hour ago | mdavid626

Why is your babysitting bill so complicated?

14 hours ago | mrdependable

There are several different types of work they can do, each one of which has a different hourly rate. The time of day affects the rate as well, and so can things like overtime.

It's definitely a bit of an unusual situation. It's not extremely complicated, but it was enough to be annoying.
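
To illustrate (with invented numbers and field names, not the actual rates), the core of such a tool is just a rate table plus a couple of modifiers, which is exactly the kind of branching that gets awkward in spreadsheet formulas:

```python
# Illustrative sketch only: rates, multiplier, and field names are made up.
RATES = {"cleaning": 25.0, "childcare": 22.0, "outing": 24.0}  # $/hour
OVERNIGHT_MULTIPLIER = 1.5
MILEAGE_RATE = 0.67  # $/mile reimbursement

def pay_for_entry(work_type, hours, overnight=False, miles=0.0):
    # Pick the base rate for the work type, adjust for overnight hours,
    # and add mileage reimbursement.
    rate = RATES[work_type] * (OVERNIGHT_MULTIPLIER if overnight else 1.0)
    return round(rate * hours + miles * MILEAGE_RATE, 2)
```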

14 hours ago | SatvikBeri

Jesus, are you ok? Can’t you just, like, give em a 20 when you get home?

I find it quite funny you’ve invented this overly complex payment structure for your babysitter and then find it annoying. Now you’ve got a CLI tool for it.

8 hours ago | whackernews

why assume the billing model is being imposed by the customer rather than the service provider?

8 hours ago | mcpeepants

GP has provided an anecdote with no supporting evidence, nor any code examples. So it is as fair to assume the story is a fabrication as it is to assume it has any truth to it

8 hours ago | irlnanny

I am really shocked at the response this trivial anecdote has gotten.

I could state it much more generically: we had an annoying Excel sheet that took ~10 minutes a week, I vibe coded a command line tool that brought it down to ~1 minute a week. I don't think this is unusual or hard to believe in any way.

7 hours ago | SatvikBeri

Yes! You should absolutely always assume a random stranger on HN is outright lying about a trivial anecdote to farm meaningless karma.

7 hours ago | garciasn

Or instigating conflict?

6 hours ago | fn-mote

What...what conflict do you think I'm instigating, exactly? Whether the command line is a better interface than Excel?

6 hours ago | SatvikBeri

I didn't choose the payment structure, and the point is that a CLI is not a high bar. Something that we used to spend ~10 minutes a week on with spreadsheets is now ~1 minute/week.

7 hours ago | SatvikBeri

Why didn’t you work out a more manageable billing structure with them?! Or to put it another way: if it took you 10 minutes a week with spreadsheets to even figure out what their bill is, how on earth did they verify your invoices were even correct? And if they couldn’t—or if it took more than 10 minutes each week—why wouldn’t they prefer a billing system they could verify they were being paid correctly?

5 hours ago | mcphage

Jesus! Is this HN or a personal finance forum? Who cares why they do it a certain way. Did they ask for your advice?

2 hours ago | jryle70

If you work like this in a company, you'll end up with an overcomplicated mess.

Now people with Claude Code are ready to produce a big pile of shit in a short time.

an hour ago | mdavid626

When you first began learning how to program were you building and shipping apps the next day? No.

Agentic programming is a skill-set and a muscle you need to develop just like you did with coding in the past.

Things didn’t just suddenly go downhill after an arbitrary tipping point - what happened is you hit a knowledge gap in the tooling and gave up.

Reflect on what went wrong and use that knowledge next time you work with the agent.

For example, investing the time in building a strong test suite and testing strategy ahead of time which both you and the agent can rely on.

Being able to manage the agent and get quality results on a large, complex codebase is a skill in itself; it won't happen overnight.

It takes practice and repetition with these tools to level up, just like anything else.

17 hours ago | linesofcode

Your point is fair, but it rests on a major assumption I'd question: that the only limit lies with the user, and the tooling itself has none. What if it’s more like “you can’t squeeze blood from a stone”? That is, agentic coding may simply have no greater potential than what I've already tried. To be fair I haven't gone all the way in trying to make it work but, even if some minor workarounds exist, the full promise being hyped might not be realistically attainable.

17 hours ago | terabytest

How can one judge potential without fully understanding or having used it to its full potential?

I don’t think agentic programming is some promised land of instant code without bugs.

It’s just a force multiplier for what you can do.

16 hours ago | linesofcode

1. Start with a plan. Get AI to help you make it, and edit.

2. Part of the plan should be automated tests. AI can make these for you too, but you should spot check for reasonable behavior.

3. Use Claude 4.5 Opus

4. Use Git, get the AI to check in its work in meaningful chunks, on its own git branch.

5. Ask the AI to keep an append-only developer log as a markdown file, and to update it whenever its state significantly changes, it makes a large discovery, or it is "surprised" by anything.

17 hours ago | lukebechtel

> Use Claude 4.5 Opus

In my org we are experimenting with agentic flows, and we've noticed that model choice matters especially for autonomy.

GPT-5.2 performed much better for long-running tasks. It stayed focused, followed instructions, and completed work more reliably.

Opus 4.5 tended to stop earlier and take shortcuts to hand control back sooner.

17 hours ago | baal80spam

A Ralph loop can make Claude go to the end, or at least to a rate limit.

Opus closes the task and Ralph opens it right back up again.

I imagine there's something to the harness for that, too.

8 hours ago | 8note

Interesting! Was kinda disappointed with Codex last time I tried it ~2m ago, but things change fast.

15 hours ago | lukebechtel

Define "works"

Easiest way to get value is building tests. These don't ship.

You can get value from LLM as an additional layer of linting. Reviews don't ship either.

You can use LLMs for planning. They can quickly scan across the codebase, catch side effects of proposed changes, or do gap analysis against the desired state.

Arguing that agentic coding must be all on or all off seems very limiting.

an hour ago | avereveard

A loop I've found that works pretty well for bugs is this:

- Ask Claude to look at my current in-progress task (from GitHub/Jira/whatever) and repro the bug using the Chrome MCP.

- Ask it to fix it

- Review the code manually, usually it's pretty self-contained and easy to ensure it does what I want

- If I'm feeling cautious, ask it to run "manual" tests on related components (this is a huge time-saver!)

- Ask it to help me prepare the PR: This refers to instructions I put in CLAUDE.md so it gives me a branch name, commit message and PR description based on our internal processes.

- I do the commit operations, PR and stuff myself, often tweaking the messages / description.

- Clear context / start a new conversation for the next bug.

On a personal project where I'm less concerned about code quality, I'll often do the plan->implementation approach. Getting pretty in-depth about your requirements obviously leads to a much better plan. For fixing bugs it really helps to tell the model to check its assumptions, because that's often where it gets stuck and creates new bugs while fixing others.

All in all, I think it's working for me. I'll tackle 2-3 day refactors in an afternoon. But obviously there's a learning curve and having the technical skills to know what you want will give you much better results.

16 hours ago | emilecantin

A coding agent is a perfect simulation of a junior developer working under you. A developer who will tell you "yes, I can do that" about any language and any problem, and will never ask you any questions, trying very hard to appear competent.

Your job is to put them in constraints and give granular and clear tasks. Be aware that this junior developer has very basic knowledge of architecture.

The good part is that it does not simulate the part where the developer tries to shift blame or pin it on you. Because you're to blame at all times.

4 hours ago | dostick

it also doesn't simulate the part where the junior actually learns and is less clueless 6 months from now, unfortunately

4 hours ago | yarn_

That's not quite true, actually: context windows have increased and models have definitely gotten smarter over the last year. So in a way, that part is being simulated.

an hour ago | dysleixc

My experience is the same. In short, agents cannot plan ahead or plan at a high level. This means they have a blind spot for design. Since they cannot design properly, it limits the kind of projects that are viable to smaller scopes (not sure exactly how small, but in my experience extremely small and simple). Anything that exceeds this abstract threshold has a good chance of being a net negative, with most of the code being unmaintainable, inextensible, and unreliable.

Anyone who claims AI is great is not building a large or complex enough app, and when it works for their small project, they extrapolate to all possibilities. So because their example was generated from a prompt, it's incorrectly assumed that any prompt will also work. That doesn't necessarily follow.

The reality is that programming is widely underestimated. The perception is that it's just syntax in a text file, but it's really more like a giant abstract machine with moving parts. If you don't see the giant machine with moving parts, chances are you are not going to build good software. For AI to do this, it would require strong reasoning capabilities that let it derive logical structures, along with long-term planning and simulation of this abstract machine. I predict that if AI can do this, then it will be able to do every single other job, including physical jobs, as it would be able to reason within a robotic body in the physical world.

To summarize, people are underestimating programming, using their simple projects to incorrectly extrapolate to any possible prompt, and missing the hard part of programming which involves building abstract machines that work on first principles and mathematical logic.

18 hours ago | proc0

>Anyone who claims AI is great is not building a large or complex enough app

I can't speak for everyone, but lots of us fully understand that the AI tooling has limitations and realize there's a LOT of work that can be done within those limitations. Also, those limitations are expanding, so it's good to experiment to find out where they are.

Conversely, it seems like a lot of people are saying that AI is worthless because it can't build arbitrarily large apps.

I've recently used the AI tooling to make a DocuSign-like service and it did a fairly good job of it, requiring about a day's worth of my attention. That's not an amazingly complex app, but it's not nothing either. Ditto for a calorie tracking web app. Not the most complex apps, but companies are making legit money off them, if you want a tangible measure of "worth".

17 hours ago | linsomniac

Right, it has a lot of uses. As a tool it has been transformative on many levels. The question is whether it can actually multiply productivity across the board for any domain and at production level quality. I think that's what people are betting on, and it's not clear to me yet that it can. So far that level looks more like a tradeoff. You can spend time orchestrating agents, gaining some speedup at the cost of quality, or you can use it more like a tool and write things "manually" which is a lot higher quality.

12 hours ago | proc0

> Anyone who claims AI is great is not building a large or complex enough app

That might be true for agentic coding (caveat below), but AI in the hands of expert users can be very useful - "great" - in building large and complex apps. It's just that it has to be guided and reviewed by the human expert.

As for agentic coding, it may depend on the app. For example, Steve Yegge's "beads" system is over a quarter million lines of allegedly vibe-coded Go code. But developing a CLI like that may be a sweet spot for LLMs, it doesn't have all the messiness of typical business system requirements.

17 hours ago | antonvs

Anything above a simple app and it becomes a tradeoff that needs to be carefully tuned so that you get the most out of it and it doesn't end up being a waste of time. For many use cases and domain combinations this is a net positive, but it's not yet consistent across everything.

From my experience it's better at some domains than others, and also better at certain kinds of app types. It's not nearly as universal as it's being made out to be.

12 hours ago | proc0

> For example, Steve Yegge's "beads" system is over a quarter million lines of allegedly vibe-coded Go code. But developing a CLI like that may be a sweet spot

Is that really a success? I was just reading an article talking about how sloppy and poorly implemented it is: https://lucumr.pocoo.org/2026/1/18/agent-psychosis/

I guess it depends on what you’re looking to get out of it.

16 hours ago | znsksjjs

I'd say it is a success at being useful, but yeah it does seem like the code itself has been a bit of a mess.

I've used a version that had a bd stats and a bd status that both had almost the same content in slightly different formats. Later versions appear to have made them an alias for the same thing. I've also had a version where the daemon consistently failed to start and there were no symptoms other than every command taking 5 seconds. In general, the optimization with the daemon is a questionable choice. It doesn't really need to be _that_ fast.

And yet, even after all of that it still has managed to be useful and generally fairly reliable.

2 hours ago | jsight

I haven't looked into it deeply, but I've seen people claiming to find it useful, which is one metric of success.

Agentic vibe coding maximalists essentially claim that code quality doesn't matter if you get the desired functionality out of it. Which is not that different from what a lot of "move fast and break things" startups also claim, about code that's written by humans under time, cost, and demand pressure. [Edit: and I've seen some very "sloppy and poorly implemented" code in those contexts, as well as outside software companies, in companies of all sizes. Not all code is artisanally handcrafted by connoisseurs such as us :]

I'm not planning to explore the bleeding edge of this at the moment, but I don't think it can be discounted entirely, and of course it's constantly improving.

14 hours ago | antonvs

I've been using agentic coding tools for the past year and a half, and the pattern I've observed is that they work best when treated as a very fast, very knowledgeable junior developer, not as a fully "autonomous engineer".

When I try to give agents broad architectural tasks, they flounder. When I constrain them to small, well-defined units of work within an existing architecture, they can produce clean, correct code surprisingly often.

2 hours ago | kr1shna4garwal

> When I tried using Codex on my existing codebases, with or without guardrails, half of my time went into fixing the subtle mistakes it made or the duplication it introduced.

If you want to get good at this, when it makes subtle mistakes or duplicates code or whatever, revert the changes and update your AGENTS.md or your prompt and try again. Do that until it gets it right. That will take longer than writing it yourself. It's time invested in learning how to use these and getting a good setup in your codebase for them.

If you can't get it to get it right, you may legitimately have something it sucks at. Although as you iterate, you might also gain some insight into why it keeps getting it wrong, and maybe change something more substantial about your setup to make it able to get it right.

For example, I have a custom XML/CSS UI solution that draws inspiration from both XML and SwiftUI, and it does an OK job of making UIs for it. But sometimes it gets stuck in ways it wouldn't if it were using HTML or some known (and probably higher-quality/less buggy) UI library. I noticed it keeps trying things, adding redundant markup to both the XML and CSS, using unsupported attributes that it thinks should exist (because they do in HTML/CSS), and never cleaning up along the way.

Some amount of fixing up its context made it noticeably better at this, but it still gets stuck and makes a mess when it does. So I made it write a linter, and now it uses the linter constantly, which keeps it closer to on the rails.
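
A linter for a custom markup dialect can be tiny and still keep an agent on the rails. A toy sketch of the idea (the tag/attribute whitelist is invented, not the poster's actual UI system):

```python
import xml.etree.ElementTree as ET

# Hypothetical whitelist of supported tags and their attributes.
ALLOWED = {"panel": {"id", "width"}, "label": {"id", "text"}}

def lint(markup: str) -> list[str]:
    """Flag unknown tags and unsupported attributes in the UI markup."""
    errors = []
    for el in ET.fromstring(markup).iter():
        allowed = ALLOWED.get(el.tag)
        if allowed is None:
            errors.append(f"unknown tag <{el.tag}>")
            continue
        for attr in el.attrib:
            if attr not in allowed:
                errors.append(f"<{el.tag}> has unsupported attribute '{attr}'")
    return errors
```

Run after every edit, this catches exactly the "attributes that should exist because they do in HTML/CSS" failure mode before it compounds.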

Your pet feeding app isn't in this category. You can get a substantial app pretty far these days without running into a brick wall. Hitting a wall that quickly just means you're early on the learning curve. You may have needed to give it more technical guidance from the start, and have it write tests for everything, make sure it makes the app observable to itself in some way so it can see bugs itself and fix them, stuff like that.

8 hours ago | furyofantares

Any real senior devs here using agentic coding?

an hour ago | mdavid626

I've been having good results lately/finally with Opus 4.5 in Cursor. It still isn't one-shotting my entire task, but the 90% of the way it gets me is pretty close to what I wanted, which is better than in the past. I feel more confident in telling it to change things without it making it worse. I only use it at work so I can't share anything, but I can say I write less code by hand now that it's producing something acceptable.

For sysops stuff I have found it extremely useful. Once it has MCPs into all relevant services, it's the first place I go to ask what is happening with something specific on the backend.

17 hours ago | DustinBrett

> The product has to work, but the code must also be high-quality.

I understand and admire your commitment to code quality. I share similar ideals.

But it's 2026 and you're asking for evidence that agentic coding works. You're already behind. I don't think you're going to make it. Your competitors are going to outship you.

In most cases, your customers don't care about your code. They only want something that works right.

5 hours ago | runjake

This is anecdotal and maybe reflects what other people are seeing.

If you know the field you want it to work in, then it can augment what you do very well.

Without that they all tend to create hot garbage that looks cool to a layperson.

I would also avoid getting it to write the whole thing up front. Creating a project plan and requirements can help ground them somewhat.

34 minutes ago | EagnaIonat

As far as I can tell, there are exactly 3 use cases that have demonstrably worked with AI, in the sense that their stakeholders (not the AI companies, the users) swear it works.

1. training a RAG on support questions for chat or documentation, w/good material

2. people doing GTM work in marketing, for things like email automation

3. people using a combination of expensive tools - Claude + Cursor + something else (maybe n8n, maybe a custom coding service) - to make greenfield apps

6 hours ago | julianeon

A $200/month Cursor plan spent on Opus 4.5 calls is not expensive compared to the silly amount of work it will do if you make proper use of plan/agent/debug cycles.

2 hours ago | peteforde

My gf manages to get paid using Cursor/Copilot, despite not being able to branch herself out of a loop

In my experience Copilots work expertly at CRUD'ing inside a well structured project, and for MVPs in languages you aren't an expert on (Rust, C/C++ in my case)

The biggest demerit is that agents are increasingly trying to be "smart", using PowerShell search/replace or writing scripts to skimp on tokens, with results that make me unreasonably angry

I tried adding i18n to an old React project, and Copilot used all my credits + 10 USD because it kept shitting everything up with its maddening, idiotic use of search/replace

If it had simply ingested each file and modified each one only once, it would have been cheaper

As you can tell, I am still salty about it

an hour ago | d0100

For me it's a major change for personal projects. For about 3 months now, VS Code GitHub Copilot has been remarkably stable working with an existing code base, and I could implement changes to those projects that would otherwise have taken me substantially longer. So at least for this use case, it's there. Hidden game changers are Gradio/Streamlit for easy UIs.

6 hours ago | jsemrau

I still think it's useful, but you have to make heavy use of the 'plan' mode. I still ask the new hires to avoid doing more than just the plan (or at most generating test cases), so they can understand the codebase before generating new code inside it.

Basically my point of view is that if you don't feel comfortable reviewing your coworkers' code, you shouldn't generate code with AI, because you will review it badly and then I will have to catch the bugs and fix them (happened 24 hours ago). If you generate code, you'd better understand where it can generate side effects.

16 hours ago | orwin

Yes. Can I share it? No, sadly. It definitely works - but I think sometimes expectations are too high is all.

3 hours ago | tom_m

In the past 3 months I have written software I had wanted to build for the past 7-8 years. I have over 6,000 pages of conversations between me and ChatGPT, Claude, and Gemini, and I'm hoping to get a patent soon. It consists of over 260k LOC, works well, is architected to support many different industries with little more than configuration changes, and has very good headed and headless QA coverage. I have spent about 16-18 hours a day on it because I am so bought into the idea and the outcome I am getting. My patent lawyer suggested getting a provisional patent on the work. So for me, it works

9 hours ago | bigcloud1299

Those 6,000 pages have been converted into 1 million+ lines of product specs and work granularly broken down into phases and tasks. All tracked in the repo.

8 hours ago | bigcloud1299

What does it do?

8 hours ago | bwestergard

Depending on the risk profile of the project, it absolutely works, with amazing productivity gains. And the agent of today is the worst agent it will ever be, because tomorrow it's going to be even better. I am finding amazing results with the ideate -> explore -> plan -> code -> test loop.

11 hours ago | jasondigitized

Yes, agentic coding works and has massive value. No, you can't just deploy code unreviewed.

Still takes much less time for me to review the plan and output than write the code myself.

17 hours ago | stavros

> much less time for me to review the plan and output

So typing was a bottleneck for you? I’ve only found this true when I’m a novice in an area. Once I’m experienced, typing is an inconsequential amount of time. Understanding the theory of mind that composes the system is easily the largest time sink in my day to day.

16 hours ago | znsksjjs

I don't need to understand the theory of mind, I just tell it what to compose. Writing the actual lines after that takes longer than not writing them!

16 hours ago | stavros

What do you mean you don’t need to understand? So what do you do when there’s a bug that an LLM can’t fix?

If your bottleneck is typing the code, you must be a junior programmer.

6 hours ago | olig15

They said that they don't need to understand the LLM's theory of mind. I think that's crystal clear.

If there is a bug, it's vastly more likely that Opus 4.5 will spot it before I can.

Do you know one of the primary signifiers of a senior developer? Effective delegation.

Typing speed has nothing to do with any of this.

2 hours ago | peteforde

I don't need to understand the theory of mind because I don't have the LLM design the code, I tell it what the design is. If I need something, I can read the functions I told it to implement, which is really simple.

an hour ago | stavros

I have had similar questions, and am still evaluating here. However, I've been increasingly frustrated with the sheer volume of anecdotal evidence from both the yea- and naysayers of LLM-assisted coding. I have personally felt increased productivity at times with it, and frustration at others.

In order to research this better, I built (ironically, mostly vibe-coded) a tool to run structured "self-experiments" on my own usage of AI. The idea is that I set up a bunch of hypotheses I have around my own productivity/fulfillment/results with AI-assisted coding. The tool lets me establish those, then run "blocks" where I test a particular strategy for a time period (default 2 weeks). So for example, I might have a "no AI" block followed by a "some AI" block followed by a "full agent, all-in AI" block.

The tool is there to make doing check-ins easier, basically a tiny CLI wrapper around journaling that stays out of my way. It also does some static analysis on commit frequency, code produced, etc. but I haven't fleshed out that part of it much and have been doing manual analysis at the end of blocks.
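For the commit-frequency part, a sketch of the kind of analysis such a tool could do (illustrative only, not devex's actual code; the helper name and input format are my assumptions):

```python
from collections import Counter

def count_commits_per_day(git_log_dates: str) -> Counter:
    """Tally commits per day from `git log --pretty=%ad --date=short` output.

    Each input line is a date like '2024-05-01'; blank lines are ignored.
    """
    return Counter(line.strip() for line in git_log_dates.splitlines() if line.strip())

# Example: daily commit counts during one experiment block
log_output = "2024-05-01\n2024-05-01\n2024-05-02\n"
per_day = count_commits_per_day(log_output)
print(per_day["2024-05-01"])  # 2
```

You would feed it `git log --since=<block start> --until=<block end> --pretty=%ad --date=short` and compare the per-day counts between blocks.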

For me this kind of self-tracking has been more helpful than hearsay, since I can directly point to periods where it was working well and try to figure out why or what I was working on. It's not fool-proof, obviously, but for me the intentionality has helped me get clearer answers.

Whether those results translate beyond a single engineer isn't a question I'm interested in answering and feels like a variant of developer metrics-black-hole, but maybe we'll get more rigorous experiments in time.

The tool is open source here (there may be bugs; I've only been using it a few weeks): https://github.com/wellwright-labs/devex

16 hours ago | devalexwells

Yep, it works. Like anything, getting the most out of these tools is its own (human) skill.

With that in mind, a couple of comments - think of the coding agents as personalities with blind spots. A code review by all of them and a synthesis step is a good idea. In fact, currently popular is the "rule of 5", which suggests you have the LLM review five times, varying the level of review, e.g. bugs, architecture, structure, etc. Anecdotally, I find this extremely effective.

Right now, Claude is in my opinion the best coding agent out there. With Claude Code, the best harnesses are starting to automate the review/PR process a bit, but the hand-holding around bugs is real.

I also really like Yegge’s beads for LLMs keeping state and track of what they’re doing — upshot, I suggest you install beads, load Claude, run ‘!bd prime’ and say “Give me a full, thorough code review for all sorts of bugs, architecture, incorrect tests, specification, usability, code bugs, plus anything else you see, and write out beads based on your findings.” Then you could have Claude (or codex) work through them. But you’ll probably find a fresh eye will save time, e.g. give Claude a try for a day.

Your ‘duplicated code’ complaint is likely an artifact of how codex interacts with your codebase - codex in particular likes to load smaller chunks of code in to do work, and sometimes it can get too little context. You can always just cat the relevant files right into the context, which can be helpful.

Finally, iOS is a tough target — I’d expect a few more bumps. The vast bulk of iOS apps are not up on GitHub, so there’s less facility in the coding models.

And any front end work doesn't really have good native visual harnesses set up (although Claude has the Claude chrome extension for web UIs). So there's going to be more back and forth.

Anyway - if you’re a career engineer, I’d tell you - learn this stuff. It’s going to be how you work in very short order. If you’re a hobbyist, have a good time and do whatever you want.

17 hours ago | vessenes

I still don't get what beads needs a daemon for, or a db. After a while of using 'bd --no-daemon --no-db' I was sick of it and switched to beans, and my agents seem to be able to make use of it much better: on the one hand it's directly editable by them, as it's just markdown; on the other hand the CLI still gives them structure and makes the thing queryable.

17 hours ago | CjHuber
[deleted]
8 hours ago

Yes. Over the last month, I've made heavy use of agentic coding (a bit of Junie and Amp, but mostly Antigravity) to ship https://www.ratatui-ruby.dev from scratch. Not just the website... the entire thing.

The main library (rubygem) has 3,662 code lines and 9,199 comment lines of production Ruby and 4,933 code lines and 710 comment lines of Rust. There are a further 6,986 code lines and 2,304 comment lines of example applications code using the library as documentation, and 4,031 lines of markdown documentation. Plus, 15,271 code lines and 2,159 comment lines of automated tests. Oh, and 4,250 lines in bin/ and tasks/ but those are lower-quality "internal" automation scripts and apps.

The library is good enough that Sidekiq is using it to build their TUI. https://github.com/sidekiq/sidekiq/issues/6898

But that's not all I've built over this timeframe. I'm also a significant chunk of the way through an MVU framework, https://rooibos.run, built on top of it. That codebase is 1,163 code lines and 1,420 comment lines of production Ruby, plus 4,749 code lines and 521 comment lines of automated tests. I still need to add to the 821 code lines and 221 comment lines of example application code using the framework as documentation, and to the 2,326 lines of markdown documentation.

It's been going so well that the plan is to build out an ecosystem: the core library, an OOP and an FP library, and a set of UI widgets. There are 6,192 lines of markdown in the wiki about it: mailing list archives, AI chat archives, current design & architecture, etc.

For context, I am a long-time hobbyist Rubyist but I cannot write Rust. I have very little idea of the quality of the Rust code beyond what static analyzers and my test suite can tell me.

It's all been done very much in public. You can see every commit going back to December 22 in the git repos linked from the "Sources" tab here: https://sr.ht/~kerrick/ratatui_ruby/ If you look at the timestamps you'll even notice the wild difference between my Christmas vacation days, and when I went back to work and progress slowed. You can also see when I slowed down to work on distractions like https://git.sr.ht/~kerrick/ramforge/tree and https://git.sr.ht/~kerrick/semantic_syntax/tree.

If it keeps going as well as it has, I may be able to rival Charm's BubbleTea and Bubbles by summertime. I'm doing this to give Rubyists the opportunity to participate in the TUI renaissance... but my ultimate goal is to give folks who want to make a TUI a reason to learn Ruby instead of Go or Rust.

an hour ago | Kerrick

I've built multiple new apps with it and manage two projects that I wrote. I barely write any code other than frontend, copy, etc.

One is a VSCode extension and has thousands of downloads across different flavors of the IDE -- won't plug it here to spare the downvotes ;)

Been a developer professionally for nearly 20 years. It is 100% replacing most of the things I used to code.

I spend most of my time while it's working testing what it's built to decide on what's next. I also spend way more time on DX of my own setup, improving orchestration, figuring out best practice guidance for the Agent(s), and building reusable tools for my Agents (MCP).

2 hours ago | cyrusradfar

try other harnesses than codex.

ive had more success with review tools, rather than the agent getting the code quality right the first time.

current workflow

1. specs/requirements/design, outputting tasks
2. implementation, outputting code and tests
3. run review scripts/debug loops, outputting tasks
4. implement tasks
5. go back to 3
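that loop can be sketched as a toy harness, with stub functions standing in for the real agent calls (`implement` and `review` here are made-up stand-ins, not any real API):

```python
def run_loop(spec, implement, review, max_rounds=5):
    """Drive the spec -> implement -> review -> fix loop until a review finds no tasks."""
    artifact = implement(spec, tasks=None)       # step 2: first implementation pass
    for round_no in range(max_rounds):
        tasks = review(artifact)                 # step 3: review outputs tasks
        if not tasks:
            return artifact, round_no            # clean review: done
        artifact = implement(spec, tasks)        # step 4: implement the tasks
    return artifact, max_rounds                  # step 5: loop back to review

# Toy stand-ins: the "agent" fixes one flagged issue per round.
def implement(spec, tasks):
    return {"issues": 2} if tasks is None else {"issues": tasks["issues"] - 1}

def review(artifact):
    return artifact if artifact["issues"] > 0 else None

final, rounds = run_loop("spec", implement, review)
print(final, rounds)  # {'issues': 0} 2
```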

the quality of specs, tasks, and review scripts make a big difference

one of the biggest things that gets the results better is if you can get a feedback loop in from what the app actually does back to the agent. good logs, being able to interact/take screenshots a la playwright etc

guidelines and guardrails are best if theyre tools that the agent runs, or that run automatically to give feedback.

8 hours ago | 8note

Since we are on this topic, how would I make an agent that does this job:

I am writing automation software that interfaces with a legacy Windows CAD program. Depending on the automation, I just need a picture of the part. Sometimes I need part thickness. Sometimes I need to delete parts. Etc. It's very much interacting with the CAD system and checking the CAD file or output for desired results.

I was considering something that would take screenshots and send them back for checks. Not sure what platforms can do this. I am stumped by how Visual Studio works with this; there are a bunch of pieces like servers, agents, etc.

Even a how-to link would work for me. I imagine this would be extremely custom.

17 hours ago | PlatoIsADisease

No joke you should ask one of the latest thinking models to plan this out with you.

5 hours ago | djeastm

What controls the legacy CAD app? Are you using AutoLISP? or VB scripting? Or something else?

17 hours ago | WillAdams

I'm using VB.net with visual studio.

15 hours ago | PlatoIsADisease

The way I see it, is that for non-trivial things you have to build your method piece by piece. Then things start to improve. It's a process of... developing a process.

Write a good AGENTS.md (or CLAUDE.md) and you'll see that code is more idiomatic. Ask it to keep a changelog. Have the LLM write a plan before starting code. Ask it to ask you questions. Write abstraction layers it (along with the fellow humans of course) can use without messing with the low-level detail every time.
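For illustration, a minimal AGENTS.md in that spirit might look like this (the specific rules are examples I've made up, not a standard):

```markdown
# AGENTS.md

## Workflow
- Before writing code, produce a short plan and wait for approval.
- Ask clarifying questions when requirements are ambiguous.
- Append a one-line entry to CHANGELOG.md after every change.

## Code style
- Use the existing data-access layer; never query the database directly.
- Prefer small, single-purpose functions; match the idioms already in the file.
```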

In a way you have to develop a framework to guide the LLM behavior. It takes time.

16 hours ago | tacone

If you're building something new, stick with languages/problems/projects that have plenty of analogues in the opensource world and keep your context windows small, with small changes.

One-shotting an application that is very bespoke and niche is not going to go well, and same goes for working on an existing codebase without a pile of background work on helping the model understand it piece by piece, and then restricting it to small changes in well-defined areas.

It's like teaching an intern.

16 hours ago | ikidd

My main rule is never to commit code you don’t understand because it’ll get away from you.

I employ a few tricks:

1- I avoid auto-complete and always try to read what it does before committing. When it is doing something I don’t want, I course correct before it continues

2- I ask the LLM questions about the changes it is making and why. I even ask it to make me HTML schema diagrams of the changes.

3- I use my existing expertise. So I am an expert Swift developer, and I use my Swift knowledge to articulate the style of what I want to see in TypeScript, a language I have never worked in professionally.

4- I add the right testing and build infrastructure to put guardrails on its work.

5- I have an extensive library of good code for it to follow.

6 hours ago | julianozen

I am in the same boat as you.

The only positive agentic coding experience I had was using it as a "translator" from some old unmaintained shell + C code to Go.

I gave it the old code, told it to translate to Go. I pre-installed a compiled C binary and told it to validate its work using interop tests.

It took about four hours of what the vibecoding lovers call "prompt engineering" but at the end I have to admit it did give me a pretty decent "translation".
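The interop-test idea can be sketched as a small harness that runs the legacy binary and the translated one on identical input and diffs the output (the harness and the `echo` placeholders are mine; the real thing would point at the old C binary and the new Go binary):

```python
import subprocess

def outputs_match(old_cmd, new_cmd, stdin_text=""):
    """Run the legacy binary and the translated one on the same input
    and report whether stdout and exit status agree (the interop oracle)."""
    old = subprocess.run(old_cmd, input=stdin_text, capture_output=True, text=True)
    new = subprocess.run(new_cmd, input=stdin_text, capture_output=True, text=True)
    return old.stdout == new.stdout and old.returncode == new.returncode

# Placeholder commands standing in for ./legacy-c-tool and ./translated-go-tool
same = outputs_match(["echo", "hello"], ["echo", "hello"])
print(same)  # True
```

Run it over a corpus of representative inputs and any divergence points at a translation bug.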

However for everything else I have tried (and yes, vibecoders, "tried" means very tightly defined tasks) all I have ever got is over-engineered vibecoding slop.

The worst part of it is that because the typical cut-off window is anywhere between 6–18 months prior, you get slop that is full of deprecated code, because there is almost always a newer/more efficient way to do things. Even in languages like Go. The difference between an AI-slop answer for Go 1.20 and a human-coded Go 1.24/1.25 one can be substantial.

17 hours ago | traceroute66

When you have a hammer, everything looks like a nail. Ad nauseam.

AI has made it possible for me to build several one-off personal tools in the matter of a couple of hours and has improved my non-tech life as a result. Before, I wouldn't even have considered such small projects because of the effort needed. It's been relieving not to have to even look at code, assuming you can describe your needs in a good prompt. On the other hand, I've seen vibe coded codebases with excessive layers of abstraction and performance issues that came from a possibly lax engineering culture of not doing enough design work upfront before jumping into implementation. It's a classic mistake, that is amplified by AI.

Yes, average code itself has become cheap, but good code still costs, and amazing code, well, you might still have an edge there for now, but eventually, accept that you will have to move up the abstraction stack to remain valuable when pitted against an AI.

What does this mean? Focus on core software engineering principles, design patterns, and understanding what the computer is doing at a low level. Just because you're writing TypeScript doesn't mean you shouldn't know what's happening at the CPU level.

I predict the rise in AI slop cleanup consultancies, but they'll be competing with smarter AIs who will clean up after themselves.

5 hours ago | ammmir

Yes.

Caveat: can't be pure vibes. Needs ownership, care, review and willingness to git reset and try again when needed. Needs a lot of tests.

Caveat: Greenfield.

an hour ago | hahahahhaah

I’ve heard coding agents best described as a fleet of junior developers available to you 24/7 and I think that’s about right. With the added downside that they don’t really learn as they go so they will forever be junior developers (until models get better).

There are projects where throwing a dozen junior developers at the problem can work but they’re very basic CRUD type things.

17 hours ago | afavour

Or you give them all specific little tasks that you think out. And then review their work of course. So yeah you are still needing to do a lot of work.

2 hours ago | gitaarik

[dead]

an hour ago | asyncze

Don't use it myself. But I have a client who uses it. The bugs it creates are pretty funny. Constantly replacing parts of code with broken or completely incorrect things. Breaking things that previously worked. Deleting random things.

14 hours ago | 7777332215

I review it as I generate it, for quality. I guide it to be self-testing: it creates unit tests and integration tests according to my standards.

2 hours ago | dionian

I think of coding agents more like "typing assistants" than programmers. If you know exactly what and how to do what you want, you can ask them to do it with clear instructions and save yourself the trouble of typing the code out.

Otherwise, they are bad.

17 hours ago | nathan_compton

I have a small-ish vertical SaaS that is used heavily by ~700 retail stores. I have enabled our customer success team to fix bugs using GitHub copilot. I approve the PRs, but they have fixed a surprising number of issues.

16 hours ago | jaxn

Yes, constantly.

I don’t know what I do differently, but I can get Cursor to do exactly what I want all the time.

Maybe it’s because it takes more time and effort, and I don’t connect to GitHub or actual databases, nor do I allow it to run terminal commands 99% of the time.

I have instructions for it to write up readme files of everything I need to know about what it has done. I’ve provided instructions and created an allow list of commands so it creates local backups of files before it touches them, and I always proceed through a plan process for any task that is slightly more complicated, followed by plan cleanup, and execution. I’m super specific about my tech stack and coding expectations too. Tests can be hard to prompt, I’ll sometimes just write those up by hand.
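The backup-before-edit guardrail could be as simple as this (a sketch; the paths and naming scheme are illustrative, not the commenter's actual setup):

```python
import shutil, time
from pathlib import Path

def backup_before_edit(path: str, backup_dir: str = ".backups") -> Path:
    """Copy a file into a local backup directory with a timestamp suffix,
    so any agent edit can be rolled back by hand."""
    src = Path(path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    dest = dest_dir / f"{src.name}.{int(time.time())}.bak"
    shutil.copy2(src, dest)
    return dest

# Demo: create a scratch file and back it up before the agent touches it
Path("notes.txt").write_text("original contents")
copy = backup_before_edit("notes.txt")
print(copy.read_text())  # original contents
```

An allow-listed command like this is cheap insurance when the agent is allowed to modify files directly.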

Also, I’ve never had to pay over my $60 a month pro plan price tag. I can’t figure out how others are even doing this.

At any rate, I think the problem appears to be the blind commands of “make this thing, make it good, no bugs” and “this broke. Fix!” I kid you not, I see this all the time with devs. Not at all saying this is what you do, just saying it’s out there.

And “high quality code” doesn’t actually mean anything. You have to define what that means to you. Good code to me may be slop to you, but who knows unless it is defined.

16 hours ago | dpcan

Works pretty great for me, especially Spec-driven development using OpenSpec

- Cleaner code
- Easily 5x speed minimum
- Better docs, designs
- Focus more on the product than the mechanics
- More time for family

17 hours ago | recroad

Really interested in your workflow using OpenSpec. How do you start off a project with it? And what does a typical code change look like?

16 hours ago | kitd

Honestly, I only use coding agents when I feel too lazy to type lots of boilerplate code.

As in "Please write just this one for me". Even still, I take care to review each line produced. The key is making small changes at a time.

Otherwise, I type out and think about everything being done when in ‘Flow State’. I don't like the feeling of vibe coding for long periods. It completely changes the way work is done, it takes away agency.

On a bit of a tangent, I can't get in Flow State when using agents. At least not as we usually define it.

17 hours ago | highspeedbus

I did the same experiment as you, and this is what I learned:

https://www.linkedin.com/pulse/concrete-vibe-coding-jorge-va...

The bottom line is this:

* The developer stops being a developer, and becomes a product designer with high technical skills.

  * This is a different set of skills than a developer or a product owner currently has. It is a mix of both, and the expectations of how agentic development works need to be adjusted.
* Agents will behave like junior developers: they can type very fast, and produce something that has a high probability of working. Their priority will be to make it work, not maintainability, scalability, etc. Agents can achieve those if you detail how to produce them.

  * Working with an agent feels more like mentoring the AI than ask-and-receive.
* When I start to work on a product that will be vibe coded, I need to have clear in my head all the user stories, the code architecture, the whole system; then I can start to tell the agent what to build, and correct and annotate the code quality decisions in the md files so it remembers them.

* Use TDD: ask the agent to create the tests, and then code to the test. Don't correct the bugs yourself; make the agent correct them and explain why it is a bug, especially with code design decisions. Store those in an AGENTS.md file at the root of the project.
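In miniature, that test-first step looks like this (the feeding-time function is a throwaway example of mine, not code from any real product):

```python
# Step 1: the test is written first and encodes the requirement.
def test_next_feeding_time():
    assert next_feeding_time(last_fed_hour=8, interval_hours=6) == 14
    assert next_feeding_time(last_fed_hour=20, interval_hours=6) == 2  # wraps past midnight

# Step 2: the agent implements only what the test demands.
def next_feeding_time(last_fed_hour: int, interval_hours: int) -> int:
    return (last_fed_hour + interval_hours) % 24

test_next_feeding_time()
print("test passed")
```

If the agent's implementation fails the test, you hand the failure back to it along with the "why", rather than fixing it yourself.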

There are more things that can be done to guide the agent, but I need to have clear, in an articulable way, the direction of the coding. On the other side, I don't worry about implementation details like how to use libraries and APIs that I am not familiar with; the agent just writes and I test.

Currently I am working on a product and I can tell you, working no more than 10 hours a week (2 hours here, 3 there, leaving the agent working while I am having dinner with family), I am progressing, I would say, 5 to 10 times faster than without it. So yeah, it works, but I had to adjust how I do my job.

17 hours ago | jorgeleo

> Scaling long-running autonomous coding https://news.ycombinator.com/item?id=46624541

17 hours ago | saikatsg

This is exactly the issue I have with what I'm seeing around: lots of "here's something impressive we did" but nearly nothing in terms of how it was actually achieved in clear, reproducible detail.

16 hours ago | terabytest

I'm not sure OP is looking for evidence like this. There are many optimistic articles from people or organizations who are selling AI products, AI courses, or AI newsletters.

17 hours ago | rzmmm

You are asking two very different questions here.

i.e. You are asking a question about whether using agents to write code is net-positive, and then you go on about not reviewing the code agents produce.

I suspect agents are often net-positive AND one has to review their code. Just like most people's code.

19 hours ago | damnitbuilds

It seems that people feel code review is a cost, but time spent writing code is not a cost because it feels productive.

17 hours ago | spolitry

I don't think that's quite it - review is a recurring cost which you pay on every new PR, whereas writing code is a cost you pay once.

If you are continually accumulating technical debt due to an over-enthusiastic junior developer (or agent) churning out a lot of poorly-conceived code, then the recurring costs will sink you in the long run.

16 hours ago | swiftcoder

"review is a recurring cost which you pay on every new PR, whereas writing code is a cost you pay once."

Huh ? Every new PR is new code which is a new cost ?

15 hours ago | damnitbuilds

> Every new PR is new code which is a new cost ?

Every new PR interacts with existing code, and the complexity of those interactions increases steadily over time

13 hours ago | swiftcoder

Treat it as a pair programmer. Ask it questions like "How do I?", "When I do X, Y happens, why is that?", "I think Z, prove me wrong" or "I want to do P, how do you think we should do it?"

Feed it little tasks (30 seconds to 5 minutes), and if you don't like this or that about the code it gives you, either tell it something like

   Rewrite the selection so it uses const, ? and :
or edit something yourself and say

   I edited what you wrote to make it my own,  what do you think about my changes?
If you want to use it as a junior dev who gets sent off to do tickets and comes back with a patch three days later that will fail code review, be my guest, but I greatly enjoy working with a tight feedback loop.
17 hours ago | PaulHoule

> Last weekend I tried building an iOS app for pet feeding reminders from scratch.

Just start smaller. I'm not sure why people try to jump immediately to creating an entire app when they haven't even gotten any net-positive results at all yet. Just start using it for small time saving activities and then you will naturally figure out how to gradually expand the scope of what you can use it for.

16 hours ago | dlandis

Care to share the pet feeder's code and what the bugs are and how it went off the rails? Seems like a perfect scenario for us to see how much is prompting skill, how much is a setup, how much is just patience for the thing, and how much is hype/lies.

4 hours ago | fragmede

I've been increasingly removing myself from the typing part since August. For the last few months, I haven't written a single line of code, despite producing a lot more.

I'm using Claude Code. I've been building software as a solo freelancer for the last 20+ years.

My latest workflow

- I work on "regular" web apps, C#/.NET on backend, React on web.

- I'm using 3-8 sessions in parallel, depending on the tasks and the mental bandwidth I have, all visible on external display.

- I have markdown rule files & documentation, 30k lines in total. Some of them describe how I want the agent to work (rule files), some of them describe the features/systems of the app.

- Depending on what I'm working on, I load relevant rule files selectively into the context via commands. I have a /fullstack command that loads @backend.md, @frontend.md and a few more. I have similar /frontend, /backend, /test commands with a few variants. These are the load-bearing columns of my workflow. Agents take a lot more time and produce more slop without these. Each one is also written by agents, with my guidance. They evolve based on what we encounter.

- Every feature in the app, and every system, has a markdown document that's created by the implementing agent, describing how it works, what it does, where it's used, why it's created, main entry points, main logic, gotchas specific to this feature/system etc. After every session, I have /write-system, /write-feature commands that I use to make the agent create/update those, with specific guidance on verbosity, complexity, length.

- Each session I select a specific task for a single system. I reference the relevant rule files and feature/system doc, and describe what I want it to achieve and start plan mode. If there are existing similar features, I ask the agent to explore and build something similar.

- Each task is specifically tuned to be planned/worked in a single session. This is the most crucial role of mine.

- For work that would span multiple sessions, I use a single session to create the initial plan, then plan each phase in depth in separate sessions.

- After it creates the plan, I examine, do a bit of back and forth, then approve.

- I watch it while it builds. Usually I have 1-2 main tasks and a few subtasks going in parallel. I pay close attention to main tasks and intervene when required. Subtasks rarely require intervention due to their scope.

- After the building part is done, I go through the code via editor, test manually via UI, while the agent creates tests for the thing we built, again with specific guidance on what needs to be tested and how. Since the plan is pre-approved by me, this step usually goes without a hitch.

- Then I make the agent create/update the relevant documents.

- Last week I built another system to enhance that flow. I created a /devlog command. With the assist of some CLI tools and Claude log parsing, it creates a devlog file with some metadata (tokens, length, files updated, docs updated etc.), and the agent fills it with a title, summary of work, key decisions, and lessons learned. The first prompt is also copied there. These also get added to the relevant feature/system document automatically as changelog entries. So, for every session, I have a clear document about what got done, how long it took, what the gotchas were, what went right, what went wrong, etc. This proved to be invaluable even with a week's worth of devlogs, and allows me to further refine my workflows.
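A toy version of the devlog write-out might look like this (the metadata fields and format are made up for illustration; the real command is driven by the agent and CLI tools):

```python
def render_devlog(meta: dict, summary: str, lessons: list[str]) -> str:
    """Format one session's devlog entry as markdown, ready to be appended
    to the relevant feature/system doc as a changelog entry."""
    lines = [
        f"## {meta['title']} ({meta['date']})",
        f"- Tokens: {meta['tokens']}, files touched: {meta['files_updated']}",
        f"- Summary: {summary}",
        "- Lessons learned:",
    ]
    lines += [f"  - {lesson}" for lesson in lessons]
    return "\n".join(lines)

entry = render_devlog(
    {"title": "Invoice export", "date": "2025-01-10", "tokens": 52000, "files_updated": 7},
    "Added CSV export to the invoicing system.",
    ["Splitting the plan into two sessions avoided context overflow."],
)
print(entry.splitlines()[0])  # ## Invoice export (2025-01-10)
```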

This looks convoluted at first glance, but it evolved over the months and works great. The code quality is almost the same as what I would have written myself. All because of existing code to use as examples, and the rule files guiding the agents. I was already a fast builder before, but with agents it's a whole new level.

And this flow really got unlocked with Opus 4.5. Sonnet 3.5/4/4.5 also worked OK, but required a lot more handholding, steering, and correction. Parallel sessions weren't really possible without producing slop. Opus 4.5 is significantly better.

More technical/close-to-hardware work will most likely require a different set of guidance & flow to create non-slop code. I don't have any experience there.

You need to invest in improving the workflow. The capacity is there in the models. The results all depend on how you use them.

an hour ago | hakanderyal

I haven't (yet) tried Claude but have good experiences with Codex CLI the last few weeks.

Previously, I tried to use Aider and OpenAI about 6 or 7 months ago and it was a terrible mess. I went back to pasting snippets into the browser chat window until a few weeks ago, and thought agents were mostly hype (I was wrong).

I keep a browser chat window open to talk about the project at a higher level. I'll post command line output like `ls` and `cat` to the higher level chat and use Codex strictly for coding. I haven't tried to one shot anything. I just give it a smallish piece of work at a time and check as it goes in a separate terminal window. I make the commits and delete files (if needed) and anything administrative. I don't have any special agent instructions. I do give Codex good hints on where to look or how to handle things.

It's probably a bit slower than what some people are doing but it's still very fast and so far has worked well. I'm a bit cautious because of my previous experience with Aider which was like roller skating drunk while juggling open straight razors and which did nothing but make a huge mess (to be fair I didn't spend much time trying to tame it).

I'm not sold on Codex or openAI compared to other models and will likely try other agents later, but so far it's been good.

6 hours ago | mythrwy

> The product has to work, but the code must also be high-quality.

I think in most cases the speed at which AI can produce code outweighs technical debt, etc.

17 hours ago | koakuma-chan

But the thing with debt is that it has to be paid eventually.

17 hours ago | Gazoche

Projects also have to be paid off financially. We have been here before - startups used to go fast and break things so that once the MVP was validated they could slow down and fix things, or even rewrite onto a new tech/architecture. Now you can validate the idea even faster with AI. And there is probably a lot of code that you write for one-time use or throw away - internal tools, etc.

16 hours ago | pzo

not if you get acquired

17 hours ago | cdelsolar

Is your argument that it's now someone else's problem? That it must be paid, just by someone else? Thanks, I hate it.

17 hours ago | Boxxed

You will probably be able to just keep throwing AI at it in the coming years, as memory systems improve, if not already.

16 hours ago | koakuma-chan
[deleted]
17 hours ago

This is 1/3 of the response to a short prompt about implementation options for moving GitHub Runners from a broken server to GitHub Enterprise Cloud:

# EC2-Based GitHub Actions Self-Hosted Runners - Complete Implementation

## Architecture Overview

This solution deploys auto-scaling GitHub Actions runners on EC2 instances that can trigger your existing AWS CodeBuild pipelines. Runners are managed via Auto Scaling Groups with automatic registration and health monitoring.

## Prerequisites

- AWS CLI configured with appropriate credentials
- GitHub Enterprise Cloud organization admin access
- Existing CodeBuild project(s)
- VPC with public/private subnets

## Solution Components

### 1. CloudFormation Template

### 2. GitHub Workflow for CodeBuild Integration

## Deployment Steps

### Step 1: Create GitHub Personal Access Token

1. Navigate to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens
2. Create token with these permissions:
   - *Repository permissions:*
     - Actions: Read and write
     - Metadata: Read
   - *Organization permissions:*
     - Self-hosted runners: Read and write

```bash
# Store token securely
export GITHUB_PAT="ghp_xxxxxxxxxxxxxxxxxxxx"
export GITHUB_ORG="your-org-name"
```

### Step 2: Deploy CloudFormation Stack

```bash
# Set variables
export AWS_REGION=us-east-1
export STACK_NAME=github-runner-ec2
export VPC_ID=vpc-xxxxxxxx
export SUBNET_IDS="subnet-xxxxxxxx,subnet-yyyyyyyy"

# Deploy stack
aws cloudformation create-stack \
  --stack-name $STACK_NAME \
  --template-body file://github-runner-ec2-asg.yaml \
  --parameters \
    ParameterKey=VpcId,ParameterValue=$VPC_ID \
    ParameterKey=PrivateSubnetIds,ParameterValue=\"$SUBNET_IDS\" \
    ParameterKey=GitHubOrganization,ParameterValue=$GITHUB_ORG \
    ParameterKey=GitHubPAT,ParameterValue=$GITHUB_PAT \
    ParameterKey=InstanceType,ParameterValue=t3.medium \
    ParameterKey=MinSize,ParameterValue=2 \
    ParameterKey=MaxSize,ParameterValue=10 \
    ParameterKey=DesiredCapacity,ParameterValue=2 \
    ParameterKey=RunnerLabels,ParameterValue="self-hosted,linux,x64,ec2,aws,codebuild" \
    ParameterKey=CodeBuildProjectNames,ParameterValue="" \
  --capabilities CAPABILITY_NAMED_IAM \
  --region $AWS_REGION

# Wait for completion (5-10 minutes)
aws cloudformation wait stack-create-complete \
  --stack-name $STACK_NAME \
  --region $AWS_REGION

# Get stack outputs
aws cloudformation describe-stacks \
  --stack-name $STACK_NAME \
  --query 'Stacks[0].Outputs' \
  --region $AWS_REGION
```

### Step 3: Verify Runners

```bash # Check Auto Scaling Group ASG_NAME=$(aws cloudformation describe-stacks \ --stack-name $STACK_NAME \ --query 'Stacks[0].Outputs[?OutputKey==`AutoScalingGroupName`].OutputValue' \ --output text)

aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --region $AWS_REGION

# List running instances aws ec2 describe-instances \ --filters "Name=tag:aws:autoscaling:groupName,Values=$ASG_NAME" \ --query 'Reservations[].Instances[].[InstanceId,State.Name,PrivateIpAddress]' \ --output table

# Check CloudWatch logs
aws logs tail /github-runner/instances --follow
```

### Step 4: Verify in GitHub

Navigate to: `https://github.com/organizations/YOUR_ORG/settings/actions/r...`

You should see your EC2 runners listed as "Idle" with labels: `self-hosted, linux, x64, ec2, aws, codebuild`
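The same check can be scripted against the GitHub REST API (`GET /orgs/{org}/actions/runners`). As a sketch, here is a small parser for that endpoint's response; the fetch itself (via `curl` or `gh api`) is left out, and the abridged sample payload below is illustrative, not real output:

```python
import json

def summarize_runners(payload: dict) -> dict:
    """Count runners by status from a GET /orgs/{org}/actions/runners response."""
    summary = {"online": 0, "offline": 0, "busy": 0}
    for runner in payload.get("runners", []):
        status = runner.get("status", "offline")
        summary[status] = summary.get(status, 0) + 1
        if runner.get("busy"):
            summary["busy"] += 1
    return summary

# Example response shape (abridged); real data would come from e.g.
#   gh api /orgs/$GITHUB_ORG/actions/runners
sample = json.loads('''{
  "total_count": 2,
  "runners": [
    {"id": 1, "name": "ec2-runner-1", "status": "online", "busy": false,
     "labels": [{"name": "self-hosted"}, {"name": "ec2"}]},
    {"id": 2, "name": "ec2-runner-2", "status": "offline", "busy": false,
     "labels": [{"name": "self-hosted"}, {"name": "ec2"}]}
  ]
}''')

print(summarize_runners(sample))  # {'online': 1, 'offline': 1, 'busy': 0}
```

A runner showing "offline" here usually means its instance was terminated before deregistering; the ASG health checks above are the place to confirm.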

## Using One Runner for Multiple Repos & Pipelines

### Organization-Level Runners (Recommended)

EC2 runners registered at the organization level can serve all repositories automatically.

*Benefits:*
- Centralized management
- Cost-efficient resource sharing
- Simplified scaling
- Single point of monitoring

*Configuration in CloudFormation:* The template already configures organization-level runners via the UserData script:

```bash
./config.sh --url "https://github.com/${GitHubOrganization}" ...
```
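Because registration happens at the organization level, any repository in the org can target these runners with no per-repo setup. A minimal workflow sketch (the repository layout and job names here are hypothetical):

```yaml
# .github/workflows/build.yml (the same file works in every repo in the org)
name: build
on: [push]
jobs:
  build:
    # Routed to any org-level runner carrying all of these labels
    runs-on: [self-hosted, linux, x64, ec2]
    steps:
      - uses: actions/checkout@v4
      - run: make build
```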

### Multi-Repository Workflow Examples

### Advanced: Runner Groups for Access Control

### Label-Based Runner Selection Strategy

*Create different runner pools with specific labels:*

```yaml
# Production runners
RunnerLabels: "self-hosted,linux,ec2,production,high-performance"

# Development runners
RunnerLabels: "self-hosted,linux,ec2,development,general"

# Team-specific runners
RunnerLabels: "self-hosted,linux,ec2,team-platform,specialized"
```

*Use in workflows:*

```yaml
jobs:
  prod-deploy:
    runs-on: [self-hosted, linux, ec2, production]

  dev-test:
    runs-on: [self-hosted, linux, ec2, development]
  
  platform-build:
    runs-on: [self-hosted, linux, ec2, team-platform]
```
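One point worth making explicit about `runs-on` with a label array: a job is routed to a runner only if the runner carries *every* requested label, which is why broader pools need superset label lists. The matching rule, sketched in Python:

```python
def runner_matches(required: list[str], runner_labels: list[str]) -> bool:
    """A runner is eligible only if it carries every label the job requests."""
    return set(required).issubset(runner_labels)

# Labels from the "production" pool defined above
prod_runner = ["self-hosted", "linux", "ec2", "production", "high-performance"]

print(runner_matches(["self-hosted", "linux", "ec2", "production"], prod_runner))   # True
print(runner_matches(["self-hosted", "linux", "ec2", "development"], prod_runner))  # False
```

This is also why a job requesting only `[self-hosted, linux, ec2]` can land on *any* of the pools; add a distinguishing label when you need isolation.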

## Monitoring and Maintenance

### Monitor Runner Health

```bash
# Check Auto Scaling Group health
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names $ASG_NAME \
  --query 'AutoScalingGroups[0].[DesiredCapacity,MinSize,MaxSize,Instances[].[InstanceId,HealthStatus,LifecycleState]]'

# View instance system logs
INSTANCE_ID=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names $ASG_NAME \
  --query 'AutoScalingGroups[0].Instances[0].InstanceId' \
  --output text)

aws ec2 get-console-output --instance-id $INSTANCE_ID

# Check CloudWatch logs
aws logs get-log-events \
  --log-group-name /github-runner/instances \
  --log-stream-name $INSTANCE_ID/runner \
  --limit 50
```

### Connect to Runner Instance (via SSM)

```bash
# List instances
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names $ASG_NAME \
  --query 'AutoScalingGroups[0].Instances[].[InstanceId,HealthStatus]' \
  --output table

# Connect via Session Manager (no SSH key needed)
aws ssm start-session --target $INSTANCE_ID

# Once connected, check runner status
sudo systemctl status 'actions.runner.*'
sudo journalctl -u 'actions.runner.*' -f
```

### Troubleshooting Common Issues

## Advanced Scaling Configuration

### Lambda-Based Dynamic Scaling

For more sophisticated scaling based on GitHub Actions queue depth:

### Deploy Scaling Lambda
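The `github-queue-scaler.py` referenced in the deploy commands was one of the files the model claimed to generate but never pasted. As a rough sketch only: the one-runner-per-queued-job rule below is an assumption, the env-var names mirror the `create-function` call, and a real implementation would have to sum queued runs per repository since GitHub exposes `GET /repos/{owner}/{repo}/actions/runs?status=queued` rather than an org-wide queue endpoint:

```python
import os

def desired_capacity(queued_jobs: int, min_runners: int, max_runners: int) -> int:
    """Clamp 'one runner per queued job' to the configured bounds."""
    return max(min_runners, min(max_runners, queued_jobs))

def count_queued_jobs() -> int:
    """Left unimplemented here: sum total_count of
    GET /repos/{owner}/{repo}/actions/runs?status=queued
    across the repositories this pool serves."""
    raise NotImplementedError

def lambda_handler(event, context):
    # boto3 imported lazily so the module loads (and is testable) without it
    import boto3

    queued = count_queued_jobs()
    target = desired_capacity(
        queued,
        int(os.environ.get("MIN_RUNNERS", "2")),
        int(os.environ.get("MAX_RUNNERS", "10")),
    )
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=os.environ["ASG_NAME"],
        DesiredCapacity=target,
        HonorCooldown=True,
    )
    return {"queued": queued, "desired": target}
```

Keeping the sizing rule as a pure function makes it trivial to unit-test separately from the AWS and GitHub API plumbing.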

```bash
# Create Lambda function
zip function.zip github-queue-scaler.py

aws lambda create-function \
  --function-name github-runner-scaler \
  --runtime python3.11 \
  --role arn:aws:iam::ACCOUNT_ID:role/lambda-execution-role \
  --handler github-queue-scaler.lambda_handler \
  --zip-file fileb://function.zip \
  --timeout 30 \
  --environment Variables="{ASG_NAME=$ASG_NAME,GITHUB_ORG=$GITHUB_ORG,GITHUB_TOKEN=$GITHUB_PAT,MAX_RUNNERS=10,MIN_RUNNERS=2}"

# Create CloudWatch Events rule to trigger every 2 minutes
aws events put-rule \
  --name github-runner-scaling \
  --schedule-expression 'rate(2 minutes)'

aws events put-targets \
  --rule github-runner-scaling \
  --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT:function:github-runner-scaler"
```

## Cost Optimization

### 1. Use Spot Instances

Add to Launch Template in CloudFormation:

```yaml
LaunchTemplateData:
  InstanceMarketOptions:
    MarketType: spot
    SpotOptions:
      MaxPrice: "0.05"  # Set max price
      SpotInstanceType: one-time
```

### 2. Scheduled Scaling

Scale down during off-hours:

```bash
# Scale down at night (9 PM)
aws autoscaling put-scheduled-action \
  --auto-scaling-group-name $ASG_NAME \
  --scheduled-action-name scale-down-night \
  --recurrence "0 21 * * *" \
  --desired-capacity 1

# Scale up in morning (7 AM, weekdays)
aws autoscaling put-scheduled-action \
  --auto-scaling-group-name $ASG_NAME \
  --scheduled-action-name scale-up-morning \
  --recurrence "0 7 * * MON-FRI" \
  --desired-capacity 3
```

### 3. Instance Type Mix

Use multiple instance types for better availability and cost:

```yaml
MixedInstancesPolicy:
  InstancesDistribution:
    OnDemandBaseCapacity: 1
    OnDemandPercentageAboveBaseCapacity: 25
    SpotAllocationStrategy: price-capacity-optimized
  LaunchTemplate:
    Overrides:
      - InstanceType: t3.medium
      - InstanceType: t3a.medium
      - InstanceType: t2.medium
```

## Security Best Practices

1. *No hardcoded credentials* - GitHub PAT stored in Secrets Manager
2. *IMDSv2 enforced* - Prevents SSRF attacks
3. *Minimal IAM permissions* - Scoped to specific CodeBuild projects
4. *Private subnets* - Runners not directly reachable from the internet
5. *SSM for access* - No SSH keys needed
6. *Encrypted secrets* - Secrets Manager encryption at rest
7. *CloudWatch logging* - All runner activity logged

## References

- [GitHub Self-hosted Runners Documentation](https://docs.github.com/en/actions/hosting-your-own-runners/...)
- [GitHub Runner Registration API](https://docs.github.com/en/rest/actions/self-hosted-runners)
- [AWS Auto Scaling Documentation](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-i...)
- [AWS CodeBuild API Reference](https://docs.aws.amazon.com/codebuild/latest/APIReference/We...)
- [GitHub Actions Runner Releases](https://github.com/actions/runner/releases)
- [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide...)

This solution provides a production-ready, cost-effective EC2-based runner infrastructure with automatic scaling, comprehensive monitoring, and multi-repository support for triggering CodeBuild pipelines.

4 hours agoTomWizOverlord

You need to perturb the token distribution by overlaying multiple passes. Any strategy that does this would work.

16 hours agoallisdust

Googler, opinions are my own.

If agentic coding worked as well as people claim on large codebases, I would be seeing a massive shift at my job... I'm really not seeing it.

We have access to pretty much all the latest and greatest internally at no cost, and it still seems the majority of code is written and reviewed by people.

AI-assisted coding has been a huge help to everyone, but straight-up agentic coding seems like it does not scale to these very large codebases. You need to keep it on the rails ALL THE TIME.

17 hours ago3vidence

Yup, same experience here at a much smaller company. Despite management pushing AI coding really hard for at least 6 months and having unlimited access to every popular model and tool, most code still seems to be produced and reviewed by humans.

I still mostly write my own code and I’ve seen our claude code usage and me just asking it questions and generating occasional boilerplate and one-off scripts puts me in the top quartile of users. There are some people who are all in and have it write everything for them but it doesn’t seem like there’s any evidence they’re more productive.

16 hours agostrange_quark

as a second anecdote, at amazon last summer things swapped from nobody using llms to almost everyone using them in ~2 months after a fantastic tech talk and a bunch of agent scripts being put together

said scripts are kinda available in kiro now, see https://github.com/ghuntley/amazon-kiro.kiro-agent-source-co... - specifically the specs, requirements, design, and exec tasks scripts

that plus serena mcp to replace all of gemini cli's agent tools actually gets it to work pretty well.

maybe google's choice of a super monorepo is worse for agentic dev than amazon's billions of tiny highly patterned packages?

8 hours ago8note

I would think this is reasonable. My general understanding at Amazon is that things are expected to work via API boundaries (not quite the case at Google).

6 hours ago3vidence


Do not blame the tools? Given a clear description (overall design, various methods to add, inputs, outputs), Google Antigravity often writes better zero-shot code than an average human engineer: consistent checks for special cases, local optimizations, extensive comments, thorough test coverage. Now in terms of reviews, the real focus is reviewing your own code no matter which tools you used to write it, vi or agentic AI IDE, not someone else reviewing your code. The latter is a safety/mentorship tool in the best circumstances, and all too often just an excuse for senior architects to assert their dominance and justify their own existence at the expense of causing unnecessary stress and delaying getting things shipped.

Now in terms of using AI, the key is to view yourself as a technical lead, not a people manager. You don't stop coding completely or treat underlying frameworks as a black box, you just do less of it. But at some point fixing a bug yourself is faster than writing a page of text explaining exactly how you want it fixed. Although when you don't know the programming language, giving pseudocode or sample code in another language can be super handy.

16 hours agocat_plus_plus

It works in the sense that there are lots of professional (as in they earn money from software engineering) developers out there whose work is of exactly the same quality. I would even bet they are the majority (or at least were prior to late 2024).

17 hours agolostmsu

I’ve had a major conversion on this topic within the last month.

I’m not exactly a typical SWE at the moment. The role I’m in is a lot of meeting with customers, understand their issues, and whip up demos to show how they might apply my company’s products to their problem.

So I’m not writing production code, but I am writing code that I want to be maintainable and changeable, so I can stash a demo for a year and then spin it up quickly when someone wants to see it, or update/adapt it as products/problems change. Most of my career has been spent writing aircraft SW, so I am heavily biased toward code quality and assurance. The demos I am building are not trivial or common in the training data. They’re highly domain-specific and pretty niche, performance is very important, and they usually span low-level systems code all the way up to a decent looking gui. As a made up example, it wouldn’t be unusual for me to have a project to write a medical imaging pipeline from scratch that employs modern techniques from recent papers, etc.

Up until very recently, I only thought coding agents were useful for basic crud apps, etc. I said the same things a lot of people on this thread are saying, e.g. people on twitter are all hype, their experience doesn’t match mine, they must be working on easy problems or be really bad at writing code.

I recently decided to give into the hype and really try to use the tooling and… it’s kind of blown my mind.

Cursor + Opus 4.5 high are my main tools, and they can one-shot major changes across many files and hundreds of lines of code, encompassing low-level systems stuff, GPU-accelerated stuff, networking, etc.

It’s seriously altering my perception of what software engineering is and will be and frankly I’m still kind of recoiling from it.

Don’t get me wrong, I don’t believe it fundamentally eliminates the need for SWEs. It still takes a lot of work on my part to come up with a spec (though I do have it help me with that part), correct things that I don’t like in its planning, or catch it doing the wrong thing in real time and redirect it. And it will make strange choices that I need to correct on the back end sometimes. But it has legitimately allowed me to build 10x faster than I probably could on my own.

Maybe the most important thing about it is what it enables you to build that would not have been worth the trouble before: stuff like wrapping tools in really nice flexible TUIs, creating visualizations/dashboards/benchmarks, slightly altering how an application works to cover a use case you hadn’t thought of before, wrapping an interface so it’s easy to swap libs/APIs later, etc.

If you are still skeptical, I would highly encourage you to immerse yourself in the SOTA tools right now and just give in to the hype for a bit, because I do think we’re rapidly going to reach a point here where if you aren’t using these tools you won’t be employable.

3 hours agofourthrigbt

I’m honestly kind of amazed that more people aren’t seeing the value, because my experience has been almost the opposite of what you’re describing.

I agree with a lot of your instincts. Shipping unreviewed code is wrong. “Validate behavior not architecture” as a blanket rule is reckless. Tests passing is not the same thing as having a system you can reason about six months later. On that we’re aligned.

Where I diverge is the conclusion that agentic coding doesn’t produce net-positive results. For me it very clearly does, but perhaps it’s highly dependent on the situation and conditions?

For me, I don’t treat the agent as a junior engineer I can hand work to and walk away from. I treat it more like an extremely fast, extremely literal staff member who will happily do exactly what you asked, including the wrong thing, unless you actively steer it. I sit there and watch it work (usually have 2-3 agents working at the same time, ideally on different codebases but sometimes they overlap). I interrupt it. I redirect it. I tell it when it is about to do something dumb. I almost never write code anymore, but I am constantly making architectural calls.

Second, tooling and context quality matter enormously. I’m using Claude Code. The MCP tools I have installed make a huge difference: laravel-boost, context7, and figma (which in particular feels borderline magical at converting designs into code!).

I often have to tell the agent to visit GitHub READMEs and official docs instead of letting it hallucinate “best practices”. The agent will oftentimes guess and get stuck, and if it’s doing that, you’ve already lost.

Third, I wonder if perhaps starting from scratch is actually harder than migrating something real. Right now I’m migrating a backend from Java to Laravel and rebuilding native apps into KMP and Compose Multiplatform. So the domain and data are real, and I can validate against a previous (if buggy) implementation. In that environment, the agent is phenomenal. It understands patterns, ports logic faithfully, flags inconsistencies, and does a frankly ridiculous amount of correct work per hour.

Does it make mistakes? Of course. But they’re few and far between, and they’re usually obvious at the architectural or semantic level, not subtle landmines buried in the code. When something is wrong, it’s wrong in a way that’s easy to spot if you’re paying attention.

That’s the part I think gets missed. If you ask the agent to design, implement, review, and validate itself, then yes, you’re going to get spaghetti with a test suite that lies to you. If instead you keep architecture and taste firmly in human hands and use the agent as an execution engine, the leverage is enormous.

My strong suspicion is that a lot of the negative experiences come from a mismatch between expectations and operating model. If you expect the agent to be autonomous, it will disappoint you. If you expect it to be an amplifier for someone who already knows what “good” looks like, it’s transformative.

So while I guess plenty of hype exists, for me at least, the hype is justified. I’m shipping way (WAY!) more, with better consistency, and with less cognitive exhaustion than ever before in my 20+ years of doing dev work.

8 hours agolostsock

[dead]

9 hours agoNedF

lol no. it is all fomo clown marketing. they make outlandish claims and all fall short of producing anything more than noise.

16 hours agonickphx

1. give it toy assignment which is a simplified subcomponent of your actual task

2. wait

3. post on LinkedIn about how amazing AI now is

4. throw away the slop and write proper code

5. go home, to repeat this again tomorrow