258

The coming industrialisation of exploit generation with LLMs

> In the hardest task I challenged GPT-5.2 it to figure out how to write a specified string to a specified path on disk, while the following protections were enabled: address space layout randomisation, non-executable memory, full RELRO, fine-grained CFI on the QuickJS binary, hardware-enforced shadow-stack, a seccomp sandbox to prevent shell execution, and a build of QuickJS where I had stripped all functionality in it for accessing the operating system and file system. To write a file you need to chain multiple function calls, but the shadow-stack prevents ROP and the sandbox prevents simply spawning a shell process to solve the problem. GPT-5.2 came up with a clever solution involving chaining 7 function calls through glibc’s exit handler mechanism.

Yikes.

a day agosimonw

Maybe we can remove mitigations. Every exploit you see is: First, find a vulnerability (the difficult part). Then, drill through five layers of ultimately ineffective "mitigations" (the tedious but almost always doable part).

Probabilistic mitigations work against probabilistic attacks, I guess - but exploit writers aren't random, they are directed, and they find the weaknesses.

20 hours agoahartmetz

Most mitigations just flat out do not attempt to help against "arbitrary read/write". The LLM didn't just find "a vuln" and then work through the mitigations, it found the most powerful possible vulnerability.

Lots of vulnerabilites get stopped dead by these mitigations. You almost always need multiple vulnerabilities tied together, which relies on a level of vulnerability density that's tractable. This is not just busywork.

10 hours agostaticassertion

The vulnerability was found by Opus:

"This is true by definition as the QuickJS vulnerability was previously unknown until I found it (or, more correctly: my Opus 4.5 vulnerability discovery agent found it)."

19 hours agoGaggiX

Makes little difference, whoever or whatever finds the initial exploit will also do the busywork of working around mitigations. (Techniques to work around mitigations are initially not busywork, but as soon as somehow has found a working principle, it seems to me that it becomes busywork)

18 hours agoahartmetz

There are so many holes at the bottom of the machine code stack. In the future we'll question why we didn't move to WASM as the universal executable format sooner. Instead, we'll try a dozen incomplete hardware mitigations first to try to mitigate backwards crap like overwriting the execution stack.

14 hours agotitzer

Escaping the sandbox has been plenty doable over the years. [0]

WASM adds a layer, but the first thing anyone will do is look for a way to escape it. And unless all software faults and hardware faults magically disappear, it'll still be a constant source of bugs.

Pitching a sandbox against ingenuity will always fail at some point, there is no panacea.

[0] https://instatunnel.substack.com/p/the-wasm-breach-escaping-...

12 hours agoshakna

> In the future we'll question why we didn't move to WASM as the universal executable format sooner

I hope not, my laptop is slow enough as it is.

13 hours agoverall

> glibc's exit handler

> Yikes.

Yep.

a day agocookiengineer

Life, uh, finds a way

a day agoarthurcolle

to self-destruct! heavy metal air guitar

a day agobryanrasmussen

Most modern kill chains involve chaining together that many bugs... I know because it's my job and its become demoralizing.

14 hours agojdefr89

Tells you all you need to know around how extremely weak a C executable like QuickJS is for LLMs to exploit. (If you as an infosec researcher prompt them correctly to find and exploit vulnerabilities).

> Leak a libc Pointer via Use-After-Free. The exploit uses the vulnerability to leak a pointer to libc.

I doubt Rust would save you here unless the binary has very limited calls to libc, but would be much harder for a UaF to happen in Rust code.

a day agorvz

The reason I value Go so much is because you have a fat dependency free binary that's just a bunch of syscalls when you use CGO_ENABLED=0.

Combine that with a minimal docker container and you don't even need a shell or anything but the kernel in those images.

a day agocookiengineer

Yes, you can have docker container images that only contain the actual binary you want to run.

But if you are using a VM, you don't even need the Linux kernel: some systems let you compiler your program to run directly on the hypervisor.

See eg https://github.com/hermit-os/hermit-rs or https://mirage.io/

a day agoeru

Why would statically linking a library reduce the number of vulnerabilities in it?

AFAICT, static linking just means the set of vulnerabilities you get landed with won't change over time.

a day agoakoboldfrying

> Why would statically linking a library reduce the number of vulnerabilities in it?

I use pure go implementations only, and that implies that there's no statically linked C ABI in my binaries. That's what disabling CGO means.

a day agocookiengineer

What I mean is: There will be bugs* in that pure Go implementation, and static linking means you're baking them in forever. Why is this preferable to dynamic linking?

* It's likely that C implementations will have bugs related to dynamic memory allocation that are absent from the Go implementation, because Go is GCed while C is not. But it would be very surprising if there were no bugs at all in the Go implementation.

a day agoakoboldfrying

They're prioritizing memory corruption vulnerabilities, is the point of going to extremes to ensure there's no compiled C in their binaries.

a day agotptacek

It would be nice if there was something similar to the ebpf verifier, but for static C, so that loop mistakes, out of boundary mistakes and avoidable satisfiability problems are caught right in the compile step.

The reason I'm so avoidant to using C libraries at all cost is that the ecosystem doesn't prioritize maintenance or other forms of code quality in its distribution. If you have to go to great lengths of having e.g. header only libraries, then what's the point of using C99/C++ at all? Back when conan came out I had hopes for it, but meanwhile I gave up on the ecosystem.

Don't get me wrong, Rust is great for its use cases, too. I just chose the mutex hell as a personal preference over the wrapping hell.

a day agocookiengineer

What do you consider to be a loop mistake?

a day agosaagarjha

Everything that is a "too clever" state management in an iterative loop.

Examples that come to mind: queues that are manipulated inside a loop, slice calls that forget to do length-- of the variable they set in the begin statement, char arrays that are overflowing because the loop doesn't check the length at the correct position in the code, conditions that are re-set inside the loop, like a min/max boundary that is set by an outer loop.

This kind of stuff. I guess you could argue these are memory safety issues. I've seen so crappy loop statements that the devs didn't bother to test it because they still believed they were "smart code", even after sending the devs a PoC that exploited their naive parser assumptions.

In Go I try to write clear, concise and "dumb" code so that a future me can still read it after years of not touching it. That's what I understand under Go's maintainability idiom, I suppose.

2 hours agocookiengineer

You can have memory corruption in pure Go code, too.

a day agounderdeserver

And in Rust (yes, safe Rust can have memory safety vulnerabilities). Who cares? They basically don't happen in practice.

5 hours agostaticassertion

Uh huh. That's where all the Go memory corruption vulnerabilities come from!

a day agotptacek

Nobody claimed otherwise. You're interacting with a kernel that invented its own programming language based on macros, after all, instead of relying on a compiler for that.

What could go wrong with this, right?

/s

a day agocookiengineer

About a year ago I had some code I had been working on for about a year subject to a pretty heavy-duty security review by a reputable review company. When they asked what language I implemented it in and I told them "Go", they joked that half their job was done right there.

While Go isn't perfect and you can certainly write some logic bugs that sufficiently clever use of a more strongly-typed language might let you avoid (though don't underestimate what sufficiently clever use of what Go already has can do for you either when wielded with skill), it has a number of characteristics that keep it somewhat safer than a lot of other languages.

First, it's memory safe in general, which obviously out of the gate helps a lot. You can argue about some super, super fringe cases with unprotected concurrent access to maps, but you're still definitely talking about something on the order of .1% to .01% of the surface area of C.

Next, many of the things that people complain about Go on Hacker News actually contribute to general safety in the code. One of the biggest ones is that it lacks any ability to take an string and simply convert it to a type, which has been the source of catastrophic vulnerabilities in Ruby [1] and Java (Log4Shell), among others. While I use this general technique quite frequently, you have to build your own mechanism for it (not a big deal, we're talking ~50 lines of code or so tops) and that mechanism won't be able to use any class (using general terminology, Go doesn't have "classes" but user-defined types fill in here) that wasn't explicitly registered, which sharply contains the blast radius of any exploit. Plus a lot of the exploits come from excessively clever encoding of the class names; generally when I simply name them and simply do a single lookup in a single map there isn't a lot of exploit wiggle room.

In general though it lacks a lot of the features that get people in trouble that aren't related to memory unsafety. Dynamic languages as a class start out behind the eight-ball on this front because all that dynamicness makes it difficult to tell exactly what some code might do with some input; goodness help you if there's a path to the local equivalent of "eval".

Go isn't entirely unique in this. Rust largely shares the same characteristics, there's some others that may qualify. But some other languages you might expect to don't; for instance, at least until recently Java had a serious problem with being able to get references to arbitrary classes via strings, leading to Log4Shell, even though Java is a static language. (I believe they've fixed that since then but a lot of code still has to have the flag to flip that feature back on because they depend on it in some fundamental libraries quite often.) Go turns out to be a relatively safe security language to write in compared to the landscape of general programming languages in common use. I add "in common use" and highlight it here because I don't think it's anywhere near optimal in the general landscape of languages that exist, nor the landscape of languages that ought to exist and don't yet. For instance in the latter case I'd expect capabilities to be built in to the lowest layer of a language, which would further do great, great damage to the ability to exploit such code. However no such language is in common use at this time. Pragmatically when I need to write something very secure today, Go is surprisingly high on my short list; theoretically I'm quite dissatisfied.

[1]: https://blog.trailofbits.com/2025/08/20/marshal-madness-a-br...

17 hours agojerf

I love golang a lot and I feel like in this context of QuickJS it would be interesting to see what a port of QuickJS with Golang might look like security wise & a comparison to rust in the amount of security as well.

Of course Golang and rust are apples to oranges comparison but still, if someone experienced in golang were to say port to QuickJS to golang and same for rust, aside from some performance cost which can arise from Golang's GC, what would be the security analysis of both?

Also Offtopic but I love how golang has a library for literally everything mostly but its language development ie runtime for interpreted langs/JIT's or transpilation efforts etc. do feel less than rust.

Like For python there's probably a library which can call rust code from Python, I wish if there was something like this for golang and I had found such a project (https://github.com/go-python/gopy) but it still just feels a little less targeted than rust within python which has libraries like polars and other more mature libraries

11 hours agoImustaskforhelp

Yeah Fil-C to the rescue

(I’m not trying to be facetious or troll or whatever. Stuff like this is what motivated me to do it.)

a day agopizlonator

"C executables" are most of the frontier of exploit development, which is why this is a meaningful model problem.

a day agotptacek

Can we fight fire with fire, and use LLMs to rewrite all the C in Rust?

a day ago0xDEAFBEAD

Usually rewriting something in Rust requires nontrivial choices on the part of the translator that I’m not sure are currently within the reach of LLMs.

a day agosaagarjha

I heard this before, that apparently there are things you cannot implement in Rust. Like, apparently you cannot implement certain data structures in Rust. I think this is bullshit. Rust supports raw pointers, etc. You can implement whatever you want in Rust.

10 hours agokoakuma-chan

Presumably they are saying that you'd end up using a lot of `unsafe`. Of course, that's still much better than C, but I assume that their point isn't "You can't do it in Rust" it's "You can't translate directly to safe rust from C".

10 hours agostaticassertion

> Of course, that's still much better than C

Exactly. "can't translate to safe Rust" is not a good faith argument.

10 hours agokoakuma-chan

If anything, writing unsafe code in Rust is also fun. It has many primitives like `MaybeUninit` that make it fun.

10 hours agokoakuma-chan

That’s not what I said. I am saying that translating C code to Rust usually involves a human in the loop because it requires non-trivial decisions to produce a good result.

8 hours agosaagarjha

Sure, but the LLMs will just chain 14 functions instead of 7. If all C code is rewritten in Rust tomorrow that still leaves all the other bug classes. Eliminating a bug class might have made human attacks harder, but now with LLMs the "hardness" factor is purely how much token money you have.

a day ago0xbadcafebee

Llms are not magic. Fixing a large class of exploits makes exploitation harder.

a day agoadrianN

They kind of are magic, that's the point. You can just tell them to look at every other bug class, and keep them churning on it until they find something. You can fast-forward through years of exploit research in a week. The "difficulty" of different bug classes is almost gone. (I think people underestimate just how many exploits are out there in other classes because they've been hyperfocused on the low-hanging fruit)

8 hours ago0xbadcafebee

> Tells you all you need to know around how extremely weak a C executable like QuickJS is for LLMs to exploit. (If you as an infosec researcher prompt them correctly to find and exploit vulnerabilities).

Wouldn't GP's approach work with any other executable using libc? Python, Node, Rust, etc?

I fail to see what is specific to either C or QuickJS in the GP's approach.

21 hours agolelanthran

Wouldn’t the idea be to not have the uaf to begin with? I’d argue it saves you very much by making the uaf way harder to write. Forcing unsafe and such.

a day agovsgherzi
[deleted]
a day ago

So much for ‘stochastic parrots’

21 hours agocatoc

> The exploits generated do not demonstrate novel, generic breaks in any of the protection mechanisms.

18 hours agomoron4hire

> The sentences output by the model do not demonstrate words with novel characters.

13 hours agotitzer

> The exploits generated do not demonstrate novel, generic breaks in any of the protection mechanisms. They take advantage of known flaws in those protection mechanisms and gaps that exist in real deployments of them. These are the same gaps that human exploit developers take advantage of, as they also typically do not come up with novel breaks of exploit mitigations for each exploit.

I actually think this result is a little disappointing but I largely chalk it up to the limited budget the author invested. In the CTF space we’re definitely seeing this more and more as models effectively “oneshot” typical pwn tasks that were significant effort to do by hand before. I feel like the pieces to do these are vaguely present in training data and the real constraints have been how fiddly and annoying they are to set up. An LLM is going to be well suited at this.

More interestingly, though, I suspect we will actually see software at least briefly get more secure as a result of this: I think a lot of incomplete implementations of mitigations are going to fall soon and (humans, for now) will be forced to keep up and patch them properly. This will drive investment in formal modeling of exploits, which is currently a very immature field.

a day agosaagarjha

> formal modeling of exploits, which is currently a very immature field.

Can you elaborate more on this with pointers to some resources?

16 hours agorramadass

I think the author makes some interesting points, but I'm not that worried about this. These tools feel symmetric for defenders to use as well. There's an easy to see path that involves running "LLM Red Teams" in CI before merging code or major releases. The fact that it's a somewhat time expensive (I'm ignoring cost here on purpose) test makes it feel similar to fuzzing for where it would fit in a pipeline. New tools, new threats, new solutions.

a day agoer4hn

That's not how complex systems work though? You say that these tools feel "symmetric" for defenders to use, but having both sides use the same tools immediately puts the defenders at a disadvantage in the "asymmetric warfare" context.

The defensive side needs everything to go right, all the time. The offensive side only needs something to go wrong once.

a day agodigdugdirk

I'm not sure that's the fully right mental model to use. They're not searching randomly with unbounded compute nor selecting from arbitrary strategies in this example. They are both using LLMs and likely the same ones, so will likely uncover overlapping possible solutions. Avoiding that depends on exploring more of the tail of the highly correlated to possibly identical distributions.

It's a subtle difference from what you said in that it's not like everything has to go right in a sequence for the defensive side, defenders just have to hope they committed enough into searching such that the offensive side has a significantly lowered chance of finding solutions they did not. Both the attackers and defenders are attacking a target program and sampling the same distribution for attacks, it's just that the defender is also iterating on patching any found exploits until their budget is exhausted.

a day agoVetch

That really depends of the offensive class. If that is a single group with some agenda, then that's just everyone spending much resources on creating solution no permanent actor in the game want actually to escalate into, just show they have the tools and skills.

It's probably more worrying as you get script kiddies on steroids which can spawn all around with same mindset as even the dumbest significant geopolitical actor out there.

a day agopsychoslave

> These tools feel symmetric for defenders to use as well.

I don't think so. From a pure mathematical standpoint, you'd need better (or equal) results at avg@1 or maj@x, while the attacker needs just pass@x to succeed. That is, the red agent needs to work just once, while the blue agent needs to work all the time. Current agents are much better (20-30%) at pass@x than maj@x.

In real life that's why you sometimes see titles like "teenager hacks into multi-billion dollar company and installs crypto malware".

I do think that you're right in that we'll see improved security stance by using red v. blue agents "in a loop". But I also think that red has a mathematical advantage here.

a day agoNitpickLawyer

>> These tools feel symmetric for defenders to use as well.

> I don't think so. From a pure mathematical standpoint, you'd need better (or equal) results at avg@1 or maj@x, while the attacker needs just pass@x to succeed.

Executing remote code is a choice not some sort of force of nature.

Timesharing systems are inherently not safe and way too much effort is put into claiming the stone from Sisyphus.

SaaS and complex centralized software need to go and that is way over due.

a day agorightbyte

Awesome! What’s your strategy for migration of the entire world’s infrastructure to whatever you’re thinking about?

a day agosaagarjha

My strategy is to not use "the entire world's infrastructure" which makes it redundant.

If enough people cancel their leftpad-as-a-Service subscription the server can be unplugged.

(Yes I am somewhat hyperbolic and yes I see use for internet connected servers and clients. I argue against the SaaS driven centralization.)

a day agorightbyte

> I argue against the SaaS driven centralization.

How does that help with the topic at hand? (LLM-assisted vulnerability research)

Are the decentralized systems that you prefer more secure/less buggy/less exploitable by LLMs?

10 hours agowarkdarrior

I mean, yeah you can have the joy of being right from the heights of the hill you are standing upon. But It seems like you grasp the heart of problem being discussed.

How do we deal with the floods threatening those living in the valleys and slopes?

20 hours agointended
[deleted]
20 hours ago

Not symmetric at all.

There are countless bugs to fund.

If the offender runs these tools, then any bug they find becomes a cyberweapon.

If the defender runs these tools, they will not thwart the offender unless they find and fix all of the bugs.

Any vs all is not symmetric

a day agopizlonator

LLMs effectively move us from A to B:

A) 1 cyber security employee, 1 determined attacker

B) 100 cyber security employees, 100 determined attackers

Which is better for defender?

a day agoenergy123

Neither

18 hours agopizlonator

How do bug bounties change the calculus? Assuming rational white hats who will report every bug which costs fewer LLM tokens than the bounty, on expectation.

a day ago0xDEAFBEAD

They don’t.

For the calculus to change, anyone running an LLM to find bugs would have to be able to find all of the bugs that anyone else running an LLM could ever find.

That’s not going to happen.

a day agopizlonator

Correct me if I'm wrong, but I think a better mental model would be something like: Take the union of all bugs found by all white hats, fix all of those, then check if any black hat has found sufficient unfixed bugs to construct an exploit chain?

a day ago0xDEAFBEAD

The black hat has to find a handful of bugs. Sometimes one bug is enough.

18 hours agopizlonator

How do you check this?

a day agosaagarjha

I meant in the sense that this algorithm will tell you if your software is vulnerable in the abstract. It's not a procedure you could actually follow.

19 hours ago0xDEAFBEAD

> I think the author makes some interesting points, but I'm not that worried about this.

Given the large number of unmaintained or non-recent software out there, I think being worried is the right approach.

The only guaranteed winner is the LLM companies, who get to sell tokens to both sides.

a day agohackyhacky

I mean you're leaving out large nation state entities

a day agopixl97

An LLM Red Team is going to be too expensive most people; an actual infosec company will need to write the prompts, vet them, etc. But you don't need that to find exploits if you're just a human sitting at a console trying things. The hackers still have the massive advantage of 1) time, 2) cost (it will cost them less than the defenders/Red-Team-As-a-SaaS), and 3) they only have to get lucky once.

a day ago0xbadcafebee

This + the fact software and hardware has been getting structurally more secure over time. New changes like language safety features, Memory Integrity Enforcement, etc will significantly raise the bar on the difficulty to find exploits.

a day agoSchemaLoad

Defenders have the added complexity of operating within business constraints like CAB/change control and uptime requirements. Threat actors don’t, so they can move quick and operate at scale.

a day agolateral_cloud

> These tools feel symmetric for defenders to use as well.

Why? The attackers can run the defending software as well. As such they can test millions of testcases, and if one breaks through the defenses they can make it go live.

a day agoamelius

Right, that's the same situation as fuzz testing today, which is why I compared it. I feel like you're gesturing towards "Attackers only need to get lucky once, defenders need to do a good job everytime" but a lot of the times when you apply techniques like fuzz testing it doesn't take a lot of effort to get good coverage. I suspect a similar situation will play out with LLM assisted attack generation. For higher value targets based on OSS, there's projects like Google Big Sleep to bring enhanced resources.

a day agoer4hn

Defenders have threat modeling on their side. With access to source code and design docs, configs, infra, actual requirements and ability to redesign / choose the architecture and dependencies for the job, etc - there's a lot that actually gives defending side an advantage.

I'm quite optimistic about AI ultimately making systems more secure and well protected, shifting the overall balance towards the defenders.

a day agoexecveat

For that matter is this in principle much different from a fuzzer?

a day agobandrami

Vulnerability Researcher/Reverse Eng here... Aspects about it generating an API for read/write primitives are simply it regurgitating tons of APIs that exist already. Its still cool, but its not like it invented the primitives or any novel technique. Also, this toy JS is similar to binaries you'd find in a CTF. Of course it will be able to solve majority of those. I am curious though.. Latest OpenAI models don't seem to want to generate any real exploit code. Is there a prompt jail break or something being used here?

15 hours agojdefr89

One of the interesting things to me about this is that Codex 5.2 found the most complex of the exploits.

The reflects my experience too. Opus 4.5 is my everyday driver - I like using it. But Codex 5.2 with Extra High thinking is just a bit more powerful.

Also despite what people say, I don't believe progress in LLM performance is slowing down at all - instead we are having more trouble generating tasks that are hard enough, and the frontier tasks they are failing at or just managing are so complex that most people outside the specialized field aren't interested enough to sit through the explanation.

a day agonl

The Anthropic models are great workers/tool users. OpenAI Codex High is a great reviewer/fixer. Gemini is the genius repainting your bathroom walls into a Monet from memory because you mentioned once a few weeks ago you liked classical art and needed to repaint your bathroom. Gemini didn’t mention the task or that it was starting it. It did a pretty good job after you had to admit.

a day agoconception

Disagree about Codex - it's great at doing things too!

Gemini either does a Monet or demolishes your bathroom and builds a new tuna fishing boat there instead, and it is completely random which one you get.

It's a great model but I rarely use it because it's so random as to what you get.

a day agonl

gpt models are crazy good. They just take forever.

a day agoprodigycorp

The “hard enough” tasks are all behind IP walls. If it’s a “hard enough” that generally means it’s a commercial problem likely involving disparate workflows and requiring a real human who probably isn’t a) inclined and/or b) permitted, to publish the task. The incentives are aligned to capture all value from solving that task as long as possible and only then publish.

a day agocellis

I solve plenty of hard problems as a hobby

a day agosaagarjha

I genuinely dont know who to believe. The people who claim LLMs are writing excellent exploits. Or the people who claim that LLMs are sending useless bug reports. I dont feel like both can really be true.

a day agoprotocolture

Why can't they both be true?

The quality of output you see from any LLM system is filtered through the human who acts on those results.

A dumbass pasting LLM generated "reports" into an issue system doesn't disprove the efforts of a subject-matter expert who knows how to get good results from LLMs and has the necessary taste to only share the credible issues it helps them find.

a day agosimonw

Theres no filtering mentioned in the OP article. It claims GPT only created working useful exploits. If it can do that, it could also submit those exploits as perfectly as bug reports?

a day agoprotocolture

There is filtering mentioned, it's just not done by a human:

> I have written up the verification process I used for the experiments here, but the summary is: an exploit tends to involve building a capability to allow you to do something you shouldn’t be able to do. If, after running the exploit, you can do that thing, then you’ve won. For example, some of the experiments involved writing an exploit to spawn a shell from the Javascript process. To verify this the verification harness starts a listener on a particular local port, runs the Javascript interpreter and then pipes a command into it to run a command line utility that connects to that local port. As the Javascript interpreter has no ability to do any sort of network connections, or spawning of another process in normal execution, you know that if you receive the connect back then the exploit works as the shell that it started has run the command line utility you sent to it.

It is more work to build such "perfect" verifiers, and they don't apply to every vulnerability type (how do you write a Python script to detect a logic bug in an arbitrary application?), but for bugs like these where the exploit goal is very clear (exec code or write arbitrary content to a file) they work extremely well.

a day agomoyix

The OP is the filtering expert.

a day agosimonw

They can't both be true if we're talking about the premise of the article, which is the subject of the headline and expounded upon prominently in the body:

  The Industrialisation of Intrusion

  By ‘industrialisation’ I mean that the ability of an organisation to complete a task will be limited by the number of tokens they can throw at that task. In order for a task to be ‘industrialised’ in this way it needs two things:

  An LLM-based agent must be able to search the solution space. It must have an environment in which to operate, appropriate tools, and not require human assistance. The ability to do true ‘search’, and cover more of the solution space as more tokens are spent also requires some baseline capability from the model to process information, react to it, and make sensible decisions that move the search forward. It looks like Opus 4.5 and GPT-5.2 possess this in my experiments. It will be interesting to see how they do against a much larger space, like v8 or Firefox.
  The agent must have some way to verify its solution. The verifier needs to be accurate, fast and again not involve a human.
"The results are contigent upon the human" and "this does the thing without a human involved" are incompatible. Given what we've seen from incompetent humans using the tools to spam bug bounty programs with absolute garbage, it seems the premise of the article is clearly factually incorrect. They cite their own experiment as evidence for not needing human expertise, but it is likely that their expertise was in fact involved in designing the experiment[1]. They also cite OpenAI's own claims as their other piece of evidence for this theory, which is worth about as much as a scrap of toilet paper given the extremely strong economic incentives OpenAI has to exaggerate the capabilities of their software.

[1] If their experiment even demonstrates what it purports to demonstrate. For anyone to give this article any credence, the exploit really needs to be independently verified that it is what they say it is and that it was achieved the way they say it was achieved.

a day agoanonymous908213

What this is saying is "you need an objective criterion you can use as a success metric" (aka a verifiable reward in RL terms). "Design of verifiers" is a specific form of domain expertise.

This applies to exploits, but it applies _extremely_ generally.

The increased interest in TLA+, Lean, etc comes from the same place; these are languages which are well suited to expressing deterministic success criteria, and it appears that (for a very wide range of problems across the whole of software) given a clear enough, verifiable enough objective, you can point the money cannon at it until the problem is solved.

The economic consequences of that are going to be very interesting indeed.

a day agoadw

A few points:

1. I think you have mixed up assistance and expertise. They talk about not needing a human in the loop for verification and to continue search but not about initial starts. Those are quite different. One well specified task can be attempted many times, and the skill sets are overlapping but not identical.

2. The article is about where they may get to rather than just what they are capable of now.

3. There’s no conflict between the idea that 10 parallel agents of the top models can mostly have one that successfully exploits a vulnerability - gated on an actual test that the exploit works - with feedback and iteration BUT random models pointed at arbitrary code without a good spec and without the ability to run code, and just run once, will generate lower quality results.

a day agoIanCal

My expectation is that any organization that attempts this will need subject matter experts to both setup and run the swarm of exploit finding agents for them.

a day agosimonw

After setting the environment and the verifier you can spawn as many agents as you want until the conditions are met, this is only possible because they run without human assistance, that's the "industrialisation".

a day agoGaggiX

With the exploits, you can try them and they either work or they don't. An attacker is not especially interested in analysing why the successful ones work.

With the CVE reports some poor maintainer has to go through and triage them, which is far more work, and very asymmetrical because the reporters can generate their spam reports in volume while each one requires detailed analysis.

a day agorwmj

There's been several notable posts where maintainers found there was no bug at all, or the example code did not even call code from their project and had just found running a python script can do things on your computer. Entirely AI generated Issue reports and examples wasting maintainer time.

a day agoSchemaLoad

That's because the user of the tool didn't go through the troubles to setup the env properly (as the author of the blog did). So what they got was a "story about a bug", but without verification.

The proper way to use these tools (like in other verifiable tasks such as math or coding) is to give them a feedback loop and an easily verifiable success criteria. In security exploitation you either capture the flag or not. It's very easy (and cheap) to verify. So you can leave these things to bang their tokens against a wall, and only look at their output once they capture the flag. Or they output something somewhere verifiable (e.g. echo "pwned" > /root/.flag)

a day agoNitpickLawyer

Now all that's left is to get every person who uses them to generate bug reports to just follow these practices.

13 hours agoGrinningFool

I've had multiple reports with elaborate proofs of concept that boil down to things like calling dlopen() on a path to a malicious library and saying dlopen has a security vulnerability.

a day agowat10000

My hunch is that the dumbasses submitting those reports were't actually using coding agent harnesses at all - they were pasting blocks of code into ChatGPT or other non-agent-harness tools and asking for vulnerabilities and reporting what came back.

An "agent harness" here is software that directly writes and executes code to test that it works. A vulnerability reported by such an agent harness with included proof-of-concept code that has been demonstrated to work is a different thing from an "exploit" that was reported by having a long context model spit out a bunch of random ideas based purely on reading the code.

I'm confident you can still find dumbasses who can mess up at using coding agent harnesses and create invalid, time wasting bug reports. Dumbasses are gonna dumbass.

a day agosimonw

I strongly suspect the same thing - that they weren't using agents at all in the reports we've seen, let alone agents with instructions on how to verify a viable attack, a threat model, etc.

5 hours agostaticassertion

All the attackers I’ve known are extremely, pathologically interested in understanding why their exploits work.

a day agoairza

Very often they need to understand it well to chain exploits

a day agopixl97

I mean someone attacking systems at scale for profit.

a day agorwmj

It can't be too long before Claude Code is capable of replication + triage + suggested fixes...

a day ago0xDEAFBEAD

BTW regarding "suggested fixes", an interesting attack would be to report a bug along with a prompt injection which will cause Claude to suggest inserting a vulnerability in the codebase in question. So, it's important to review bug-report-originated Claude suggestions extra carefully. (And watch for prompt injection attacks.)

Another thought is the reproducible builds become more valuable than ever, because it actually becomes feasible for lots and lots of devs to scan the entire codebase for vulns using an LLM and then verify reproducibility.

a day ago0xDEAFBEAD

Would you ever blindly trust it?

a day agoares623

No. I would probably do something like: Have Claude Code replicate + triage everything. If a report gets triaged as "won't fix", send an email to the reporter explaining what Claude found and why it was marked as "won't fix". Tell the reporter they still have a chance at the bounty if they think Claude made a mistake, but they have to pay a $10 review fee to have a human take a look. (Or a $1 LLM token fee for Claude to take another look, in case of simple confabulation.)

Note I haven't actually tried Claude Code (not coding due to chronic illness), so I'm mostly extrapolating based on HN discussion etc.

a day ago0xDEAFBEAD

Yeah they definitely can be true (IME), as there's a massive difference depending on how LLMs are used to the quality of the output.

For example if you just ask an LLM in a browser with no tool use to "find a vulnerability in this program", it'll likely give you something but it is very likely to be hallucinated or irrelevant.

However if you use the same LLM model via an agent, and provide it with concrete guidance on how to test its success, and the environment needed to prove that success, you are much more likely to get a good result.

It's like with Claude code, if you don't provide a test environment it will often make mistakes in the coding and tell you all is well, but if you provide a testing loop it'll iterate till it actually works.

a day agoraesene9

Both are true. Exploits are a very narrow problem with unambiguous success metrics. While also naturally complementing the ingrained persistence of LLMs. Bug reports are much more fuzzy by comparison with open-ended goals that lead to the LLMs metaphorically cheating on their homework to satisfy the prompter who doesn't know any better.

a day agoGoatInGrey

These exploits were costing $50 of API credit each. If you receive 5001 issues from $100 in API spend on bug hunting and one of the issues cost $50 and the other 5000 cost one cent each, and they’re all visually indistinguishable using perfect grammar and familiar cyber security lingo; hard to find the dianond.

a day agoQuadmasterXLII

The point of the post is that the harness generates a POC. It either works or it doesn't.

a day agotptacek

Once your exploit machine is good enough, you can start using stolen credentials to mine more exploits. This is going to be the new version of malware installing bitcoin miners.

21 hours agopjc50

Both are true, the difference is the skill level of the people who use / create programs to coordinate LLMs to generate those reports.

The AI slop you see on curl's bug bounty program[1] (mostly) comes from people who are not hackers in the first place.

In the contrary persons like the author are obviously skilled in security research and will definitely send valid bugs.

Same can be said for people in my space who do build LLM-driven exploit development. In the US Xbow hired quite some skilled researchers [2] had some promising development for instance.

[1] https://hackerone.com/curl/hacktivity [2] https://xbow.com/about

a day agodoomerhunter

If it helps, I read this (before it landed here) because Halvar Flake told everyone on Twitter to read it.

a day agotptacek

I hadn't heard of Halvar Flake but evidently he's a well respected figure in security - https://ringzer0.training/advisory-board-thomas-dullien-halv... mentions "After working at Google Project Zero, he cofounded startup optimyze, which was acquired by Elastic Security in 2021"

His co-founder on optimyze was Sean Heelan, the author of the OP.

a day agosimonw

Yes, Halvar Flake is pretty well respected in exploit dev circles.

a day agotptacek

Sure he can write exploits, but can he cool a beer really fast?

a day ago0xbadcafebee

Depends near entirely on the model being used. A bug report by Opus and a bug report from Gemma3 are not of the same caliber.

20 hours ago_factor

LLMs produce good output and bad output. The trick is figuring out which is which. They excel at tasks where good output is easily distinguished. For example, I've had a lot of success with making small reproducers for bugs. I see weird behavior A coming from giant pile of code B, figure out how to trigger A in a small example. It can often do so, and when it gets it wrong it's easy to detect because its example doesn't actually do A. The people sending useless bug reports aren't checking for good output.

a day agowat10000

Finished exploits (for immediate deployment) don't have to be maintainable, and they only need to work once.

a day agooctoberfranklin

LLMs are both extremely useful to competent developers and extremely harmful to those who aren't.

a day agoronsor

Accurate.

a day agorvz

> We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ.

Scary.

a day agobaxtr

Heh. What is probably really happening is that those states or groups are having their "hackers" analyze common mistakes in vibe coded LLM output and writing by hand generic exploits for that...

a day agonottorp

I'm really confused by the sandbox part. The description kind of mentions it and the limited system syscall, but then just pivots to talking about the exit handlers. It may be just unclear writing, but now I'm suspicious of the whole thing. https://github.com/SeanHeelan/anamnesis-release/?tab=readme-... feels like the author lost track.

If forking is blocked, the exit handler can't do it either. If it's some variant of execve, the sandbox is preserved so we didn't gain much.

Edit: ok, I get it! Missed the "Goal: write exactly "PWNED" to /tmp/pwned". Which makes the sandbox part way less interesting as implemented. It's just saying you can't shell out to do it, but there's no sandbox breakout at any point in the exploit.

a day agoviraptor

Yea, this entire repo/article seems super misleading to me. Not to mention asking it to generate API for OOB R/W primitives is essentially asking it to regurgitate what exists on thousands of github repos and CTF toolkits.

13 hours agojdefr89

It’s not like you needed LLMs for quickjs which already had known and unpatched problems. It’s a toy project. It would be cool to see exploits for something like curl.

13 hours agof311a

The continuous lowering of entry barriers to software creation, combined with the continuous lowering of entry barriers to software hacking is an explosive combination.

We need new platforms which provide the necessary security guardrails, verifiability, simplicity of development, succinctness of logic (high feature/code ratio)... You can't trust non-technical vibe coders with today's software tools when they can't even trust themselves.

a day agosocketcluster

Why did you edit out the third paragraph about finding a single exploit on target being slanted against having to secure a whole system?

a day agotosapple

I was under the impression that once you have a vulnerability with code execution, writing the actual payload to exploit it is the easy part. With tools like pentools and etc is fairly straightforward.

The interesting part is still finding new potential RCE vulnerabilities, and generally if you can demonstrate the vulnerability even without demonstrating an E2E pwn red teams and white hats will still get credit.

a day agodfajgljsldkjag

He's not starting from a vulnerability offering code execution; it's a memory corruption vulnerability (it's effectively a heap write).

a day agotptacek

It's as easy as drawing the rest of the owl, sure.

a day agofrosting1337

two points -

1) it becomes increasingly more dangerous to dl stuff from the internet and just run it, even its opensource, given normally people don't read all of it. for weird repos I'd recomment to do automated analysis with opus 4.5 or the gpt 5.2 indeed.

2) if we assume adversaries are using LLMs to churn exploits 24/7, which we should absolutely do, perhaps the time where we turn the internet off whenever is not needed, is not far.

19 hours agolarodi

...well, just dont download random stuff from the internet and run it on your important machines then? :-))

You are right: 30 years ago, it was safe to go to vendor XY page and download his latest version and it was more or less waterproof. Today with all these mirror sites, very often better SEO ranking than the original, its quite dangerous: In my former bank we had a colleague who installed a browser add-in that he used for years (at home and in the bank); then he got a new notebook, fresh browser, he installed the same extension - but from a different source than the original vendor: unfortunately, this version contained malware and a big transaction was caught by compliance in the very last second, because he wasnt aware of data leakage.

18 hours agoKellyCriterion

> 30 years ago, it was safe to go to vendor XY page and download his latest version and it was more or less waterproof.

You _are_ joking, right? I distinctly remember all sorts of dubious freewarez sites with slightly modified installers. 1997-2000 era. And anti-virus was a thing in MS-DOS even.

16 hours agopnathan

back then we were sharing Shareware or Freeare or PD-Ware by swapping disks and copying magazine disks :-D

but, you are old enough - so mean pages like fosi.da.ru back then? ;-)

16 hours agoKellyCriterion

I don't remember all the places I got software... :)

13 hours agopnathan

...BBS systems e.g....

11 hours agoKellyCriterion

I am working on a little project in my offhours, and asked a non-hacker (but competent programmer) friend to take a run at exploiting it. Great success: my project was successfully exploited.

The industrialization of exploit generation is here IMO.

16 hours agopnathan
[deleted]
a day ago

Your personal data will become more important as time goes by... And you will need to have less trust in having multiple accounts with sensitive data stored [online shopping etc] as they just become vectors to attack.

a day agoytrt54e

This is interesting, but in most cases the challenge is finding a truly exploitable bug. If LLMs can get to the point where they can analyze a codebase and identify vulnerabilities, we're going to see some shit. But as of right now, this looks like a medium-to-low complexity bug that any competent exploit developer could work with easily.

13 hours agoJohnLeitch

I wonder if later challenges would be cheaper if summary of lesser challenges and solutions were also provided? Building up difficulty.

a day agoanabis

reverse engineering code is still pretty average, I'm fare limited in attention and time but LLM are not pulling their weight in this area today, be it compounding errors or in context failures.

a day agoironbound

I would not be shocked to learn that intelligence agencies are using AI tools to hack back into AI companies that make those tools to figure out how to create their own copycat AI.

a day agopianopatrick

I would be shocked if intelligence agencies, being government bodies, have anything better than GitHub Copilot.

a day agojjmarr

They had Google Earth long before Google did...

a day agooctoberfranklin

i doubt they are competent enough to match what private companies are doing

a day agokiririn7

>Recently I ran an experiment where I built agents on top of Opus 4.5 and GPT-5.2 and then challenged them to write exploits for a zeroday vulnerability in the QuickJS Javascript interpreter.

I think the main challenge for hackers is to find 0day vulnerabilities, not writing the actual exploit code.

a day agoDeathArrow

As someone who does it for a living the challenge can be in both. However this article is asking its agents to do CTF like challenges which I am sure the respective LLMs have seen millions of so it can essentially regurgitate a large part of the exploit code. This is especially true for the OOB/RW primitive API.

13 hours agojdefr89

The vulnerability was found by Claude:

>This is true by definition as the QuickJS vulnerability was previously unknown until I found it (or, more correctly: my Opus 4.5 vulnerability discovery agent found it).

19 hours agoGaggiX

It's tempting to say that malware protection needs to be LLM based as well, but it's unlikely that on-machine malware defense can ever match the resources that would be trivially available to attackers.

15 hours agoidiotsecant

The reverse is also true: secure code is difficult to write, and LLMs at scale will make it much easier to develop secure code.

16 hours agoerichocean

The NSO Group going to spawn 10k Claude Code instances now.

a day agoGaggiX

Now?

a day agosaagarjha

My take away: apparently Cyberpunk Hackers of the dystopian future cruising through the virtual world will use GPT-5.2-or-greater as their "attack program" to break the "ICE" (Intrusion Countermeasures Electronics, not the currently politically charged term...).

I still doubt they will hook up their brains though.