> The Core Idea
> Enter X
> How It Works (Without the PhD)
> Why Y Should Care
...and an incredibly handwavy shallow explanation of why this actually works ("Through a clever sequence of oblivious transfers and what’s called multiplicative-to-additive share conversion, they each compute a partial signature.")
I don't get it. If you want a blog, write a blog. If you don't want a blog, don't write a blog. But why use an LLM to create a slopblog? It just wastes EVERYONE's time and energy. How disappointing.
Not sure if it's AI slop yet, but I also found the core part (the "oblivious transfers") explained too hand-wavily to really understand the properties of this system. I don't want to know all the mathematical details, but I do want to understand who is exchanging what data with whom. "Oblivious transfer" doesn't tell me anything here.
The other (maybe more interesting) question is how this tech would be deployed. So, OK, we have a system where something can only be signed/decrypted/encrypted/etc. if several parties are in agreement. Who should the parties be? How is the threshold itself actually managed?
OP also seems to drift between different usage scenarios here:
- some sort of collectively owned good (like the DAO or resources in a cooperative?) - seems straightforward on a technical level (every owner has a partial key) but also a niche use case and quite inflexible: What happens if an owner drops out or you want to introduce a new one? What happens if you want to change the quorum?
- traditional authentication of individual users against a server, in a federated setup like the fediverse: Seems like the most practical use case. One party is the user, the other is the server, and the verifying parties would be the other servers of the network. But then you have to pick your poison in how you set the quorum: Either the quorum is "any party can decrypt the data", at which point you're no better off than normal password auth; or it's "both parties are needed", which would protect against the user or the server accidentally leaking the key - but then you're back to a single point of failure if either party accidentally loses its key.
- the last scenario would be server-side keys that could cause massive problems if they leaked. But I don't understand at all who should be the other parties here. Also how would this be better than HSMs?
Oblivious transfer - party A creates two random values (x_0 and x_1) and sends them _both_ to party B.
Party B picks one and uses that to compute future values that are sent back to party A _but without telling party A which of the two values they picked_.
In this example I'm hand-wavy because the production math is complicated and confusing - I took a vastly simplified approach that still works functionally for the demonstration without fully implementing the OT protocol.
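The two-message exchange described above can be sketched concretely. This is a toy Chou-Orlandi-style OT in plain Python, not the production protocol: the group parameters, message contents, and variable names are illustrative assumptions, and a real implementation would use a vetted elliptic-curve group and authenticated channels.

```python
# Toy 1-of-2 oblivious transfer (Chou-Orlandi style), stdlib only.
import hashlib
import secrets

p = 2**255 - 19   # a prime; toy group, NOT a vetted DH group
g = 2

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(32, "big")).digest()

def xor(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))

# Party A (sender) holds two 16-byte messages.
m0, m1 = b"secret-zero.....", b"secret-one......"
a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)                      # sent to B

# Party B (receiver) picks choice bit c and hides it inside B_pub:
# B_pub = g^b for c = 0, or A * g^b for c = 1. A cannot tell which.
c = 1
b = secrets.randbelow(p - 2) + 1
B_pub = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p   # sent to A

# A derives two keys; only the one matching B's hidden choice lines up.
k0 = H(pow(B_pub, a, p))
k1 = H(pow(B_pub * pow(A, -1, p) % p, a, p))
e0, e1 = xor(k0, m0), xor(k1, m1)     # both ciphertexts sent to B

# B can derive exactly one key, so it learns one message and nothing
# about the other - and A never learns which one B took.
kc = H(pow(A, b, p))
chosen = xor(kc, e1 if c == 1 else e0)
print(chosen)   # the message for choice bit c = 1
```

The asymmetry is the whole point: A sends both ciphertexts, B's choice bit is blinded inside `B_pub`, and the key derivation only works out for the chosen branch.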
> what happens if an owner drops out or you want to introduce a new one? what happens if you want to change the quorum?
In either of those scenarios, assuming you still have quorum, you can regenerate key shares for the new group under the same public key (and the same underlying, never-revealed private key) by re-running the ceremony with the new participants. Production implementations of the protocol fully flesh this out.
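That re-dealing ceremony can be sketched with plain Shamir shares. This is a toy illustration under assumed parameters (a small Mersenne-prime field, made-up participant IDs and thresholds); real TSS resharing additionally uses commitments so participants can verify each other's sub-shares.

```python
# Toy Shamir resharing: a 2-of-3 quorum re-deals shares of the same
# secret to a brand-new group, without ever reconstructing the secret.
import secrets

q = 2**127 - 1   # a Mersenne prime; toy field for illustration

def share(value, t, ids):
    # Random degree-(t-1) polynomial with constant term `value`;
    # participant i receives the evaluation at x = i.
    coeffs = [value] + [secrets.randbelow(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
            for i in ids}

def lagrange_at_zero(ids):
    # Lagrange coefficients for interpolating the polynomial at x = 0.
    out = {}
    for i in ids:
        num = den = 1
        for j in ids:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        out[i] = num * pow(den, -1, q) % q
    return out

secret = 1234567890
old = share(secret, 2, [1, 2, 3])          # original 2-of-3 dealing

# Quorum {1, 2} re-deals: each member sub-shares its OWN share
# to the new group {4, 5, 6, 7} under a fresh 3-of-4 policy.
quorum = [1, 2]
lam = lagrange_at_zero(quorum)
sub = {i: share(old[i], 3, [4, 5, 6, 7]) for i in quorum}

# Each new member combines its sub-shares into a share of the SAME secret.
new = {j: sum(lam[i] * sub[i][j] for i in quorum) % q for j in [4, 5, 6, 7]}

# Any 3 of the new group can now reconstruct the original secret;
# the old shares can be discarded.
lam_new = lagrange_at_zero([4, 5, 6])
recovered = sum(lam_new[j] * new[j] for j in [4, 5, 6]) % q
print(recovered == secret)   # True
```

The key property: the secret itself never appears anywhere during the re-deal - only Lagrange-weighted sums of sub-shares do.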
> traditional authentication ...
I wouldn't use TSS in that setup. Traditional auth + MFA is more than adequate. The better use case would be where you have a group that needs to demonstrate consensus (like governance for a programming language, multiple parties involved in signing an application release, or even an HOA that needs to vote on policies). In all of these, you'd take an M of N approach (rather than the simplified 2 of 2) for achieving quorum.
Ah, that makes a lot more sense. Thanks for the additional explanations!
Yeah, AI blogs are close to worthless. It’s a circular feed of slop for LLMs to be trained on. If I can just talk to the LLM to get the same content, I don’t want to be reading it directly.
What I want to read is well-researched and deeply considered pieces that do a good job explaining concepts in a fresh way and help me learn something new. Sure, use AI to help get there, but if you haven’t done much research or haven’t thought about it yourself beyond the prompts… I don’t want to read it.
Even just on a theoretical level, I'm not really sure of the use case for this system. For most keys, like SSL certs, this is just too impractical. For anything that has significant business value (like the iOS signing key), I don't think any business would give up all control of such a key to the whims of 3 out of 5 people.
It's to protect against the whims of a small set of people.
If one person holds the signing key to do something critical in your system, they're both a single point of failure and a huge security risk all in one. If you distribute that key to, say, 5 different people, you've mitigated the single point of failure. But now you have 5 folks who can each potentially act unilaterally.
Using a 3 of 5 TSS setup, you've still mitigated the single point of failure (any one or even two folks can go offline and you can still operate) while also protecting against unilateral action. It's a mathematically-enforced version of the "two-man rule." Similar to the way Cloudflare's Red October tool used to work by splitting things between parties: https://blog.cloudflare.com/red-october-cloudflares-open-sou...
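The 3-of-5 quorum idea can be illustrated with plain Shamir secret splitting, the same primitive Red October-style tools build on. One caveat worth hedging: this toy `combine` actually reconstructs the key in one place, whereas real TSS signs without the key ever being assembled. Field size, key bytes, and names are illustrative.

```python
# Toy 3-of-5 secret split: any 3 shareholders can recombine; 2 cannot.
import secrets

q = 2**127 - 1   # Mersenne prime; toy field for illustration

def split(secret, t, n):
    # Degree-(t-1) polynomial with the secret as constant term;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q)
            for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % q
                den = den * (x_i - x_j) % q
        total = (total + y_i * num * pow(den, -1, q)) % q
    return total

key = int.from_bytes(b"signing-key-demo", "big")
shares = split(key, t=3, n=5)

print(combine(shares[:3]) == key)                          # True: any 3 suffice
print(combine([shares[0], shares[4], shares[2]]) == key)   # True: order irrelevant
print(combine(shares[:2]) == key)   # False: 2 shares reveal nothing about the key
```

Any two shareholders going offline changes nothing, and any two colluding shareholders learn nothing - exactly the "two-man rule" property described above.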
The entire point of this is that the complexity is encapsulated on the signing side - not the verifier. So it's more that you would split the keys between systems you control - say the reverse proxy and the application server.
Or one that's checked into your version control (representing that it is your company's code that's running) and one that lives on the server (representing that it is a server your company controls).
Or to take your example - a key in the repo, a key from the dev, and a key from the build server.
As opposed to the whim of one person?
What secret is controlled by one person? That's just not how businesses manage secrets.
You are not consistent here. When talking about only needing a single signing key, you say it is not subject to the whim of one person. When discussing an M-of-N scheme, you think it's just down to the whims of whoever is in that group. That's just not how businesses manage secrets!
The article does touch on HSMs but might be missing the point of them?
> A compromised server no longer means a compromised key
Proper use of an HSM means that even the owner of the private key is not allowed to access it. You sign your messages within the secure context of the HSM. The key never leaves. It cannot become compromised if the system is configured correctly.
You're correct there that proper use means even the owner can't access it. But in a single key scenario they can still act unilaterally. The advantage of TSS is the removal of that level of unilateral action.
You can't get the private key but you can sign with it, which is still plenty bad.
The private key should be tightly scoped to its context of use. I would definitely agree with you if it's one key that rules the entire kingdom.
Not sure I follow? Let's say it is limited to one use only: signing an app.
Since I've got control of the box I can now use it to sign any app. Isn't that bad enough?
Again and again, we've seen that HSMs aren't secure against an attacker with physical access to the device.
Can you point me to an example of a FIPS level 3+ certified device having its private keys compromised due to a defeat of the tamper resistant boundary?
No, if an HSM is compromised, everything is lost.