But every time I suggest to a team I’m working on that we should try modelling the problem with it… people are less than enthusiastic.
These are all great tips, especially starting with the simplest, most abstract model and extending it with refinement if you need to.
That basically means that you write your first spec as abstractly and simply as possible with as little detail about the actual implementation as you can.
If details matter, you use the first spec as a module and write a new spec with the added details. The new spec then comes with an implication: if the detailed spec holds, the first one holds too.
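Roughly, the shape of it looks like this (a toy counter of my own invention, not from any real spec; in practice the two modules would live in separate files):

    ---- MODULE AbstractCounter ----
    EXTENDS Naturals
    VARIABLE count
    Init == count = 0
    Next == count' = count + 1
    Spec == Init /\ [][Next]_count
    ====

    ---- MODULE DetailedCounter ----
    EXTENDS Naturals
    VARIABLES lo, hi    \* added detail: the counter is stored as two digits
    Init == lo = 0 /\ hi = 0
    Next == \/ lo < 9 /\ lo' = lo + 1 /\ hi' = hi
            \/ lo = 9 /\ lo' = 0 /\ hi' = hi + 1
    Spec == Init /\ [][Next]_<<lo, hi>>

    \* the first spec used as a module, plus a substitution saying how the
    \* abstract state is computed from the detailed state
    Abstract == INSTANCE AbstractCounter WITH count <- 10 * hi + lo
    THEOREM Spec => Abstract!Spec    \* if this spec holds, the first one holds
    ====

TLC can check the implication directly: run it on DetailedCounter with Abstract!Spec as the property to verify.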
Anyway, I was kind of mad when I learned TLA+ that it and systems like it aren’t part of the standard curriculum and common practice.
“We built this distributed database that guarantees quorum if you have at least 3 nodes!” someone will excitedly claim. And when you ask how, you get an informal paper and 100k lines of C++ code. Wouldn’t it be nicer if you just had 1k lines of plain old pure maths?
And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.
For me it’s nicer I guess.
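And the heart of the quorum claim above really is just a couple of lines. Something like this (Node and Quorums are names I made up):

    CONSTANTS Node, Quorums

    \* quorums are sets of nodes, and any two quorums intersect, so two
    \* conflicting decisions can never both be accepted by a quorum
    ASSUME /\ Quorums \subseteq SUBSET Node
           /\ \A Q1, Q2 \in Quorums : Q1 \cap Q2 # {}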
I find the same. Even those who are interested in it in theory hit a pretty unforgiving wall when they try to put it into practice. Learning TLA+ is way harder than learning another programming language. I failed repeatedly while trying to "program" via PlusCal. To use TLA+ you have to (re)learn some high-school math and you have to learn to use that math to think abstractly. It takes time and a lot (a lot!) of effort.
Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^
I think it's going to be a "learn these tools or fall behind" thing in the age of AI.
I agree with everything you've said, and about refinements in particular. TLA+ newcomers often think of refinement as an advanced tool, but it's one of the most pragmatic and effective ways to specify.
But I'd like to add some nuance to this:
> And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.
1. A run of a model checker is exactly as perfect as a proof for the checked instance (which, indeed, is a weaker theorem than one that extends to an unbounded set of instances). In fact, it is a proof of the theorem when applied to the instance.
2. A proof is also not perfect for real software. A proof can perfectly convince us of the correctness of an algorithm, which is a mathematical construct, but a software system is not an algorithm; it's ultimately a physical system of silicon and metal elements that carry charge, and a physical system, unlike an algorithm, cannot be proven correct for the simple reason that it's not a mathematical object. Virtually all correctness proofs assume that the hardware behaves correctly, but it only behaves correctly with high probability. My point is only that, unlike for algorithms or abstract theorems, there can be no perfect correctness in physical systems. Increasing the confidence in the algorithm beyond the confidence in the hardware (or human operators) may not actually increase the confidence in the system.
> TLA+ is a formal specification language developed by Leslie Lamport. It is used for designing, modelling, documentation, and verification of programs, especially concurrent systems and distributed systems. TLA+ is considered to be exhaustively-testable pseudocode, and its use likened to drawing blueprints for software systems;
In theory (according to Wikipedia at least) the acronym stands for "Temporal Logic of Actions" ... but the pun in the standard meaning of the acronym TLA (i.e. "Three Letter Acronym") is far too enticing for me to believe the choice was fully accidental. :D
https://en.wikipedia.org/wiki/TLA+
Take something like Google Docs. I'm sure they have some websocket protocol to handle collaborative text editing:
- establish connection
- do a few retries if connection is lost
- always accept edits even when there is no connection
- send queued edits when connection is back
- etc
But what properties can you imagine here?
We never enter an error state? Not really. Error states are unavoidable / outside the control of the system.
We always recover from errors? Not really. Connection might never come back. Or the error requires action from the user (e.g. authentication issues).
1. Eventual consistency: if no new edits are generated, then eventually all connected viewers see the same document.
2. Durability: if the system acknowledges an edit, then that edit is stored permanently in the undo/redo chain of a document.
3. Causal consistency: if the system acknowledges an edit B that depends (for some definition of depend) on edit A, then it must have acknowledged A (instead of e.g. throwing away A due to a conflict and then acknowledging B).
4. Eventual connection: if, after a certain point, the user never fails any part of the handshake process, eventually the user can successfully connect to the document (there are definitely bugs in collaborative tools where users end up never able to connect to a document even with no problems in the connection)
5. Connection is idempotent: Connecting once vs connecting n times has the same result (ensuring e.g. that the process of connecting doesn't corrupt the document)
6. Security properties hold: any user who doesn't have the permissions to view a document is always denied access to the document (because there are sometimes security bugs where an unforeseen set of steps can actually lead to viewing the doc)
7. Order preservation of edits: for any user, even a user with intermittent connection problems, the document they see always has an ordered subset of the edits that user has made (i.e. the user never sees their edits applied out of order)
8. Data structure invariants hold: these documents are ultimately backed by data structures, sometimes complex ones that require certain balancing properties to be true. Make sure that those hold under all edits of the document.
Etc. There are probably dozens of properties you could write and check even for an abstract Google Docs-like system (to say nothing of the particulars of a specific implementation). That's not to say you have to write or verify all of these properties! Even just choosing one or two can give a huge boost in reliability confidence.
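To make the first one concrete: once you have state variables, eventual consistency is a one-liner in temporal logic. A sketch, where Viewers, view and NoNewEdits are names I invented:

    \* view[v] is what (connected) viewer v currently sees; NoNewEdits says
    \* no edit action is enabled. If the system is eventually quiescent
    \* forever, then eventually all viewers agree forever.
    EventualConsistency ==
        <>[]NoNewEdits => <>[](\A v1, v2 \in Viewers : view[v1] = view[v2])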
TCP already provides a model for that one [0] and you can also use logical clocks [1], which is a more generic concept. Once reliability is out of the way and you're already using a queue for the ordering of the edits, all you need to care about is the collaborative aspects.
[0]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#...
[1]: https://dl.acm.org/doi/abs/10.1145/359545.359563
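And for a flavour of [1]: the core of logical clocks is only a few lines of TLA+. A rough sketch, all names mine:

    EXTENDS Naturals
    CONSTANT Proc
    VARIABLES clock, msgs    \* clock[p] : p's logical time; msgs : in flight

    Init == clock = [p \in Proc |-> 0] /\ msgs = {}

    \* sending ticks the sender's clock and timestamps the message
    Send(p, q) ==
        /\ clock' = [clock EXCEPT ![p] = @ + 1]
        /\ msgs' = msgs \cup {[to |-> q, ts |-> clock[p] + 1]}

    \* receiving advances the receiver past the message's timestamp,
    \* which is what makes the clocks respect causal order
    Recv(q) ==
        \E m \in msgs :
            /\ m.to = q
            /\ clock' = [clock EXCEPT ![q] =
                           IF m.ts > @ THEN m.ts + 1 ELSE @ + 1]
            /\ msgs' = msgs \ {m}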
We never enter an error state as long as XYZ properties are satisfied, I guess... I will admit this is probably a naive take; I'm currently trying to go through the TLA+ videos and have trouble wrapping my head around it.
The times I've used TLA+ in anger have been generally related to lower-level "communicating state machines". One was a LoRaWAN handshake with the possibility of lost radio packets, another was a slightly-higher-level Bluetooth message exchange between a phone and a device.
In the first situation I wanted to ensure that the state machine I'd put together couldn't ever deadlock. There's a fairness constraint you can turn on in TLA+ that basically says "at some point in every timeline it will choose every path instead of just looping on a specific failure path". The model I put together ensured that, assuming at some point there was a successful transmission and reception of packets, the state machines on both sides could handle arbitrary packet loss without getting stuck in a bad state indefinitely.
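In TLA+ terms that's a fairness condition conjoined onto the spec. Roughly (vars and ReceivePacket are stand-ins for whatever your spec calls them):

    \* weak fairness (WF): an action that stays enabled is eventually taken.
    \* strong fairness (SF): an action that is enabled again and again is
    \* eventually taken, even if it keeps getting disabled in between.
    Spec == Init /\ [][Next]_vars /\ SF_vars(ReceivePacket)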
On the Bluetooth side it was the converse: a vendor had built the protocol and I believed that it was possible to undetectably lose messages based on their protocol. The model I put together was quite simple and I used it to give me the sequence of events that would cause a message to be lost; I then wrapped that sequence of events up in an email to the vendor to explain a minimal test case to prove that the protocol was broken.
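Not the vendor's protocol, obviously, but the skeleton of that kind of model is tiny. A sketch with invented names (you'd bound sent to keep TLC's state space finite):

    EXTENDS Naturals, Sequences
    VARIABLES sent, chan, recvd

    Init == sent = 0 /\ chan = <<>> /\ recvd = <<>>

    Send == /\ sent' = sent + 1
            /\ chan' = Append(chan, sent + 1)
            /\ UNCHANGED recvd

    \* the radio link may silently drop any message in flight
    Drop == /\ \E i \in 1..Len(chan) :
                   chan' = SubSeq(chan, 1, i - 1) \o SubSeq(chan, i + 1, Len(chan))
            /\ UNCHANGED <<sent, recvd>>

    Recv == /\ chan # <<>>
            /\ recvd' = Append(recvd, Head(chan))
            /\ chan' = Tail(chan)
            /\ UNCHANGED sent

    \* the property the vendor's protocol was supposed to provide: no
    \* undetected loss. TLC refutes it with a trace like Send, Send, Drop,
    \* Recv and hands you exactly the event sequence for the email.
    NoSilentLoss == \A i \in 1..Len(recvd) : recvd[i] = i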
For your specific example, I'd suggest you think about invariants instead of actions. I'd probably split your example into two distinct models though: one for the connection retry behaviour and one for the queued edits.
The invariant for the first model would be pretty straightforward: starting from the disconnected state, ensure that all (fair) paths eventually lead to the connected state. The fairness part is necessary because without it you can end up in an obvious loop of AttemptConnect -> Fail -> AttemptConnect -> Fail... that never ends up connected. A slightly more involved riff on that was a model I put together for an iOS app that was talking to an Elixir Phoenix backend over Phoenix Channels. Not only did it need to end up with a connected websocket but it also needed to successfully join a channel. There were a few more steps to it and the model ended up being somewhat interesting.
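A minimal sketch of that first model and its fairness (all names mine; "connected" is terminal here, so tell TLC not to flag deadlock):

    VARIABLE state
    Init == state = "disconnected"
    Attempt == state = "disconnected" /\ state' = "connecting"
    Succeed == state = "connecting" /\ state' = "connected"
    Fail == state = "connecting" /\ state' = "disconnected"
    Next == Attempt \/ Succeed \/ Fail

    \* SF_state(Succeed) rules out the Attempt -> Fail -> Attempt -> ...
    \* loop: Succeed is enabled infinitely often there, so it must fire
    Spec == Init /\ [][Next]_state /\ WF_state(Attempt) /\ SF_state(Succeed)

    EventuallyConnected == <>(state = "connected")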
The second model in your example is the edits model. What are the invariants here? We completely ignore the websocket part of it and just model it as two (or three!) state machines with a queue in between them. The websocket model handles the reconnect logic. The key invariant here is that edits aren't lost. In the two-state-machine model you're modelling a client that's providing edits and a server that's maintaining the overall state. An obvious invariant is that the server's state eventually reflects the client's queued edits.
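A minimal sketch of that two-state-machine version, with both the safety and liveness parts (every name here is mine):

    EXTENDS Naturals, Sequences
    CONSTANT MaxEdits
    VARIABLES pending, serverLog, nextId
    vars == <<pending, serverLog, nextId>>

    Init == pending = <<>> /\ serverLog = <<>> /\ nextId = 1

    \* the client queues an edit locally, even while offline
    Edit == /\ nextId <= MaxEdits
            /\ pending' = Append(pending, nextId)
            /\ nextId' = nextId + 1
            /\ UNCHANGED serverLog

    \* the connection (modelled implicitly) delivers the head of the queue
    Deliver == /\ pending # <<>>
               /\ serverLog' = Append(serverLog, Head(pending))
               /\ pending' = Tail(pending)
               /\ UNCHANGED nextId

    Next == Edit \/ Deliver
    Spec == Init /\ [][Next]_vars /\ WF_vars(Deliver)

    \* safety: no edit is lost or reordered between client and server
    NoLostEdits == serverLog \o pending = [i \in 1..(nextId - 1) |-> i]

    \* liveness: every queued edit eventually reaches the server
    AllDelivered == \A id \in 1..MaxEdits :
        (\E i \in 1..Len(pending) : pending[i] = id)
            ~> (\E i \in 1..Len(serverLog) : serverLog[i] = id)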
The three-state-machine model is where you're going to get more value out of it. Now you've got TWO clients making edits, potentially on stale copies of the document. How do you handle conflicts? There's going to be a whole sequence of events like "client 1 receives document v123", "client 2 receives document v123", "client 2 makes an edit to v123 and sends to server", "server accepts edit and returns v124", "client 1 makes an edit to v123", "server does...?!"
I haven't reviewed this model at all but a quick-ish google search found it. This is likely somewhat close to how Google Docs works under the hood: https://github.com/JYwellin/CRDT-TLA
The reason is obvious: LLMs got good at writing them, the initial cost to write one went down 99%, the toil of keeping specs up to date with code went down 99%, and some people noticed. The value proposition went from ‘you must be joking unless we’re AWS and you’re proposing an IAM code change’ to ‘plan mode may also emit a TLA+ spec as an intermediate implementation artifact’.
Just like with code, the issue was never about writing a TLA+ spec. It was about writing a good TLA+ spec. Anyone can press a piano key, but not everyone can write the Moonlight Sonata.
Yes, but the accessibility of TLA+, if you come from traditional programming languages, is quite harsh. I have 2 books about it and keep coming back to it every once in a while, but every single time a lot of the investment goes into the exact syntax of certain things. Better tooling like AI autocomplete is a huge step there. I can still think about the spec, but not get stuck on minor details every single time.
The devil is in the details. That's the whole point of formalism. There should be no ambiguity: what's written is exactly what you want written. If you generate a spec and can't prove that what's generated is what you intended, the whole point of even writing it is moot.
Yes, and no. Relevant details. If the formalism bogs you down in tangential or irrelevant stuff, it means it's a poor fit.
But I don't think that's the case here. Most programmers don't have much experience with specs, and much less experience with formalized specs, especially if they come from an imperative background. I think those from declarative backgrounds will be much more at home with TLA+. The imperative style causes people to obsess over the "how", while the declarative style focuses more on the "what", and specs are more of a matter of "what" than "how".
https://www.rfc-editor.org/rfc/rfc5321.html#section-3.1
A lot of declarative languages do a lot of hard work designing an imperative kernel that will support the declarative constructs. It just takes skill to separate the two, just like it takes skill to split code between states, interfaces and implementations.
> if you come from traditional programming languages, is quite harsh
This may be the result of failing to adequately distinguish between the spec and the implementation. PlusCal, as I understand it, is an attempt to bridge this gap for programmers who don't yet have a good grasp of this distinction. So where in TLA+ you would model the observable behavior of a system in terms of transitions in a state machine, PlusCal gives users a language that looks more like implementation. (Frankly, I find the PlusCal approach more distracting and opaque, but I'm also a more abstract thinker than most.)
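A toy contrast (an invented example, not from any real spec). The PlusCal version reads like code:

    (* --algorithm connect
    variable state = "disconnected";
    begin
      Retry:
        while state # "connected" do
          either state := "connected"     \* attempt succeeds
          or     state := "disconnected"  \* attempt fails; loop and retry
          end either;
        end while;
    end algorithm; *)

while the raw TLA+ view of roughly the same behaviour is nothing but a transition relation over observable states:

    Init == state = "disconnected"
    Next == state # "connected" /\ state' \in {"connected", "disconnected"}
    Spec == Init /\ [][Next]_state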
It's interesting btw that Martin Kleppmann lists a couple of proof assistants, but not model checkers like TLA+. With him working extensively on distributed systems, I would have thought that would be on the list as well.
The second story, and highly upvoted, on HN right now is: "AI will make formal verification go mainstream"[1].
[1] https://news.ycombinator.com/item?id=46294574