An understanding of READ_ONCE() and WRITE_ONCE() is important for kernel developers who will be dealing with any sort of concurrent access to data. So, naturally, they are almost entirely absent from the kernel's documentation.
    /*
     * Yes, this permits 64-bit accesses on 32-bit architectures. These will
     * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
     * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
     * (e.g. a virtual address) and a strong prevailing wind.
     */
> There are a couple of interesting implications from this outcome, should it hold. The first of those is that, as Rust code reaches more deeply into the core kernel, its code for concurrent access to shared data will look significantly different from the equivalent C code, even though the code on both sides may be working with the same data. Understanding lockless data access is challenging enough when dealing with one API; developers may now have to understand two APIs, which will not make the task easier.
The thing is, it'll be far less challenging for the Rust code, which will actually define the ordering semantics explicitly. That's the point of rejecting the READ_ONCE/WRITE_ONCE approach: with those, it's unclear what the goal is and what guarantee you actually want.
I suspect that if Rust continues forward with this approach it will basically end up as the code where someone goes to read the actual semantics to determine what the C code should do.
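Concretely, with Rust's std atomics the ordering has to be spelled out at every call site, so there is no way to write the access without naming the guarantee you want (a minimal sketch; the kernel's actual Rust bindings may differ):

    use std::sync::atomic::{AtomicU32, Ordering};

    static STAT: AtomicU32 = AtomicU32::new(0);

    // Relaxed guarantees the access is untorn and nothing more; the absence
    // of any cross-thread ordering is visible right at the call site.
    fn publish_stat(v: u32) {
        STAT.store(v, Ordering::Relaxed);
    }

    fn read_stat() -> u32 {
        STAT.load(Ordering::Relaxed)
    }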
> I suspect that if Rust continues forward with this approach it will basically end up as the code where someone goes to read the actual semantics to determine what the C code should do.
That will also put it in the unfortunate position of being the place that breaks every time somebody adds a bug to the C code.
Anyway, given the cultures involved, it's probably inevitable.
Very interesting. AFAIK the kernel explicitly gives consume semantics to READ_ONCE (and in fact it is not just a compiler barrier on Alpha), so technically lowering it to a relaxed operation is wrong.
Does rust have or need the equivalent of std::memory_order_consume? Famously this was deemed unimplementable in C++.
right, so I would expect that the equivalent of READ_ONCE is converted to an acquire in rust, even if slightly pessimal.
But the article says that the suggestion is to convert them to relaxed loads. Is the expectation to YOLO it and hope that the compiler doesn't break control and data dependencies?
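For concreteness, the pattern at issue looks roughly like this with std atomics (a sketch with made-up names, not actual kernel code):

    use std::ptr;
    use std::sync::atomic::{AtomicPtr, Ordering};

    static SHARED: AtomicPtr<u64> = AtomicPtr::new(ptr::null_mut());

    fn publish(data: Box<u64>) {
        // Release: the initialization of *data happens-before the pointer store.
        SHARED.store(Box::into_raw(data), Ordering::Release);
    }

    fn reader() -> Option<u64> {
        // Relaxed load: on ARM/POWER the hardware orders the dependent
        // dereference after the load of p, but nothing in the language model
        // stops the compiler from breaking that dependency -- hence the question.
        let p = SHARED.load(Ordering::Relaxed);
        if p.is_null() { None } else { Some(unsafe { *p }) }
    }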
It is cheaper on ARM and POWER. But I'm not sure it is always safe. The standard has very complex rules for consume to make sure that the compiler doesn't break the dependencies.
edit: and those rules were so complex that compilers decided they were not implementable or not worth it.
The rules were there to explain what optimizations remained possible. Here no optimization is possible at the compiler level, and only the processor retains any freedom, which we know it won't use.
It is nasty, but it's very similar to how Linux does it (volatile read + __asm__("") compiler barrier).
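The same trick can be spelled in Rust for comparison (a sketch of the idiom, not what the kernel bindings actually do):

    use std::ptr;
    use std::sync::atomic::{compiler_fence, Ordering};

    unsafe fn read_once_style(p: *const u32) -> u32 {
        // Volatile: the compiler must emit exactly this load and may not fuse
        // or hoist it (and in practice it is untorn for aligned word sizes).
        let v = ptr::read_volatile(p);
        // Compiler barrier only: forbids compile-time reordering across this
        // point; the CPU stays free to reorder, just as with __asm__("").
        compiler_fence(Ordering::SeqCst);
        v
    }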
It's a persistent misunderstanding that release-consume is about Alpha. It's not; in fact, Alpha is one of the few architectures where release-consume doesn't help.
In a TSO architecture like x86 or SPARC, every "regular" memory load/store is effectively a release/acquire by default. Using release/consume or relaxed provides no extra speedup on these architectures. In weak memory models, you need to add acquire barriers to get release/acquire semantics. But also, most weak memory models have a basic rule that a data-dependent load has an implicit ordering dependency on the values that computed it (most notably, loading *p has an implicit dependency on p).
The goal of release/consume is to be able to avoid having an acquire fence when you have only those dependencies--to promote a hardware data-dependency rule to a language-level semantic rule. For Alpha's ultra-weak model, you still need the acquire fence in this mode; it doesn't help Alpha one whit. Unfortunately, for various reasons, no one has been able to work out a language-level semantics for consume that compilers are willing to implement (preserving data dependencies through optimizations is a lot more difficult than it appears), so all compilers have remapped consume to acquire, making it useless.
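Which is why the dependent-load pattern today simply pays for the full acquire; sketched with std atomics:

    use std::sync::atomic::{AtomicPtr, Ordering};

    // consume promoted to acquire, i.e. what every compiler gives you now.
    // Free on x86/SPARC (TSO); on ARM/POWER it emits a barrier or a
    // load-acquire that a working consume could have skipped, since the
    // *p dereference already depends on p in hardware.
    fn read_published(slot: &AtomicPtr<u64>) -> Option<u64> {
        let p = slot.load(Ordering::Acquire);
        if p.is_null() { None } else { Some(unsafe { *p }) }
    }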
consume is trivial on Alpha: it is the same as acquire (always needs a #LoadLoad). It is also the same as acquire (and relaxed) on x86 and SPARC (a plain load; #LoadLoad is always implied).
The only place where consume matters is on relaxed-but-not-too-relaxed architectures like ARM and POWER, where consume relies on the implicit #LoadLoad of control and data dependencies.
> Not knowing anything about development of the kernel, does this kind of thing create a two tier Linux development experience?

Not sure if it introduces a tiered experience or not. But reading the article, it appears that the Rust devs advocated for an API that is clearer in its semantics, with the tradeoff that understanding how it interacts with C code now requires understanding two APIs. How this shakes out in practice remains to be seen.
That is my understanding from the outside as well. The core question here should, I think, be whether the adoption and spread of clearer semantics via Rust is worth the potential for confusion and misunderstandings at the boundaries between C and Rust. From the article, it appears that this specific instance actually resulted in identifying issues in the usage of the C APIs, which are getting scrutiny and fixes as a result. That would indicate that the introduction of Rust is pushing the trend line in the right direction, in at least this instance.
That's been largely my experience of RIIR over years of work in numerous contexts: attempting to encode invariants in the type system results in identifying semantic issues, over and over.
edit to add: and I'm not talking about compilation failures so much as design problems: when the meaning of a value is overloaded, or when there's a "you must do Y after X and never before" constraint and you then can't write equivalent code in all cases, and so on. "But what does this mean?" becomes the question to answer.
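A tiny illustration of the "Y after X and never before" case, with made-up types (nothing kernel-specific):

    // Encode the state in the type: finish() doesn't exist until start()
    // has produced a Started, so the wrong order cannot compile.
    struct Idle;
    struct Started;

    impl Idle {
        fn start(self) -> Started {
            Started
        }
    }

    impl Started {
        fn finish(self) {} // consuming self also rules out finishing twice
    }

    fn demo() {
        Idle.start().finish(); // fine
        // Idle.finish();      // compile error: no `finish` on `Idle`
    }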
The problem with atomic_read and atomic_write is that some people will interpret them as "atomic with a sequentially consistent ordering", some as "atomic with a relaxed ordering", and everything in between. It's a fine name for a function that takes an argument specifying the memory ordering [1]. It's not great for anything else.
READ_ONCE and WRITE_ONCE signal that there's more nuance than that, and try to convey it.
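That is, something along these lines (a hypothetical signature, purely to illustrate the naming point):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // "atomic_read" is an honest name here, because the caller must say
    // which guarantee they want; the name no longer has to encode it.
    fn atomic_read(v: &AtomicUsize, order: Ordering) -> usize {
        v.load(order)
    }

    // atomic_read(&x, Ordering::Relaxed) vs. atomic_read(&x, Ordering::Acquire)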
I think “atomic” implies something more than just “once”, because for atomic we customarily consider the memory order of the access, while “once” just implies reading or writing exactly once. Neither is a good name: the kernel developers clearly assumed some kind of atomicity with some kind of memory ordering here, but just calling it “atomic” doesn’t convey that.
[1] E.g. in Rust, anything that takes https://doc.rust-lang.org/std/sync/atomic/enum.Ordering.html