

RalfJung commented on June 22, 2024

Consume relies on a notion of "dependency", and there is no credible proposal for how to define "dependency" in a language that is aggressively optimized (other than by introducing some form of provenance on integers, but that's not going to happen). What the hardware does is largely irrelevant here (as is often the case when discussing Rust semantics); whatever semantics we want has to first pass the test of fire of being compatible with all sorts of analyses and optimizations -- and then we can ask whether it can be efficiently mapped to hardware.

Adding "consume" to Rust is definitely out-of-scope for this issue.

In the Rust memory model (basically, C++ without "consume"), there's not really any way to implement a Mutex that doesn't establish synchronization between release and future acquire calls. (In fact, the memory orderings are literally named after these lock operations!) Furthermore, using Mutex<()> to guard data "outside" the Mutex is, to my knowledge, a generally accepted-to-be-correct pattern. So I don't think there is a realistic alternative to adding such a guarantee.
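For illustration, the "Mutex<()> guarding data outside the Mutex" pattern mentioned above can be sketched as follows. This is a minimal, hypothetical example (the `External` type and its methods are names of my choosing, not std API); note that it is only sound if `lock()` synchronizes-with the preceding unlock, which is exactly the guarantee under discussion:

```rust
use std::cell::UnsafeCell;
use std::sync::Mutex;

// Hypothetical illustration: `data` lives outside the Mutex, but every
// access happens while holding `lock`. This relies on lock()/unlock()
// establishing synchronizes-with (and hence happens-before).
pub struct External {
    lock: Mutex<()>,
    data: UnsafeCell<u64>,
}

// SAFETY: all access to `data` is serialized through `lock`.
unsafe impl Sync for External {}

impl External {
    pub fn new(v: u64) -> Self {
        External { lock: Mutex::new(()), data: UnsafeCell::new(v) }
    }

    pub fn set(&self, v: u64) {
        let _guard = self.lock.lock().unwrap();
        // SAFETY: the lock is held for the duration of the write.
        unsafe { *self.data.get() = v }
    }

    pub fn get(&self) -> u64 {
        let _guard = self.lock.lock().unwrap();
        // SAFETY: the lock is held, so no concurrent writer exists.
        unsafe { *self.data.get() }
    }
}

fn main() {
    let ex = External::new(1);
    ex.set(7);
    assert_eq!(ex.get(), 7);
}
```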


chorman0773 commented on June 22, 2024

I don't think any other implementation exists that is sound, no.
Consider Mutex<Box<T>>. For it to be sound to access the memory inside the Box, every access from all previous critical sections must happen-before the next time you can access the Box. This implies at least synchronizes-with or dependency-ordered before; otherwise you don't get happens-before when two different threads are involved.

Specifying it as a fence would be overly restrictive, but saying synchronizes-with is, IMO, required for the API to be sound.

Edit: Yes, Mutex could probably use dependency-ordered before, depending on exactly what the preconditions on Send impls are. However, given that consume is a hot mess in C++, to the point where it's outright discouraged as of C++17, I don't think we want to use dependency ordering. It would be nice if, in Rust, happens-before simply meant happens-before.
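For reference, the Acquire/Release shape being discussed is what a minimal lock implementation looks like. This is only an illustrative sketch (std's Mutex is far more sophisticated, e.g. it parks threads instead of spinning):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal spinlock sketch. The Release store in unlock() synchronizes-with
// the Acquire compare-exchange in lock() that later observes it, which is
// exactly the guarantee needed to make patterns like Mutex<Box<T>> sound.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Acquire on success pairs with the Release store in unlock().
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let l = SpinLock::new();
    l.lock();
    l.unlock();
}
```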


chorman0773 commented on June 22, 2024

I don't think we need to explicitly say "It does an atomic operation".

We can just say

The return from a call to Mutex::lock synchronizes-with a preceding call to MutexGuard::drop that unlocks that Mutex.


briansmith commented on June 22, 2024

We can just say

The return from a call to Mutex::lock synchronizes-with a preceding call to MutexGuard::drop that unlocks that Mutex.

That sounds good. However, do we have a place that says that synchronizes-with implies happens-before? I only see it implicitly in https://doc.rust-lang.org/std/sync/atomic/fn.fence.html:

A fence ‘A’ which has (at least) Release ordering semantics, synchronizes with a fence ‘B’ with (at least) Acquire semantics, if and only if there exist operations X and Y, both operating on some atomic object ‘M’ such that A is sequenced before X, Y is sequenced before B and Y observes the change to M. This provides a happens-before dependence between A and B.

Whereas the docs for Ordering::Release and Ordering::Acquire only indirectly imply that AtomicX::store(x, Ordering::Release) and AtomicX::load(Ordering::Acquire) establish a happens-before relationship, without using the terms synchronizes-with or happens-before.

It seems like if we want to use the term synchronizes-with for Mutex, then we should change the documentation of Release/Acquire to also use that term, and then somewhere give a definition of synchronizes-with that explicitly mentions happens-before.
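As a concrete illustration of the Release/Acquire guarantee being discussed, here is the classic message-passing sketch (a hypothetical demo, with names of my choosing):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

// The Release store to READY synchronizes-with the Acquire load that
// observes `true`, so the Relaxed write of 42 happens-before the read
// below and is guaranteed to be visible.
fn demo() -> u32 {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);    // payload write
        READY.store(true, Ordering::Release); // publish
    });

    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }

    producer.join().unwrap();
    DATA.load(Ordering::Relaxed) // guaranteed to see 42
}

fn main() {
    assert_eq!(demo(), 42);
}
```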

https://preshing.com/20130823/the-synchronizes-with-relation/#other-ways-to-achieve-synchronizes-with and some other places also mention that spawning a thread synchronizes-with (Ordering::Release) the execution of the closure on the new thread (Ordering::Acquire).
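The thread-spawn case can be demonstrated directly; this small sketch relies only on the spawn/join guarantees, with no explicit atomics:

```rust
use std::thread;

fn demo() -> usize {
    let mut v = vec![1u32, 2, 3];
    v.push(4); // plain, non-atomic write before the spawn

    let handle = thread::spawn(move || {
        // Spawning the thread synchronizes-with the start of the closure,
        // so the push above is guaranteed to be visible here.
        v.len()
    });

    // Likewise, join() synchronizes-with the thread's completion, so the
    // parent thread can safely observe anything the child wrote.
    handle.join().unwrap()
}

fn main() {
    assert_eq!(demo(), 4);
}
```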

I believe std::sync::Once and OnceLock also need similar language: every call to Once::call_once()/OnceLock::get_or_init()/OnceLock::get_or_try_init() synchronizes-with (Ordering::Acquire) the successful (i.e. non-panicking, non-Err) completion of the initialization function (Ordering::Release). (Note that the documentation for std::sync::Once currently seems to say something much stronger, which is a separate issue.)
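A sketch of the OnceLock guarantee described above, using the stable std API (the static name and closure are mine):

```rust
use std::sync::OnceLock;
use std::thread;

static TABLE: OnceLock<Vec<u32>> = OnceLock::new();

// Every get_or_init() call that observes the value synchronizes-with the
// successful completion of the initializer, so all threads see a fully
// initialized Vec even though only one of them runs the closure.
fn demo() -> usize {
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| TABLE.get_or_init(|| vec![1, 2, 3]).len()))
        .collect();

    handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .sum()
}

fn main() {
    assert_eq!(demo(), 12); // 4 threads, each sees len 3
}
```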


veera-sivarajan commented on June 22, 2024

@rustbot label -needs-triage +A-atomic


chorman0773 commented on June 22, 2024

That sounds good. However, do we have a place that says that synchronizes-with implies happens-before?

Yes, the C++ Standard: [intro.races]#9, [intro.races]#10, and [intro.races]#11 (https://eel.is/c++draft/intro.races#9).


alexpyattaev commented on June 22, 2024

Explicitly documenting Mutex as having what amounts to "side effects" (a.k.a. spooky action at a distance) w.r.t. unrelated atomics does sound somewhat inappropriate for Rust in general, especially in environments that have the machinery to support a mutex but not necessarily the side effects you find on, e.g., an ARM implementation of it. If you want to enforce a particular ordering w.r.t. a particular atomic, maybe that is what you should state in the code, using an appropriate fence call? Expecting a mutex to do that implicitly essentially relies on the mutex's internal implementation. I might be missing something very important here, of course, but exposing these internal details in the docs may not be the best idea for future portability of Mutex to newer architectures, unless someone can certify that it is impossible to implement a mutex any other way.


alexpyattaev commented on June 22, 2024

happens-before simply means happens-before.

That does indeed sound like "the rust way".

However, from what I can see in the C++ specs, some claim that memory_order_consume was in fact retained in C++20, with its former definition discouraged since C++17. It is all a bit wonky in there, so I'd just ignore the C++ spec entirely for the purpose of this discussion, especially since LLVM has no real implementation of consume anyway. What truly matters is what CPUs can or cannot do in principle.

With respect to what consume ordering was supposed to do: as far as I can see, it was intended to support precisely use cases such as Rust's Mutex. As a former hardware engineer, I can envision mechanisms that would enable hardware-level simplifications when the Mutex-protected area is very small (e.g. one cache line), as I could plausibly do less work than in the case of a full "happens-before" synchronization. Arguably, the main benefits would be seen on NUMA machines and GPUs, and we should look at what is done in, e.g., CUDA to be certain.

On the other hand, I'd imagine this sort of aggressive optimization, even if we find a way to implement consume "correctly", would break horribly around unsafe code. The compiler would need a far better understanding of what depends on what to not mess this up, and unsafe/interior-mutability code will not make that easy at all.

As a solution to the documentation problem, a generic parameter could be provided on std's Mutex to toggle whether it uses acquire-release or acquire-consume ordering, with acquire-release being the default. That way we are explicit about the side effects that may occur, or the relations that can be relied upon, and can document them as such. If/when memory_order_consume becomes a real thing, users can switch to it without breaking the world.
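Purely as a sketch of this proposal: none of these types exist in std today, and the marker types and the `OrderedMutex` name are hypothetical. Under current compilers both variants would behave identically, since consume is promoted to acquire anyway:

```rust
use std::marker::PhantomData;
use std::sync::{Mutex as StdMutex, MutexGuard};

// Hypothetical marker types selecting the ordering discipline.
pub struct AcqRel;     // today's acquire-release semantics (the default)
pub struct AcqConsume; // would become meaningful if consume ever materializes

pub struct OrderedMutex<T, O = AcqRel> {
    inner: StdMutex<T>, // delegate to std; only the type-level tag differs
    _ordering: PhantomData<O>,
}

impl<T, O> OrderedMutex<T, O> {
    pub fn new(value: T) -> Self {
        OrderedMutex { inner: StdMutex::new(value), _ordering: PhantomData }
    }

    pub fn lock(&self) -> MutexGuard<'_, T> {
        self.inner.lock().unwrap()
    }
}

fn main() {
    // Callers opting into consume would write OrderedMutex<_, AcqConsume>.
    let m: OrderedMutex<u32> = OrderedMutex::new(5);
    assert_eq!(*m.lock(), 5);
}
```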



chorman0773 commented on June 22, 2024

It is all a bit wonky in there, so I'd just ignore the C++ spec entirely for the purpose of this discussion

Considering that is also Rust's Specification (for atomics/Multithreaded Memory Model), I don't think we can.

You make a good point; in this case the docs indeed should acknowledge this. If someone ever implements hardware that can do better, then we can worry about it. :)

Hardware does support consume, FWIW: ARMv8's "default" memory order is dependency-ordered, a.k.a. consume. However, no optimizing compiler I'm aware of supports consume (all opt to promote it to acquire).


alexpyattaev commented on June 22, 2024

Does anyone have a functional example of a mutex built around consume memory ordering?


briansmith commented on June 22, 2024

Does anyone have a functional example of a mutex built around consume memory ordering?

I think it would be very cool if Rust could get a useful (higher-performing) consume-based Mutex. However, I think much code already exists that expects acquire-release, so without some kind of opt-in (probably a separate type), Mutex needs to have acquire-release semantics by default.

(Acquire-release is usually introduced conceptually as working "like acquiring and releasing a mutex", so it would be humorous, but confusing, if Rust's actual Mutex didn't have acquire-release semantics.)


chorman0773 commented on June 22, 2024

Furthermore, using Mutex<()> to guard data "outside" the Mutex is, to my knowledge, a generally accepted-to-be-correct pattern

I've done this before. In my emulator, I do this in my implementation of physical memory:

#[derive(Debug)]
pub struct SysMemory {
    page_limit: u32,
    // SAFETY:
    // page_count is synchronized by the `pages` lock despite not being directly covered by it.
    // It is sound to perform read accesses to this cell from a thread that owns a shared read lock to `pages` or the exclusive write lock to `pages`
    // It is sound to perform write accesses to this cell from a thread that owns the exclusive write lock to `pages`
    page_count: SyncUnsafeCell<u32>,
    pages: RwLock<HashMap<LeU32, RwLock<Box<Page>>>>,
}

(Yes, that's a nested RwLock; I don't want reading a page to conflict with writing to an already-allocated page.)

