[rust-dev] Why focus on single-consumer message passing?

The language documentation currently takes a very opinionated view of concurrency. It focuses on message passing and at times claims that Rust does not have shared memory between tasks. I don't think the language should be taking a position like this, but rather providing useful tools to implement a concurrent application as the developer sees fit. The standard library should offer the `Arc` and `MutexArc` types in `libstd`, along with other useful concurrent data structures. A concurrent hash table split into shards is a very scalable primitive and quite trivial to implement. There's no reason to encode keyed inserts/searches/removals as message passing when it's faster and easier to do them directly.

In my opinion, the most prominent message passing tool should be a multiple-producer/multiple-consumer queue without API sacrifices made at the performance altar. Dropping the single-consumer restriction means that a split between senders and receivers can still be implemented as a policy, but it is no more necessary than a `Stack<T>` wrapper around vectors.

```rust
/// Return a new `Queue` instance, holding at most `maximum` elements.
fn new(maximum: uint) -> Queue<T>;

/// Pop a value from the front of the queue, blocking until the queue is not empty.
fn pop(&self) -> T;

/// Pop a value from the front of the queue, or return `None` if the queue is empty.
fn try_pop(&self) -> Option<T>;

/// Pop a value from the front of the queue, blocking until the queue is not empty or the
/// timeout expires.
fn pop_timeout(&self, reltime: Time) -> Option<T>;

/// Push a value to the back of the queue, blocking until the queue is not full.
fn push(&self, item: T);

/// Push a value to the back of the queue, or return `Some(item)` if the queue is full.
fn try_push(&self, item: T) -> Option<T>;

/// Push a value to the back of the queue, blocking until the queue is not full or the timeout
/// expires. If the timeout expires, return `Some(item)`.
fn push_timeout(&self, item: T, reltime: Time) -> Option<T>;
```

The standard library can then expose more restricted variants for the sake of optimization. A purely wait-free queue with the capacity allocated up-front is obviously useful. The single-consumer restriction may be useful too, but the current implementation presents no performance advantage over a less restricted API.

Supporting selection over multiple queues would involve using kqueue on FreeBSD/OS X and eventfd/epoll on Linux, instead of a condition variable, for the not-empty condition. On Windows, the regular condition variables will work fine. This does have a cost, and may not make sense with the same type.
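For the record, here is a minimal sketch of the proposed bounded multi-producer/multi-consumer queue, written against today's `std::sync::Mutex` and `Condvar` rather than the `libstd` of the post; the `BoundedQueue` name and field layout are mine, and the timeout variants are omitted for brevity:

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

/// Bounded MPMC queue: any thread may push or pop through a shared reference,
/// so a sender/receiver split is a policy choice, not a type-level requirement.
pub struct BoundedQueue<T> {
    inner: Mutex<VecDeque<T>>,
    maximum: usize,
    not_empty: Condvar,
    not_full: Condvar,
}

impl<T> BoundedQueue<T> {
    pub fn new(maximum: usize) -> BoundedQueue<T> {
        BoundedQueue {
            inner: Mutex::new(VecDeque::with_capacity(maximum)),
            maximum,
            not_empty: Condvar::new(),
            not_full: Condvar::new(),
        }
    }

    /// Block until the queue is not empty, then pop from the front.
    pub fn pop(&self) -> T {
        let mut queue = self.inner.lock().unwrap();
        while queue.is_empty() {
            queue = self.not_empty.wait(queue).unwrap();
        }
        let item = queue.pop_front().unwrap();
        self.not_full.notify_one();
        item
    }

    /// Pop without blocking, returning `None` if the queue is empty.
    pub fn try_pop(&self) -> Option<T> {
        let mut queue = self.inner.lock().unwrap();
        let item = queue.pop_front();
        if item.is_some() {
            self.not_full.notify_one();
        }
        item
    }

    /// Block until the queue is not full, then push to the back.
    pub fn push(&self, item: T) {
        let mut queue = self.inner.lock().unwrap();
        while queue.len() == self.maximum {
            queue = self.not_full.wait(queue).unwrap();
        }
        queue.push_back(item);
        self.not_empty.notify_one();
    }

    /// Push without blocking, returning `Some(item)` if the queue is full.
    pub fn try_push(&self, item: T) -> Option<T> {
        let mut queue = self.inner.lock().unwrap();
        if queue.len() == self.maximum {
            return Some(item);
        }
        queue.push_back(item);
        self.not_empty.notify_one();
        None
    }
}
```

The timeout variants would follow the same shape using `Condvar::wait_timeout`; the kqueue/eventfd machinery only becomes necessary once you want to select across several such queues.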
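The sharded hash table mentioned above really is a few dozen lines. A sketch in current Rust, using `std::sync::Mutex` around plain `HashMap` shards (the `ShardedMap` name and the hash-modulo shard choice are illustrative assumptions):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

/// A hash table split into independently locked shards: a keyed
/// insert/search/remove only contends with operations that hash
/// to the same shard.
pub struct ShardedMap<K, V> {
    shards: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V> ShardedMap<K, V> {
    pub fn new(shard_count: usize) -> ShardedMap<K, V> {
        ShardedMap {
            shards: (0..shard_count)
                .map(|_| Mutex::new(HashMap::new()))
                .collect(),
        }
    }

    /// Pick a shard by hashing the key.
    fn shard(&self, key: &K) -> &Mutex<HashMap<K, V>> {
        let mut hasher = DefaultHasher::new();
        key.hash(&mut hasher);
        &self.shards[(hasher.finish() as usize) % self.shards.len()]
    }

    pub fn insert(&self, key: K, value: V) -> Option<V> {
        self.shard(&key).lock().unwrap().insert(key, value)
    }

    /// Returns a clone of the value, since a reference into the map
    /// cannot outlive the shard's lock guard.
    pub fn get(&self, key: &K) -> Option<V>
    where
        V: Clone,
    {
        self.shard(key).lock().unwrap().get(key).cloned()
    }

    pub fn remove(&self, key: &K) -> Option<V> {
        self.shard(key).lock().unwrap().remove(key)
    }
}
```

Nothing about this needs message passing; each operation takes one lock on one shard, which is exactly the "do it directly" point made above.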