Centril: it doesn’t strike me as very large change either at the compiler level proviso that const generics already exists

Parsing, type checking, etc. aren’t that difficult, but for layout and backend purposes, strange-sized integers are almost but not quite entirely unlike {u,i}{8,16,32,64,128} (more on this below).

Centril: and neither is it a big addition complexity-budget wise

Probably not, but it is conceptually and philosophically a departure from the “we offer a selected set of numeric primitives that match things many CPUs can operate on” view. That probably doesn’t describe everyone’s opinion on primitive types, but it is, today, a valid perspective.

Centril: The rationale for “bool remain entirely separate types” is not discussed. It might be the right choice… but there are no coherence problems here, so we could make u1 and bool the same types nominally. What problems do you see with that approach?

There are quite a few things you can do with integers but not with bools (arithmetic), and vice versa (branching, boolean operators). So making this work would require not just special-casing 1-bit integers to extend their capabilities, but also arbitrarily restricting them in other ways compared to all other integer types. Such arbitrary differences based on parameter values are pretty bad for generic code (the poster child is C++'s std::vector<bool>).
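To illustrate the generic-code hazard, here is a minimal sketch (u1 is of course hypothetical, and `double` is just an illustrative function):

```rust
use std::ops::Add;

// A generic numeric function naturally assumes that every integer type
// supports arithmetic:
fn double<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

// Works uniformly for u8, u16, u32, ... today. If `u1` were literally `bool`,
// then `double::<u1>` would either require `bool: Add` (arithmetic on bool),
// or `uN` would stop being uniform in N: the std::vector<bool> problem in
// another guise.
```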

Also, it invites the bikeshed of whether booleans should be signed or unsigned.

Centril: The note that “For the moment, a monomorphisation error will occur if N > 128 , to minimise implementation burden.” does not square with how Rust avoids monomorphisation errors

Funnily enough, we already do have monomorphization-time errors due to type sizes.

Centril: The restriction 0 <= N <= 128 should be explained. Why is it technically necessary? It seems to complicate things instead…?

As mentioned above, code generation for arbitrarily large integers is a challenge. Loading, storing, bitwise operations, and other things that can be broken down into independent byte-sized chunks are conceptually simple (but someone still has to write the code to do that!). Arithmetic, however, can’t be decomposed that simply.
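A quick sketch of the difference, on a hypothetical N-bit integer stored as little-endian bytes (the function names are made up):

```rust
// Bitwise AND decomposes into fully independent byte-sized chunks:
fn bitand(a: &[u8], b: &[u8], out: &mut [u8]) {
    for i in 0..out.len() {
        out[i] = a[i] & b[i]; // no chunk depends on any other
    }
}

// ...but addition threads a carry through every chunk, so the chunks are
// coupled and the operation can't be split up the same way:
fn add(a: &[u8], b: &[u8], out: &mut [u8]) {
    let mut carry = 0u16;
    for i in 0..out.len() {
        let sum = a[i] as u16 + b[i] as u16 + carry;
        out[i] = sum as u8;
        carry = sum >> 8; // carry flows into the next chunk
    }
}
```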

For smaller types, you could promote to the next largest normal integer size and truncate the result (though this, too, has to be implemented), but a 1024-bit × 1024-bit multiply or remainder or float<->int cast is highly non-trivial. With {i,u}128 we could mostly lean on LLVM and compiler-rt, but we still had to deal with backends not handling it properly and ultimately rewrote the compiler-rt routines in Rust. For types beyond 128 bits, we’ll be entirely on our own. We probably can’t even take code from bignum libraries, since most (all?) of those either allocate heap memory or assume the results fit in a pre-allocated buffer.
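The promote-and-truncate strategy for the smaller widths can be sketched like this (a hypothetical u24 represented in a u32, wrapping semantics assumed; the names are made up):

```rust
// All values of the hypothetical u24 fit in the low 24 bits of a u32.
const MASK24: u32 = (1 << 24) - 1;

// Multiply two u24 values: do the arithmetic in the next largest normal
// integer size (u32), then truncate the result back to 24 bits.
fn mul_u24(a: u32, b: u32) -> u32 {
    debug_assert!(a <= MASK24 && b <= MASK24);
    a.wrapping_mul(b) & MASK24
}
```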

Well… there actually is a “fixed size bignum” implementation in libcore, but it was written for float<->decimal conversions and is correspondingly specialized. Not to mention that lowering “primitive” operations to calls to language items is a whole new can of worms for various reasons, though one we’ll probably have to face eventually.