Handling integers in Swift is not the easiest task, especially when I need a generic function that can do some work on any kind of integer.

There are 11 types that can be considered integers:

Int8

UInt8

Int16

UInt16

Int32

UInt32

Int64

UInt64

Int

UInt

Bit (I'll skip this one here; I wrote a separate post about Bit)

Most of them conform to an enormous number of protocols. For example, the protocol hierarchy for Int spans 26 protocols.



Thankfully, some of them are "empty" protocols used to describe characteristics of the type, e.g. Strideable, ForwardIndexType, RandomAccessIndexType, IntegerType, SignedIntegerType, etc.

My goal here is to create a generic function that will handle any integer value and perform some operation on it. In this case I want to build a function that takes an array of bytes and returns an integer value built from those bytes:

```swift
let bytes: [Byte] = [0xFF, 0xFF, 0xFF, 0xFF]
let result: UInt32? = integerWithBytes(bytes)
```

The function is named integerWithBytes(), and the sample input value is

[0xFF, 0xFF, 0xFF, 0xFF]

a 4-byte (32-bit) value equal to 4294967295, with the following bit representation:

11111111 11111111 11111111 11111111

More or less, what I want to achieve can be done with old-world NSData and -[NSData getBytes:]:

```swift
func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T {
    var i: T = 0
    let data = NSData(bytes: bytes, length: bytes.count)
    data.getBytes(&i, length: sizeofValue(i))
    return i
}
```

but that's not what I'm after here. I want to do it the Swift way, without touching raw memory.

Let's play.

Generic function

Let's build a generic function now. From the hierarchy of protocols I found that I could use the protocol IntegerType as the common denominator of "integerness" for a given value.

First, a naive version of the function, with a parameter that conforms to the protocol IntegerType, is as follows:

```swift
func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }

    var i: T = 0
    for (var j = 0; j < sizeof(T); j++) {
        i = i | (bytes[j] << (j * 8)) // error!
    }
    return i
}
```

error: could not find an overload for '<<' that accepts the supplied arguments - because the shift operator '<<' is defined for every integer type separately, but not for my generic type T: IntegerType

```swift
func <<(lhs: UInt64, rhs: UInt64) -> UInt64
func <<(lhs: Int64, rhs: Int64) -> Int64
func <<(lhs: UInt, rhs: UInt) -> UInt
func <<(lhs: Int, rhs: Int) -> Int
func <<(lhs: Int32, rhs: Int32) -> Int32
func <<(lhs: UInt32, rhs: UInt32) -> UInt32
func <<(lhs: Int16, rhs: Int16) -> Int16
func <<(lhs: UInt16, rhs: UInt16) -> UInt16
func <<(lhs: Int8, rhs: Int8) -> Int8
func <<(lhs: UInt8, rhs: UInt8) -> UInt8
```

Here is the first sign that there is probably no simple way to build this generic function. Every type is handled separately, and signed integers are separated from unsigned ones. According to the error message, I can't apply the shift operation '<<' to a type described as T: IntegerType... but my function needs a return value of the given type T, and it has to be an integer.

Since I know that Int is the largest possible integer (Int32 or Int64, depending on the platform), I think I can use it for my bitwise shift operation. The sequence is as follows: first convert the value to Int, then shift, then convert back to the given type T. It could look like this:

```swift
let i: T = T(Int(bytes[j]) << Int(j * 8)) // error
```

error: could not find an overload for 'init' that accepts the supplied arguments - Oops, this one is because the initializers are not defined by any protocol! I can find them on every single struct: Int, UInt, Int8, UInt8, etc... but they are not formalized as a protocol. I assumed that this is a "de facto protocol" and built one named GenericIntegerType

```swift
protocol GenericIntegerType: IntegerType {
    init(_ v: Int)
    init(_ v: UInt)
    init(_ v: Int8)
    init(_ v: UInt8)
    init(_ v: Int16)
    init(_ v: UInt16)
    init(_ v: Int32)
    init(_ v: UInt32)
    init(_ v: Int64)
    init(_ v: UInt64)
}
```

My function now looks as follows:

```swift
func integerWithBytes<T: GenericIntegerType>(bytes: [UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }

    var i: T = 0
    for (var j = 0; j < sizeof(T); j++) {
        i = i | T(Int(bytes[j]) << Int(j * 8)) // ok
    }
    return i
}
```

At that point I thought the problem was solved. I was wrong.

Sign matters

As soon as I started the first tests, I found that it doesn't work for some types:

```swift
let result: UInt32? = integerWithBytes(bytes) // ok
let result: Int32? = integerWithBytes(bytes)  // error
let result: UInt8? = integerWithBytes(bytes)  // ok
let result: UInt64? = integerWithBytes(bytes) // error
```

Obviously there is room for improvement ;)
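A quick standalone check (a minimal sketch, independent of the function above) shows why the signed variant fails: with the bytes [0xFF, 0xFF, 0xFF, 0xFF], the last iteration computes Int(0xFF) << 24, and that intermediate value does not fit into Int32, so the conversion T(...) traps:

```swift
// Why `let result: Int32? = integerWithBytes(bytes)` blows up:
// the intermediate value is computed in Int, but exceeds Int32.max.
let shifted = Int(0xFF) << 24             // 4278190080 on a 64-bit platform
let fitsInt32 = shifted <= Int(Int32.max) // false — Int32(shifted) would trap
print(fitsInt32)
```

A similar overflow, just with a 56-bit shift, is behind the UInt64 failure.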

First of all... I need to handle signed and unsigned types separately! This is the major drawback: I can't (maybe you can) build a single generic function.

Ok, so now I have two versions, one for unsigned integers:

```swift
func integerWithBytes<T: GenericIntegerType where T: UnsignedIntegerType>(bytes: [UInt8]) -> T? {
    (...)
    var i: UIntMax = 0
    for (var j = 0; j < maxBytes; j++) {
        i = i | T(bytes[j]).toUIntMax() << UIntMax(j * 8)
    }
    (...)
}
```

and one for signed:

```swift
func integerWithBytes<T: GenericIntegerType where T: SignedIntegerType>(bytes: [UInt8]) -> T? {
    (...)
    var i: IntMax = 0
    for (var j = 0; j < maxBytes; j++) {
        i = i | T(bytes[j]).toIntMax() << (j * 8).toIntMax()
    }
    (...)
}
```

I'm in good shape at this point.

Depending on the type expected in the context, the appropriate function is called. I decided to use IntMax and UIntMax here as the largest integer types and perform the shift operations on them.
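The idea behind the unsigned path can be sketched in isolation (a simplified, non-generic version for illustration): accumulate the little-endian bytes in the widest unsigned type, where every shift is well-defined:

```swift
// Assemble little-endian bytes in UInt64 (what UIntMax aliases to),
// so no intermediate shift can overflow a narrower type.
let bytes: [UInt8] = [0xFF, 0xFF, 0xFF, 0xFF]
var i: UInt64 = 0
var j = 0
while j < bytes.count {
    i = i | (UInt64(bytes[j]) << UInt64(j * 8))
    j += 1
}
// i == 0xFFFFFFFF (4294967295)
```

The generic versions do the same thing; the hard part is only getting the result back out as T.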

Then another issue pops up:

```swift
return i // error
```

error: IntMax is not convertible to 'T' - This one is because, starting with Swift 1.2, I have to explicitly cast types with the keyword as.

```swift
return i as? T // ok
```

Better, but still...

For some types the result is NOT right:

```swift
let result: Int32? = integerWithBytes(bytes) // nil
```

while the expected value for the signed integer is -1. I think I should use the bitPattern: initializer to solve this issue.
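The bitPattern: initializer reinterprets the bits instead of converting the value, which is exactly what's needed here. A concrete (non-generic) example of the difference:

```swift
// Converting the value 0xFFFFFFFF to Int32 would trap (it exceeds Int32.max),
// but reinterpreting the same 32 bits as a signed value gives -1.
let u: UInt32 = 0xFFFFFFFF
let s = Int32(bitPattern: u) // -1 (two's complement)
print(s)
```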

This is going to be madness!

Protocols madness

I came up with two new protocols to formalize another de facto protocol: applying bits to a value with the bitPattern: initializer:

```swift
protocol GenericSignedIntegerBitPattern {
    init(bitPattern: UIntMax)
    init(truncatingBitPattern: IntMax)
}

protocol GenericUnsignedIntegerBitPattern {
    init(truncatingBitPattern: UIntMax)
}
```

Then I have to adopt the new protocols on all the integer types. For some types I have to add support for init(bitPattern: UIntMax). Notice that Int64 and UInt64 are slight exceptions to the rule.

```swift
extension Int: GenericIntegerType, GenericSignedIntegerBitPattern {
    init(bitPattern: UIntMax) {
        self.init(bitPattern: UInt(truncatingBitPattern: bitPattern))
    }
}

extension UInt: GenericIntegerType, GenericUnsignedIntegerBitPattern {}

extension Int8: GenericIntegerType, GenericSignedIntegerBitPattern {
    init(bitPattern: UIntMax) {
        self.init(bitPattern: UInt8(truncatingBitPattern: bitPattern))
    }
}

extension UInt8: GenericIntegerType, GenericUnsignedIntegerBitPattern {}

extension Int16: GenericIntegerType, GenericSignedIntegerBitPattern {
    init(bitPattern: UIntMax) {
        self.init(bitPattern: UInt16(truncatingBitPattern: bitPattern))
    }
}

extension UInt16: GenericIntegerType, GenericUnsignedIntegerBitPattern {}

extension Int32: GenericIntegerType, GenericSignedIntegerBitPattern {
    init(bitPattern: UIntMax) {
        self.init(bitPattern: UInt32(truncatingBitPattern: bitPattern))
    }
}

extension UInt32: GenericIntegerType, GenericUnsignedIntegerBitPattern {}

extension Int64: GenericIntegerType, GenericSignedIntegerBitPattern {
    // init(bitPattern: UInt64) already defined
    init(truncatingBitPattern: IntMax) {
        self.init(truncatingBitPattern)
    }
}

extension UInt64: GenericIntegerType, GenericUnsignedIntegerBitPattern {
    // init(bitPattern: Int64) already defined
    init(truncatingBitPattern: UIntMax) {
        self.init(truncatingBitPattern)
    }
}
```

So now I have three new protocols:

GenericIntegerType

GenericUnsignedIntegerBitPattern

GenericSignedIntegerBitPattern

And finally I can build my generic... ahem... generics. One for the unsigned types:

```swift
func integerWithBytes<T: GenericIntegerType where T: UnsignedIntegerType, T: GenericUnsignedIntegerBitPattern>(bytes: [UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }

    let maxBytes = sizeof(T)
    var i: UIntMax = 0
    for (var j = 0; j < maxBytes; j++) {
        i = i | T(bytes[j]).toUIntMax() << UIntMax(j * 8)
    }
    return T(truncatingBitPattern: i)
}
```

And one for the signed types. The signed version is slightly different: here I need some bitPattern casts that are not necessary with unsigned types:

```swift
func integerWithBytes<T: GenericIntegerType where T: SignedIntegerType, T: GenericSignedIntegerBitPattern>(bytes: [UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }

    let maxBytes = sizeof(T)
    var i: IntMax = 0
    for (var j = 0; j < maxBytes; j++) {
        i = i | T(bitPattern: UIntMax(bytes[j].toUIntMax())).toIntMax() << (j * 8).toIntMax()
    }
    return T(truncatingBitPattern: i)
}
```

And finally some tests:

```swift
let bytes: [UInt8] = [0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF]

integerWithBytes(bytes) as Int8?   // -1
integerWithBytes(bytes) as UInt8?  // 255
integerWithBytes(bytes) as Int16?  // -1
integerWithBytes(bytes) as UInt16? // 65535
integerWithBytes(bytes) as Int32?  // -1
integerWithBytes(bytes) as UInt32? // 4294967295
integerWithBytes(bytes) as Int64?  // -1
integerWithBytes(bytes) as UInt64? // 18446744073709551615
```

The code can be found here: https://gist.github.com/krzyzanowskim/c84d039d1542c1a82731

Conclusion

I'm actually surprised that integers are so fragmented. Wouldn't it be great to have one common protocol for all integers, one to rule them all? Madness. In the end I came up with a result I'm not especially proud of: a lot of time spent for a mediocre result. I just hope I'm missing something here, something that could turn this into an easy task.

If you know a better way to solve this puzzle, don't hesitate to contact me or comment on the gist.

Update: There is an interesting discussion in the Hacker News thread.

PS. Cover image: 149a Spicy Mystery Stories, Apr 1936, includes "Pit of Madness" by E. Hoffmann Price.