Let’s get an idea of how we’re using the heap, starting with a look at the distribution of object sizes and the number of blocks used for each object size.

Table 1. Heap usage by object size

| Object size (bytes) | No. objects | Size of objects | No. blocks | Size of blocks | Utilisation |
|--------------------:|------------:|----------------:|-----------:|---------------:|------------:|
| 16    | 23,254,210 | 354MB | 146,825 | 573MB | 61%  |
| 32    | 747,766    | 22MB  | 16,185  | 63MB  | 36%  |
| 48    | 4,258,800  | 194MB | 54,243  | 211MB | 92%  |
| 64    | 268,858    | 16MB  | 6,070   | 23MB  | 69%  |
| 80    | 7,589      | 592KB | 150     | 600KB | 98%  |
| 96    | 18         | 1KB   | 2       | 8KB   | 21%  |
| 112   | 410,040    | 43MB  | 11,391  | 44MB  | 98%  |
| 128   | 2,599      | 324KB | 89      | 356KB | 91%  |
| 144   | 68,513     | 9MB   | 2,447   | 9MB   | 98%  |
| 160   | 401,600    | 61MB  | 16,064  | 62MB  | 97%  |
| 176   | 95,011     | 15MB  | 4,132   | 16MB  | 98%  |
| 192   | 2,520      | 472KB | 120     | 480KB | 98%  |
| 208   | 7,503      | 1MB   | 395     | 1MB   | 96%  |
| 224   | 504        | 110KB | 28      | 112KB | 98%  |
| 240   | 7,513      | 1MB   | 442     | 1MB   | 99%  |
| 256   | 2,016      | 504KB | 127     | 508KB | 99%  |
| 272   | 17,010     | 4MB   | 1,134   | 4MB   | 99%  |
| 288   | 3,010      | 846KB | 215     | 860KB | 98%  |
| 320   | 504        | 157KB | 42      | 168KB | 93%  |
| 336   | 1,500      | 492KB | 125     | 500KB | 98%  |
| 352   | 1,001      | 344KB | 91      | 364KB | 94%  |
| 448   | 501        | 219KB | 57      | 228KB | 96%  |
| 512   | 6          | 3KB   | 1       | 4KB   | 75%  |
| 800   | 7,053      | 5MB   | 1,780   | 6MB   | 77%  |
| 1,024 | 1          | 1KB   | 1       | 4KB   | 25%  |
| 1,344 | 0          | 0B    | 1       | 4KB   | 0%   |
| 2,048 | 2          | 4KB   | 1       | 4KB   | 100% |
| 2,064 | 8          | 16KB  | 8       | 32KB  | 50%  |
| 2,736 | 1          | 2KB   | 1       | 4KB   | 66%  |
| 3,584 | 2          | 7KB   | 2       | 8KB   | 87%  |
| 6,160 | 4          | 24KB  | 4       | ?     | ?    |
| 8,464 | 7          | 57KB  | 7       | ?     | ?    |

Small objects (16 and 48 bytes) dominate the heap. Objects of 48 bytes or smaller account for 95% of the objects by count, or 77% by total size, and 83% of the heap is usable only for objects of these sizes. This is not very surprising, at least for a language like Mercury, where many objects are only a few words long. For example, a cons cell is the most common object (according to the heap attribution profile) and is exactly 16 bytes long.

They also have some of the lowest heap utilisation percentages: this is the percentage of used space within the blocks for objects of that size. A low percentage indicates that there is a lot of empty space that can only be used by objects of that size. The question marks indicate that I don’t know how BDWGC stores large objects, and so don’t know the sizes of their heap blocks.
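The utilisation column can be reproduced directly from the object and block counts. A minimal sketch, assuming BDWGC’s default 4KB heap blocks and truncating to a whole percent (the function name is my own):

```python
# Per-size-class utilisation: live object bytes / bytes reserved in blocks.
# Assumes BDWGC's default heap block size of 4KB (4096 bytes).
BLOCK_SIZE = 4096

def utilisation(obj_size, num_objects, num_blocks):
    used = obj_size * num_objects
    reserved = num_blocks * BLOCK_SIZE
    return 100.0 * used / reserved

# The 32 byte size class from Table 1: 747,766 objects in 16,185 blocks.
print(int(utilisation(32, 747_766, 16_185)))  # -> 36, matching the table
```

The same calculation gives 61% for the 16 byte class and 92% for the 48 byte class.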

Recall that the program attempted to allocate a 160 byte object. The utilisation of the 160 byte size class was 97%, but this is an artifact of how I collected the data, since I’ve coalesced blocks of different types together in these tables. The program crashed when the collector could not find a block of the correct size and type, and there were no empty blocks that it could initialise for that size and type.
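This failure mode can be illustrated with a toy model of a size- and kind-segregated allocator. This is a hypothetical sketch, not BDWGC’s actual code; it only shows why free space in one size class is useless to another:

```python
# Toy model: each heap block is dedicated to one (size, kind) class.
# Free space in a 16 byte block cannot satisfy a 160 byte request.
BLOCK_SIZE = 4096

class Heap:
    def __init__(self, empty_blocks):
        self.free_lists = {}             # (size, kind) -> count of free slots
        self.empty_blocks = empty_blocks # blocks not yet dedicated to a class

    def allocate(self, size, kind):
        key = (size, kind)
        if self.free_lists.get(key, 0) == 0:
            if self.empty_blocks == 0:
                # No free slot of this class, and no empty block that could
                # be initialised for it: the crash the program hit.
                raise MemoryError(f"no block for {size} byte {kind!r} objects")
            self.empty_blocks -= 1
            self.free_lists[key] = BLOCK_SIZE // size
        self.free_lists[key] -= 1
        return key  # a real allocator would return a pointer

heap = Heap(empty_blocks=1)
heap.allocate(16, "normal")  # dedicates the only empty block to 16 byte objects
# heap.allocate(160, "normal") would now raise MemoryError, even though
# 255 free 16 byte slots remain on the heap.
```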

Well, that’s the explanation of why the program crashed; a bit dull. The more interesting story is what is going on with the small object sizes. We can’t allocate 160 bytes, and yet there’s 288MB free, 277MB of which is reserved for objects 48 bytes or smaller! What’s going on with the spare memory in those smaller object blocks?
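The 277MB figure falls straight out of Table 1: the free space in each size class is the block space minus the object space.

```python
# Free (reserved but unused) space in the three small size classes,
# using the rounded MB figures from Table 1.
# Tuples are (obj_size, size_of_objects_MB, size_of_blocks_MB).
small = [(16, 354, 573), (32, 22, 63), (48, 194, 211)]
free_mb = sum(blocks - objs for _, objs, blocks in small)
print(free_mb)  # -> 277 (219MB + 41MB + 17MB)
```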

[Figure: Block usage distribution for 16 byte objects]
[Figure: Block usage distribution for 32 byte objects]
[Figure: Block usage distribution for 48 byte objects]

The first thing to note about these histograms is that I’ve had to scale the graphs. For example, there are 78,355 blocks with 256 objects in the first graph; you can see a thin bar on the far right that extends far beyond the top border. Scaling the graph and cutting off that bar was necessary to make the rest of the graph visible.

Excluding the full blocks, the block utilisation looks like a normal distribution, skewed towards mostly empty blocks for 16 and 32 byte objects, and towards mostly full blocks for 48 byte objects. Blocks for larger objects haven’t been shown; they are generally full or close to full. There are no empty blocks in any size category; presumably blocks are returned to the free block list as soon as they become empty. The following table provides a little more information about block utilisation.

Table 2. Block usage statistics

| Object size | Max objs/block | Partially full blocks (No.) | Partially full blocks (%) | Mean objects | Mean utilisation |
|------------:|---------------:|----------------------------:|--------------------------:|-------------:|-----------------:|
| 16 | 256 | 68,470 | 46% | 46 | 18% |
| 32 | 128 | 12,501 | 77% | 22 | 17% |
| 48 | 85  | 14,710 | 27% | 61 | 72% |
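These statistics can be derived from a per-block occupancy histogram like the ones graphed above. A sketch of the calculation; the histogram below is a made-up example, not the real data:

```python
# Derive Table 2 style statistics from a block occupancy histogram:
# hist[k] = number of blocks containing exactly k live objects.
def block_stats(hist, max_objs):
    # "Partially full" excludes both empty and completely full blocks.
    partial = {k: n for k, n in hist.items() if 0 < k < max_objs}
    num_partial = sum(partial.values())
    total = sum(hist.values())
    mean_objs = sum(k * n for k, n in partial.items()) / num_partial
    return (num_partial,
            100.0 * num_partial / total,   # % of blocks partially full
            mean_objs,                     # mean objects per partial block
            100.0 * mean_objs / max_objs)  # mean utilisation of those blocks

# Hypothetical histogram for 16 byte object blocks (max 256 objects/block):
# 3 nearly empty blocks, 1 mostly full, 6 completely full.
num, pct, mean, util = block_stats({10: 3, 200: 1, 256: 6}, max_objs=256)
print(num, pct, mean)  # -> 4 40.0 57.5
```

Note that the mean utilisation column in Table 2 is simply the mean object count divided by the block’s capacity, e.g. 46/256 ≈ 18% for the 16 byte class.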