

Basically, it's what happens when a number is too big to fit inside the integer "type" that has been dimensioned inside the object's memory space. The usual sizes are 8-bit, 16-bit, and 32-bit integers, each of which comes in "signed" and "unsigned" flavors. One of the bits gets sacrificed to say whether the number is negative or not, which roughly cuts the maximum value in half.



Eg, an unsigned 8-bit integer maxes out at 255. If it is signed, it runs from -128 to 127 (one extra on the negative side, in the usual two's complement scheme).



For 16-bit it is 0 to 65535 unsigned, or -32768 to 32767 signed.



32-bit is a value between 0 and 4,294,967,295--a bit under 4.3 billion--

You get the idea.



Here, we *may* have something like this going down:



If you try to add a number greater than 255 to, let's say, an 8-bit unsigned integer, it is guaranteed to overflow, because the memory space can't hold more than 8 bits. The extra bits "overflow" into the next byte over in memory, which could be data for some other attribute.



Let's say, for the sake of argument (since I haven't looked at the save), that the falling goblin hits the spear trap, and receives a value for damage that is calculated based on the height of his fall, his armor rating, and the parameters of the weapon in the trap. Good. Now, this number is "presumed" to never be bigger than a certain value, based on the integer type Toady picked for the object's memory space. If the amount of damage is greater than that, the math op will attempt to subtract a value that is bigger than the memory space allows, and part of the math gets performed on the next value over in memory.

So, let's again presume that Toady picked uint8 as the integer type. However, the damage of the fall is something like 512, or some similarly big number, since it is calculated on the fly. When the game subtracts that value, only the bottom 8 bits of the computation hit the goblin's HP, and the rest of the bits get applied next door. If the bottom 8 bits only add up to something like 1 or 2, the goblin takes very little damage from the fall and trap, and the upper bits instead get applied to the next memory byte over, which could well be his weapon skill's memory location. Thus the fall's damage calculation overwrites the old value of his weapon skill, because the amount of damage is just too big to be handled correctly.





Observe:



We have 3 values next to each other in memory.



XXXX, YYYY, ZZ



The first is a 16-bit value, the next is another 16-bit value, and the last is an 8-bit value.

We'll fictionally assign the following to each location:



uint16: Armor User

uint16: primary weapon skill

uint8: HP



And some fictional values...



00F0, 0020, DE



(He's a pretty tough goblin, after all.)



Ok, our deadly spike trap deals, after all damage calcs are done, something like 80,000 points of damage.



In hexadecimal, that is a value of 0x13880.



Because the math overflows bigtime, any checking routines on the other 2 stat values won't get fired; the code currently executing is handling the damage computation instead.



So, rather than seeing 3 discrete memory areas, the computation treats all 3 locations as one big value to subtract from.



Eg, subtract 0x13880 from 0x00F00020DE, yielding 0x00EFFEE85E



When we break that back down into how the game interprets that value after it overwrites those 3 memory spaces, we get:



00EF, FEE8, 5E



Or, in other words:



Gobbo McGoblin just went from having a primary weapon skill of 32 to one of 65256, just like that. His armor user dropped by one point, and his HP went from 222 points to 94.



I don't know if that's what's happening here or not. I need data to dig through.

