In the 360 blog (sub-heading, “12 tables, 24 chairs, and plenty of chalk”), blogger Ξ (Xi) recently wrote about “Ethiopian Multiplication” (and followed it up with a series of interesting posts on different ways to multiply, here, here, here, and here, so far).

Here’s the “Ethiopian” version, as Xi tells it:

Here’s the basic idea: Suppose you want to multiply two numbers like 14 and 12. You could use your fingers, of course, but here’s another way: Start with the two numbers on top. Halve one, ignoring any remainders or fractions, and double the other, stopping when you get to 1.

14 & 12
 7 & 24
 3 & 48   [See how I ignored the fact that halving 7 leaves 1 left over?]
 1 & 96   ← Stop here.

Now look at the numbers on the right. Some are across from an even number: in this case, 12 is across from the original 14. Ignore those, and add the rest. So we’ll add 24, 48, and 96, which were across from odd numbers, and get 168. And that’s the product! Isn’t that cool?

It’s cool, because it’s binary... but it’s not at all surprising that it works. Look at what we’ve really done: In the left column, we’ve represented the multiplier (14, here) in binary, where even numbers represent 0 and odds represent 1. In the right column, we’ve multiplied the multiplicand (12, here) by the corresponding powers of 2. Thus:

0   14   12   (12 × 2⁰)
1    7   24   (12 × 2¹)
1    3   48   (12 × 2²)
1    1   96   (12 × 2³)

By “ignoring” the numbers on the right that match even numbers on the left, we’re omitting the entries that correspond to a binary zero.
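The whole procedure is short enough to sketch directly. Here’s a minimal Python version of the halve-and-double method (the function name is my own):

```python
def ethiopian_multiply(a, b):
    """Multiply a and b by repeated halving and doubling."""
    total = 0
    while a >= 1:
        if a % 2 == 1:   # odd row: this one counts toward the sum
            total += b
        a //= 2          # halve, ignoring any remainder
        b *= 2           # double
    return total

print(ethiopian_multiply(14, 12))  # prints 168
```

The `if a % 2 == 1` test is exactly the “ignore rows across from an even number” rule.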

Think about how we learn to do long multiplication in school. If we multiply 12 by 14 using that method, it looks like this:

    1 2
  × 1 4
  -----
    4 8      (12 × 4 × 10⁰)
  1 2        (12 × 1 × 10¹)
  -----
  1 6 8

We use the visual left-shift on the second line of the answer to represent multiplication by another 10 (the shifted “12” really means “120”).
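The same shifting idea can be written out in Python for base ten (a sketch; `long_multiply` is a name I’ve made up):

```python
def long_multiply(a, b):
    """Schoolbook long multiplication: one partial product per decimal
    digit of a, shifted left (multiplied by a power of 10) by place."""
    total = 0
    place = 0
    while a > 0:
        digit = a % 10                     # lowest remaining digit of a
        total += b * digit * 10 ** place   # shifted partial product
        a //= 10
        place += 1
    return total

print(long_multiply(14, 12))  # 48 + 120 = 168
```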

And that’s exactly what the Ethiopians were doing, except instead of doing it in decimal (powers of ten), they were doing it in binary (powers of 2):

        1 1 0 0   (binary representation of 12)
      × 1 1 1 0   (binary representation of 14)
---------------
        0 0 0 0   (12 × 0 × 2⁰)
      1 1 0 0     (12 × 1 × 2¹)
    1 1 0 0       (12 × 1 × 2²)
  1 1 0 0         (12 × 1 × 2³)
---------------
1 0 1 0 1 0 0 0   (binary representation of 168)
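The correspondence is easy to check in Python: the rows we keep are just the multiplicand shifted left once per bit position, at exactly the positions where the multiplier has a 1 bit (the variable names here are my own):

```python
n, m = 14, 12  # multiplier and multiplicand from the example

# One candidate row per bit of n; keep a row only where that bit is 1.
rows = [m << k for k in range(n.bit_length()) if (n >> k) & 1]

print(rows)       # → [24, 48, 96] — the values across from odd numbers
print(sum(rows))  # → 168
```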

Now, why the Ethiopians did their multiplication in binary isn’t clear, but one can surmise that it’s easier to learn to double and halve arbitrary numbers than it is to learn the units multiplication table (quick!: What’s 8 times 7?).

But here’s the thing: now that we’re using digital computers, we can readily see that this is essentially how computers do multiplication internally. Basic logic circuits that add or shift binary numbers are easy to build, and that’s really all this method needs: a one-bit shift to the left doubles, a one-bit shift to the right halves, and then we add things up at the end (or, really, as we go). So to look at the algorithm a computer would use, let’s assume we have a computer with at least three “registers”, and a “condition” indicator that’s set if we shift a “1” bit off the end of a register.

01  Load first number into R1
02  Load second number into R2
03  Set R3 to zero
04  Shift R1 right one bit
05  If condition is not set, jump to 07
06  Add R2 into R3
07  If R1 is zero, jump to 10
08  Shift R2 left one bit
09  Jump to 04
10  Store R3 as answer

If we’re robust about it, we’ll also check for an overflow after step 06 (in case the answer is too large for our registers), but that’s basically it. It’s a simple, fast method, requiring only the simplest computer hardware instructions.
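That register program translates almost line-for-line into Python. This is a sketch: the function name and the `bits` register-width parameter are my own, and the overflow check is the one suggested after step 06.

```python
def multiply_shift_add(r1, r2, bits=16):
    """Simulate the ten-step shift-and-add register program."""
    r3 = 0
    while True:
        cond = r1 & 1        # 04: shift R1 right; cond = the bit shifted off
        r1 >>= 1
        if cond:             # 05/06: add R2 into R3 when the bit was 1
            r3 += r2
            if r3 >= 1 << bits:
                raise OverflowError("answer too large for our registers")
        if r1 == 0:          # 07: nothing left to do
            return r3        # 10: store R3 as the answer
        r2 <<= 1             # 08: shift R2 left one bit

print(multiply_shift_add(14, 12))  # prints 168
```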

To see how it works with the 14 * 12 example, we’ll run through it:

Instruction                               R1   R2   R3   cond
01 Load first number into R1              14
02 Load second number into R2             14   12
03 Set R3 to zero                         14   12    0
04 Shift R1 right one bit                  7   12    0   no
05 If condition is not set, jump to 07                   no (jump)
07 If R1 is zero, jump to 10                             (no jump)
08 Shift R2 left one bit                   7   24    0   no
09 Jump to 04
04 Shift R1 right one bit                  3   24    0   yes
05 If condition is not set, jump to 07                   yes (no jump)
06 Add R2 into R3                          3   24   24
07 If R1 is zero, jump to 10                             (no jump)
08 Shift R2 left one bit                   3   48   24
09 Jump to 04
04 Shift R1 right one bit                  1   48   24   yes
05 If condition is not set, jump to 07                   yes (no jump)
06 Add R2 into R3                          1   48   72
07 If R1 is zero, jump to 10                             (no jump)
08 Shift R2 left one bit                   1   96   72
09 Jump to 04
04 Shift R1 right one bit                  0   96   72   yes
05 If condition is not set, jump to 07                   yes (no jump)
06 Add R2 into R3                          0   96  168
07 If R1 is zero, jump to 10                             (jump)
10 Store R3 as answer                      0   96  168

The interesting thing is that if you ask beginning computer students to write a low-level program for multiplication, most of them will use the old lesson that “multiplication is repeated addition”, and do something like this:

01  Load first number into R1
02  Load second number into R2
03  Set R3 to zero
04  If R1 is zero, jump to 08
05  Add R2 into R3
06  Decrement R1
07  Jump to 04
08  Store R3 as answer

The program is two steps shorter... but it will take much longer to run, because the number of loop iterations depends on the value of the number rather than on the number of its bits (that is, this version is order n, while the shift-and-add version is order log(n)).
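In Python, the students’ repeated-addition program also translates line-for-line (again a sketch, with my own naming):

```python
def multiply_repeated_add(r1, r2):
    """'Multiplication is repeated addition': add R2 into R3, R1 times."""
    r3 = 0
    while r1 != 0:   # 04: if R1 is zero, we're done
        r3 += r2     # 05: add R2 into R3
        r1 -= 1      # 06: decrement R1
    return r3

# Both versions give 168 for 14 × 12, but this loop body runs 14 times,
# while the shift-and-add loop runs only 4 times (the bit length of 14).
```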