Why We're Not Allowed To Divide By Zero

Weekend fun with numbers.
Image: US Navy

In September of 1997, the USS Yorktown, a US Navy warship, experienced "an engineering local area network casualty" during maneuvers off the coast of Virginia. The missile cruiser, a prototype for the Navy's PC-based Smart Ship program, had quite suddenly become dead in the water, with its propulsion systems rendered useless. Reportedly, the ship drifted helplessly for over two hours before the sailors managed to regain control. The enemy in this case was a single "0."

The Yorktown's Smart Ship setup consisted of 27 200-MHz Pentium Pro PCs running Windows NT and connected via a high-speed fiber-optic network. The goal was automation, with the computers replacing about 10 percent of the cruiser's sailors and, thus, saving the Navy some $2.8 million a year.

The failure was traced to a crew member entering the number 0 into a database entry field. That led the software to attempt a division by 0 and crash as the result of a subsequent buffer overrun, a computing error in which a program writes past the end of the memory allocated to it and starts overwriting neighboring data, with all sorts of possibly undesirable outcomes, such as rendering a warship useless. The error spread throughout the network.

Division by zero is a peculiar thing, but especially so within computing. If, during the execution of a given bit of software, a computer encounters an attempt to divide by 0, that 0 can act like a stick jabbed through the spokes of a bicycle: one thing stops executing (the wheel), then another thing that depends on the first thing (the bike frame itself) stops, and then, finally, the person on the bike just gets tossed off as the whole bike fails.

Ideally, the programmer has ensured that division by zero just isn't possible within their bit of software, but programming languages themselves often act as a backup just in case. If a Java program, say, attempts to divide an integer by zero, it will throw what's called an exception, which is bad, but not nearly as bad as crashing.

When an exception is thrown, rather than allowing the illegal zero to go through and break everything, the offending operation is halted and control is handed to error-handling code, so the problem can be dealt with and execution can go on normally afterward, with nothing actually busted. If division by zero is encountered in a more stripped-down, "fundamental" (lower-level) language like C or C++, the result is "undefined behavior." It all depends on the context of the division itself (what depends on that particular division, and why?), and the result might be no problem at all or a foundering warship.
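To make that concrete, here's a minimal sketch in Java (a hypothetical snippet, not anything from the Yorktown's actual software) showing an integer division by zero being caught as an exception instead of taking the whole program down:

```java
public class DivideByZeroDemo {
    public static void main(String[] args) {
        int bills = 10;
        int people = 0; // imagine this value arrived from a database field

        try {
            int share = bills / people; // integer division by zero
            System.out.println("Each person gets " + share);
        } catch (ArithmeticException e) {
            // Java's runtime throws ArithmeticException ("/ by zero") here,
            // so we can report the problem and keep going
            System.out.println("Can't divide by zero: " + e.getMessage());
        }

        System.out.println("The program carries on normally.");
    }
}
```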

What exactly is the problem with division by zero in the first place? Our intuition tells us first off that division by 0 is pointless or absurd; dividing something into nothing just isn't a real concept. And yet we are allowed to divide by plenty of other things that might also seem absurd, like irrational numbers such as pi and the square root of 2. In extended number systems, and in floating-point arithmetic, we can even divide by infinity or negative infinity. And then there are the imaginary numbers, a collection of numbers used in math that don't exist in the real world of numbers and counting. There is no real square root of -1, but we use one anyway.

To see the problem, we need to restate what division even is and look at it through the lens of multiplication; every division is just a rearranged multiplication. So, if you have the division 10 / 5 = 2, algebraically that's the same thing as 10 = (5) * (2), right? The two equations say the exact same thing.

Now try 10 / 0 = x, where x is any value imaginable. That would mean that x * 0 is 10, but we know that 0 times anything is just 0. The result is a contradiction, and that contradiction is the actual mathematical proof that division by zero is invalid.
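Written out as a compact proof by contradiction, the same argument looks like this:

$$
\frac{10}{0} = x \;\iff\; 10 = x \cdot 0, \qquad \text{but} \qquad x \cdot 0 = 0 \neq 10.
$$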

Division by zero has no meaning within the very definition of division.

The proof can be restated in even more intuitive terms. Think about what division is, just the word "division." It's separating things from other things, essentially repeated subtraction. Imagine you were handed 10 dollar bills and told to take them into the next room and distribute them evenly among the people there. You will hand out some number of bills at a time, subtracting that number from the total, starting at 10. You will have finished the division task only when all the bills have been handed out, naturally. This seems like an easy enough thing to do.

Or, rather, it would be easy if there were any people at all in the room. It's empty (and you don't count). How do you divide a number of things among no things? Indeed, when we say that division by zero is undefined, it's meant in the most literal sense. Division by zero has no meaning within the definition of division. It's fun to think about.
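That intuition translates directly into code. Here's a toy sketch (purely illustrative, assuming whole bills and whole people) of division as repeated subtraction; with zero people in the room, the total never shrinks, so an answer never arrives:

```java
public class RepeatedSubtraction {
    // Hand out `people` bills per round (one to each person) until none remain.
    // The number of rounds is the quotient: bills per person.
    // WARNING: with people == 0 the total never shrinks, so this loops forever;
    // the procedure that defines division simply has no answer.
    static int divide(int bills, int people) {
        int perPerson = 0;
        while (bills > 0) {
            bills -= people;
            perPerson++;
        }
        return perPerson;
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 5)); // prints 2
        // divide(10, 0); // would never return
    }
}
```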

As thought-dessert, consider one special case of sorts. Back when we were talking about computers and 0, something got left out. Division by zero is usually illegal only for integer data types, that is, when we're dividing whole numbers with no leftover decimals or fractions. Decimal numbers, known as floating-point numbers, often have a sort of fudge built in. So, if we're dealing with a floating-point data type, the language will allow not just a plain zero but positive and negative zeros. It works out as such: dividing a positive number by positive 0 yields positive infinity, while dividing it by negative 0 yields negative infinity.
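In Java, for example, the behavior is easy to see with doubles (a small illustrative snippet):

```java
public class SignedZeroDemo {
    public static void main(String[] args) {
        System.out.println(10.0 / 0.0);   // Infinity
        System.out.println(10.0 / -0.0);  // -Infinity
        System.out.println(0.0 == -0.0);  // true: the two zeros compare as equal,
                                          // yet dividing by them gives opposite infinities
    }
}
```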

The point of this is to ensure that positive and negative signs are preserved when a program is dealing with very, very, very small non-zero numbers. It's possible for a number to be so small that the data type can't represent it within its allocated memory, and it instead winds up stored as a zero, with the numerical substance rounded away. The result of dividing by such a small number would likewise be too big to store, so we get infinity. It kind of makes sense.
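The sign-preservation point can be sketched the same way (hypothetical values, chosen only to force underflow): a non-zero result too small for a double rounds to zero, but the zero keeps its sign, and dividing by it yields an infinity with the matching sign.

```java
public class UnderflowDemo {
    public static void main(String[] args) {
        double tiny = 1e-200 * 1e-200;      // true value 1e-400 is too small for a double...
        double negTiny = -1e-200 * 1e-200;  // ...so both products underflow to zero

        System.out.println(tiny);           // 0.0
        System.out.println(negTiny);        // -0.0  (the sign survives)

        System.out.println(10.0 / tiny);    // Infinity
        System.out.println(10.0 / negTiny); // -Infinity
    }
}
```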