-- Main.AdrianBowyer - 21 Nov 2005
Boolean Geometric Modelling, Open and Closed Sets, and the Nature of Physical Reality
Forget, for the moment, about boolean modelling of three-dimensional shapes and just think of the one-dimensional real number line. It has integers - 1, 2, 3 etc; fractions - 0.5, 0.73, 1.98 etc; and irrationals like pi and e (the irrationals, interestingly, being far more numerous than the first two - a bigger infinity...).
An interval is a chunk of the real line, for example all the infinitely many numbers between 1 and 2 inclusive. Mathematicians write this [1, 2]. But they also consider the almost identical interval that has all the numbers between 1 and 2 but not the ends; they write this (1, 2). The two are exactly the same length, but are not identical. The interval [1, 2] is called a closed interval and the interval (1, 2) is called an open interval. (The act of closure is turning an open interval into the corresponding closed one, if you were wondering where a current term in fashionable psychobabble originally came from...)
Intervals can also be closed at one end and open at the other: [1, 2).
Now. Consider treating intervals as infinite sets of numbers (which is what they are) and see what happens when you do set-theoretic boolean unions, intersections and differences on them.
First some simple ones (^ is intersection, u union, and - difference); think about the following and convince yourself that they are true:
- [0, 5] ^ [1, 2] = [1, 2]
- [0, 5] ^ (1, 2) = (1, 2)
- [0, 1] u [0.5, 7] = [0, 7]
- [0, 1] ^ (0.5, 7] = (0.5, 1]
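If you want to play with these on a computer, here is a minimal sketch - in Python, purely as an illustration; the tuple representation and the function names are made up for this page, not taken from any real modeller - that stores an interval as (lo, hi, lo_closed, hi_closed) and works out intersections. It reproduces the intersection examples above (union comes back a little further down).

```python
# A 1-D interval represented as (lo, hi, lo_closed, hi_closed), e.g.
#   [1, 2]  ->  (1, 2, True, True)
#   (1, 2)  ->  (1, 2, False, False)
#   [1, 2)  ->  (1, 2, True, False)

def intersect(a, b):
    """Set-theoretic intersection of two intervals.
    Returns another (lo, hi, lo_closed, hi_closed) tuple, which may be a
    single point (lo == hi), or None for the empty set."""
    alo, ahi, alc, ahc = a
    blo, bhi, blc, bhc = b
    # The larger lower bound wins; on a tie the end stays closed only if
    # both intervals contain it.
    if alo > blo:
        lo, lc = alo, alc
    elif blo > alo:
        lo, lc = blo, blc
    else:
        lo, lc = alo, alc and blc
    # The smaller upper bound wins, with the same rule for ties.
    if ahi < bhi:
        hi, hc = ahi, ahc
    elif bhi < ahi:
        hi, hc = bhi, bhc
    else:
        hi, hc = ahi, ahc and bhc
    if lo > hi or (lo == hi and not (lc and hc)):
        return None                      # the intervals miss each other entirely
    return (lo, hi, lc, hc)

def show(iv):
    """Print an interval tuple in the usual bracket notation."""
    if iv is None:
        return "null set"
    lo, hi, lc, hc = iv
    return f"{'[' if lc else '('}{lo}, {hi}{']' if hc else ')'}"

print(show(intersect((0, 5, True, True), (1, 2, True, True))))    # [1, 2]
print(show(intersect((0, 5, True, True), (1, 2, False, False))))  # (1, 2)
print(show(intersect((0, 1, True, True), (0.5, 7, False, True)))) # (0.5, 1]
```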
But what about this?
- [0, 1] - [0.5, 7] = [0, 0.5)
Subtracting the closed interval starting at 0.5 leaves us with an open interval that doesn't contain 0.5 (because we've subtracted it).
- [0, 1] - (0.5, 7] = [0, 0.5]
So subtracting an open interval leaves the result closed at that end, and subtracting a closed one leaves it open.
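Continuing the illustration (same made-up tuple convention, still nothing to do with any real system), here is a sketch of difference for the particular case in these examples, where the subtracted interval covers the right-hand end of the first one. The line to look at is the last one, where the result's end is closed exactly when the subtracted interval was open there.

```python
def subtract_right(a, b):
    """Sketch of a - b for the case in the examples above: b overlaps the
    right-hand end of a and covers it, so the result is at most one piece.
    (General difference can cut the middle out and leave two pieces.)"""
    alo, ahi, alc, ahc = a
    blo, bhi, blc, bhc = b
    assert bhi > ahi or (bhi == ahi and (bhc or not ahc)), "b must cover a's right end"
    # Does b actually reach into a at all?
    if blo > ahi or (blo == ahi and not (blc and ahc)):
        return a                                  # nothing removed
    # Does b swallow a completely?
    if blo < alo or (blo == alo and (blc or not alc)):
        return None                               # everything removed
    # Otherwise the result runs from a's start up to b's start, and it is
    # closed at that end exactly when b is open there - the open/closed flip.
    return (alo, blo, alc, not blc)

print(subtract_right((0, 1, True, True), (0.5, 7, True, True)))   # (0, 0.5, True, False)  i.e. [0, 0.5)
print(subtract_right((0, 1, True, True), (0.5, 7, False, True)))  # (0, 0.5, True, True)   i.e. [0, 0.5]
```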
What about these?
- [0, 0.5] u [0.5, 1] = [0, 1]
- [0, 0.5] u (0.5, 1] = [0, 1]
Both those work. But
- [0, 0.5) u (0.5, 1] does not equal [0, 1]
The solitary number 0.5 is missing - the resulting set has a little gap, so little that it has no thickness at all, but it is still there.
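And here is union in the same hypothetical sketch: two intervals join into one piece only if they overlap, or meet at an endpoint that at least one of them actually contains; otherwise the result stays as two pieces, and in the last example below the "gap" between them is just that one missing number 0.5.

```python
def union(a, b):
    """Union of two intervals in the (lo, hi, lo_closed, hi_closed) convention.
    Returns a list of one piece if they join up, or two pieces if there is a
    gap between them - even a gap consisting of one solitary missing number."""
    a, b = sorted([a, b])                 # put the one that starts first, first
    alo, ahi, alc, ahc = a
    blo, bhi, blc, bhc = b
    # They join up if they overlap, or if they meet at a shared endpoint that
    # at least one of them contains.
    if ahi > blo or (ahi == blo and (ahc or blc)):
        lo_closed = alc or (blo == alo and blc)
        hi, hi_closed = max((ahi, ahc), (bhi, bhc))
        return [(alo, hi, lo_closed, hi_closed)]
    return [a, b]

print(union((0, 0.5, True, True),  (0.5, 1, True, True)))   # one piece: [0, 1]
print(union((0, 0.5, True, True),  (0.5, 1, False, True)))  # one piece: [0, 1]
print(union((0, 0.5, True, False), (0.5, 1, False, True)))  # two pieces - 0.5 is missing
```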
Now what about these?
- [0, 0.5] ^ [0.5, 1] = the single number 0.5
- [0, 0.5) ^ [0.5, 1] = the null set
Neither of those results is an interval at all...
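(For what it is worth, the hypothetical intersect() sketch above reproduces this too: it returns the degenerate one-point tuple (0.5, 0.5, True, True) for the first and None for the second - neither of which is an interval with any extent.)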
Enough with the intervals already. But you can probably see where all this is going. It all works just the same in three dimensions when we do boolean operations on solids to make other solids. We can have closed solids (they include their surface), open ones (they don't), or mixed ones that include some of their surface but not all of it, like [1, 2).
And when we start doing boolean operations on these we create both open and closed sets, and - worse - two-dimensional slivers of no thickness (like the 0.5 on its own) and cracks of no thickness (like the interval that was just missing the one number 0.5). It's also not too hard to make one-dimensional wires, and the corresponding one-dimensional "holes".
The first attempt to fix this was made by Ari Requicha's research group at the start of geometric modelling back in the day. They came up with the idea of regularized boolean operators. These were just like ordinary union, intersection, and difference, but they were defined in such a way that they always resulted in fully-closed, fully-three-dimensional objects: formally, the regularized result is the closure of the interior of the ordinary set-theoretic result. Everything that would give just two- or one-dimensional results was defined always to give the null set.
From the mathematical standpoint, this fixed the whole problem completely.
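In one dimension that closure-of-the-interior rule is easy to write down; here is a sketch of it on the same made-up tuple representation as above (the hard part, of course, is doing the equivalent reliably in three dimensions):

```python
def regularize(result):
    """1-D analogue of a regularized boolean result: the closure of the
    interior. A lone point or an empty set has no interior, so it vanishes;
    anything with real extent comes back with both ends closed."""
    if result is None:
        return None
    lo, hi, lo_closed, hi_closed = result
    if lo >= hi:               # a single point (or nothing): interior is empty
        return None
    return (lo, hi, True, True)

print(regularize((0.5, 0.5, True, True)))   # None: the stray point from [0, 0.5] ^ [0.5, 1] goes away
print(regularize((0.5, 1, False, True)))    # (0.5, 1, True, True): (0.5, 1] closes up to [0.5, 1]
```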
But unfortunately, it didn't help much when people came to write geometric modelling software to run on real computers. The difficulty is that floating-point arithmetic can't distinguish between an open and a closed interval because it only works to finite precision. You can't get round the problem by working with rationals (one arbitrarily-long integer divided by another); these can do all fractions with perfect accuracy, but sooner or later a square-root drops in and we're in the world of irrational numbers; the machine still can't be precise and still can't distinguish open and closed.
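A tiny illustration of both halves of that, in Python on a typical IEEE-754 machine (the exact digits printed may vary elsewhere):

```python
from fractions import Fraction
import math

# Rationals are exact for anything built purely out of fractions...
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
print(0.1 + 0.2 == 0.3)                                       # False in binary floating point

# ...but the first square root pushes us off the rationals, and the machine
# has to round. Squaring the rounded answer does not get 2 back exactly.
root2 = math.sqrt(2.0)
print(root2 * root2)          # 2.0000000000000004 on a typical IEEE-754 machine
print(root2 * root2 == 2.0)   # False
```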
The next fix was the one that all you programmers out there in cyberland have been screaming since I introduced the problem, like children at a pantomime yelling, "Behind you!" when the baddie appears. That fix was to put in a fudge-factor. Thus, if we do
- [0, 0.99997] u [1, 2] we get [0, 2]
if our fudge factor is 0.00004 or bigger.
We make the fudge factor some small fraction of a typical dimension of the three-dimensional object that we're trying to build using boolean operations, and pray.
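Here is what that fudge looks like in the one-dimensional sketch (the tolerance and the function are invented for the example, not taken from any real modeller):

```python
def fuzzy_union(a, b, eps=1e-4):
    """Union of two closed intervals (lo, hi) that treats any gap no bigger
    than the fudge factor eps as if it were no gap at all."""
    (alo, ahi), (blo, bhi) = sorted([a, b])
    if blo - ahi > eps:                     # a real gap, bigger than the fudge
        return [(alo, ahi), (blo, bhi)]
    return [(alo, max(ahi, bhi))]

print(fuzzy_union((0.0, 0.99997), (1.0, 2.0)))             # [(0.0, 2.0)] - the tiny gap is fudged away
print(fuzzy_union((0.0, 0.99997), (1.0, 2.0), eps=1e-5))   # two pieces   - too small a fudge factor
```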
Of course, our prayers aren't answered, for it is written by the Prophet Allen that, "Not only is there no God, but try getting a plumber on weekends." Think about unioning two objects with flat surfaces that are almost common, but at a slight angle to each other so that they are within the fudge distance at one end, but not at the other. We're back with our thin-sliver crack again. There are a host of other examples - in fact literally an infinite number of them. And then there are curved surfaces... Some systems even try to have a fudge factor on the fudge factor, which is some sort of counsel of desperation.
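A small worked example of that failure, with invented numbers for both the tolerance and the angle: two nominally-coincident flat faces at a tiny angle to each other are within the fudge distance near the line where they meet, but not further away, so only part of the joint gets merged and the rest is left as a crack.

```python
import math

eps   = 1e-4                     # hypothetical fudge factor
theta = math.radians(0.01)       # two "coincident" flat faces, 0.01 degrees apart

# Walk along the faces away from the line where they actually touch.
# The gap between them grows as x * tan(theta), so the fudge factor merges
# them near that line and fails to merge them further out.
for x in (0.1, 0.5, 1.0, 2.0):
    gap = x * math.tan(theta)
    verdict = "merged" if gap <= eps else "left as a crack"
    print(f"x = {x:>3}:  gap = {gap:.6f}  ->  {verdict}")
```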
No-one has yet figured out a foolproof way to have a computer do boolean operations on solids and always have them come out as a sensible object. The best systems are ones that do have fudge factors, understand the nature of regularized booleans, and also allow the representation of non-manifold objects.
Non-manifold objects are ones that are allowed to have thin slivers and thin wires in them, and the system represents them as consistently as it can. But they are usually explicitly put in, not generated by accident as a result of - for example - a non-regularized intersection.
It is my opinion that it is not possible - in a finite computer - ever to eliminate these problems completely and to have fully-consistent and always-unambiguous three-dimensional boolean solids.
And that gives rise to a much more interesting question: are there similar necessary constraints for having objects of real three-dimensional extent in the real universe? The first, and most obvious, one is having all matter made of discrete atoms as opposed to an infinitely-divisible continuum - that is like the finite precision of floating-point arithmetic. The second is quantum mechanics - quantum uncertainty is like the boolean-modelling fudge factor. We evolved in a universe that - for that evolution to happen - had to have solids, liquids, gases and a host of other physical entities and processes. Other universes might be different, and have different entities and processes, and have different - or no - evolution of information copying. But, if they contain phenomena of geometric extent, do they always have to have micro-structures analogous to atoms and quantum mechanics for the same sort of reasons that our geometric modelling software seems to need them?