Zero-crossing logic for robust meshing

When designing with implicit surfaces and distance fields, we eventually want to generate a mesh from the pure math structures.

There are many algorithms for meshing distance fields, and they usually involve sampling the field on some kind of regular structure (like a grid or octree). In general, when evaluating at a point $ \left(x, y, z\right) $, we declare the point to be inside the model if $f(x, y, z) < 0$ and outside the model if $f(x, y, z) > 0$.

There's an ambiguity if the function is exactly zero at a grid point. How do we resolve such cases?
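In code, that convention is usually something like the sketch below (plain Python; the `classify` helper is just for illustration):

```python
def classify(f, x, y, z):
    """Naive per-sample classification of an implicit distance field f."""
    d = f(x, y, z)
    if d < 0:
        return "inside"
    if d > 0:
        return "outside"
    return "ambiguous"  # d == 0: exactly on the surface, the case this post is about
```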

The naive solution

At first glance, you may think that you could pick one or the other ("zero is inside" or "zero is outside").

Let's explore that strategy and see where it breaks down.

Consider the following shape: $$ f(x, y, z) = \max(\max(y - 1, -1 - y), \min(\max(-1 - x, x), \max(x - 1, -x))) $$
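For reference, here's that field transcribed directly into Python; plain `min` and `max` are all we need:

```python
def f(x, y, z):
    # max of the shared y bounds with the min (union) of two x slabs
    return max(max(y - 1, -1 - y),        # -1 <= y <= 1
               min(max(-1 - x, x),        # left rectangle:  -1 <= x <= 0
                   max(x - 1, -x)))       # right rectangle:  0 <= x <= 1

assert f(0.5, 0.0, 0.0) == -0.5   # inside
assert f(1.5, 0.0, 0.0) == 0.5    # outside
```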

This looks like a pair of 1x2 rectangles, pressed directly against each other on the Y axis. Here's a height-map visualization of this distance field, where the inside of the model is blue and the outside is red:

(Figure: Squares)

If we want to turn this distance field into a set of 2D contours, I'd expect to get one contour out (which traces the square from -1 to 1 on both axes):

(Figure: Arrow)

As you may notice, there's a crack down the middle where the field's value is zero. No problem, you say: field values of zero should be treated as inside the shape.

Now, the trap springs shut. Consider $ g(x, y, z) = -f(x, y, z) $:

(Figure: Inverted squares)

By symmetry, we'd expect a 2D contour from this distance field to be the same but for the winding direction (which should be reversed). However, because the zero-valued crack is treated as "inside", the contouring gives us two rectangles instead of one.

(Figure: Inverted arrow)
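To make the failure concrete, here's the naive "zero is inside" rule applied to a point on the crack, using the `f` defined above (`naive_inside` is just a throwaway name):

```python
naive_inside = lambda d: d <= 0.0     # "zero is inside"

crack = f(0.0, 0.5, 0.0)              # a point on the crack: the field is exactly zero
print(naive_inside(crack))            # True: the crack joins the two rectangles of f
print(naive_inside(-crack))           # True again: for g = -f the crack also counts as
                                      # inside, which is what splits g into two rectangles
```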

Clearly, a more subtle approach is necessary.

The robust solution

The robust solution is as follows:

Given a point $ p = (x, y, z) $ where $ f(p) = 0$, $p$ is inside the model if and only if $f(p + \epsilon) < 0$ for every sufficiently small, nonzero offset $\epsilon$.

For intuition, think about searching in the neighborhood of a particular point. If every point in the neighborhood is inside the model (i.e. negative), then we're inside the model; otherwise we're outside.

Here's a 1D slice through our previous 2D distance field, on $y = 0$:

(Figure: 1D slice)

As you can see, at $x = 0$, no matter which direction we go, we'll end up with a negative value.

Though the 1D case is obvious, in 2D and 3D, we can't check every possible offset $\epsilon$. In addition, we don't want to pin the test to a particular characteristic length. Given these constraints, your automatic differentiation sense may be tingling: is there anything clever we can do with the function's derivatives?

At $x = 0$, we immediately see an issue: the function isn't $C^1$ continuous. Ignoring unused clauses, we have

$$ f(x) = \min(x, -x) $$

so the derivative $f'(x)$ is ambiguous at $x = 0$: it is $1$ if we approach from the left and $-1$ if we approach from the right.

Notice, however, that for some small value $\delta > 0$, the ambiguity disappears: at $x = 0 - \delta$, the $\min$ unambiguously takes its first branch ($x$), while at $x = 0 + \delta$ it takes its second branch ($-x$). A small offset tells us which derivative applies.

Generalizing this observation, for every $C^1$-discontinuous math node

$$ n_i(p) = \min\left(a(p), b(p)\right) \text{ where } a(p) = b(p) $$

we can find some $e_i = b'(p) - a'(p)$ such that when we move incrementally in the direction $e_i$, the node selects the $a$ branch (and therefore the derivative $a'$) over $b$.

(similar logic applies for $\max$, but with $e_i = a' - b'$)
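To make this concrete, here's a sketch of a forward-mode evaluator that collects these directions as it goes. The nested-tuple expression format and the `evaluate` helper are inventions for this example, not any particular library's representation:

```python
import numpy as np

def evaluate(expr, p, ambiguous):
    """Evaluate a toy expression tree at point p, returning (value, gradient).

    Whenever a min or max node's two arguments are exactly equal, the direction
    that favors the node's first branch is appended to `ambiguous`.
    """
    op = expr[0]
    if op == "var":                            # ("var", axis_index)
        grad = np.zeros(len(p))
        grad[expr[1]] = 1.0
        return p[expr[1]], grad
    if op == "const":                          # ("const", value)
        return expr[1], np.zeros(len(p))
    if op == "neg":
        v, g = evaluate(expr[1], p, ambiguous)
        return -v, -g
    a, ga = evaluate(expr[1], p, ambiguous)
    b, gb = evaluate(expr[2], p, ambiguous)
    if op == "add":
        return a + b, ga + gb
    if op == "min":
        if a == b:
            ambiguous.append(gb - ga)          # e_i = b' - a': moving along it selects branch a
        return (a, ga) if a <= b else (b, gb)
    if op == "max":
        if a == b:
            ambiguous.append(ga - gb)          # for max, e_i = a' - b'
        return (a, ga) if a >= b else (b, gb)
    raise ValueError(f"unknown op: {op!r}")

# The 1D slice we just looked at, f(x) = min(x, -x), evaluated at x = 0:
amb = []
value, _ = evaluate(("min", ("var", 0), ("neg", ("var", 0))), np.array([0.0]), amb)
print(value, amb)                              # 0.0 [array([-2.])], i.e. e_1 = -2
```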

If we apply this logic to every $C^1$-discontinuous math node, we end up with a set of directions $e_1, e_2, ... e_n$ that favor the first branch for every discontinuous node. Their negations $-e_1, -e_2, ... -e_n$ favor the second branch.

We can pick a set of branch decisions $\left[\pm e_1, \pm e_2, ... \pm e_n\right] $, which implies $2^n$ possible combinations. In other words, we want to pick a set of sign bits $\left[s_1, s_2, ... s_n\right] $ where $s_i \in \{-1, 1\}$. Relating this back to design representations, we'll call such a set a feature.

However, not all features are legitimate. Consider the expression $$ h(x) = \min(x, 2x) + \max(x, 2x) $$

Evaluated at $ x = 0 $, we find $e_1 = 1$ for the $\min$ node and $e_2 = -1$ for the $\max$ node.

At first glance, this implies $2^2 = 4$ possible options for the local slope: $1 + 1 = 2$, $1 + 2 = 3$, $2 + 1 = 3$, and $2 + 2 = 4$.

Let's go to the graph:

(Figure: Tricky)

Surprisingly, we see a constant slope of 3. This is because some of the $e_i$ combinations are mutually exclusive; only the ones with slope 3 are valid.

This can be justified intuitively: if you offset $x$ such that $\min$ takes its first branch, then $\max$ will take its second branch, and vice versa. The impossible sets of $e_i$ values jitter $x$ one way for $\min$ and the other way for $\max$, which is nonsensical: $x$ is a single value, so it can only be offset in a single direction.
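A quick numeric check of that constant slope (plain Python, nothing beyond the formula):

```python
h = lambda x: min(x, 2 * x) + max(x, 2 * x)
for x in (-0.5, -0.1, 0.1, 0.5):
    print(x, h(x) / x)    # 3.0 every time: min + max of two values is just their sum
```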

Now, we come to the elegant geometry problem at the heart of the matter: how do we check whether a feature is internally consistent?

Remember our earlier set of sign variables $s_1, s_2, ... s_n$, which are either $1$ or $-1$.
This set of sign variables is consistent if there exists some vector $v$ such that

$$ (s_ie_i)\cdot v > 0 \textsf{ for all } i \in [1,n] $$

This seems like magic, but emerges from the structure: when this condition is met, each $\min$ and $\max$ node goes down the chosen branch when evaluated with an offset of $v$. For inconsistent cases, finding such a $v$ is impossible.

Actually finding such a $v$ is left as an exercise for the reader. There's an obvious $O(n^3)$ solution, and many more clever options; if you find an $O(n)$ solution, send me an email!
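Just as an illustration (and definitely not one of the clever options), the strict inequalities can be posed as a small linear program: maximize $t$ subject to $(s_i e_i) \cdot v \ge t$ and $|v_j| \le 1$, and accept the feature if the optimum has $t > 0$. The sketch below does this with `scipy.optimize.linprog`; the helper name is my own:

```python
import numpy as np
from scipy.optimize import linprog

def find_consistent_direction(dirs):
    """Return a vector v with d . v > 0 for every d in dirs, or None if no such v exists."""
    dirs = np.asarray(dirs, dtype=float)
    m, dim = dirs.shape
    c = np.zeros(dim + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -t
    A_ub = np.hstack([-dirs, np.ones((m, 1))])    # -d_i . v + t <= 0   <=>   d_i . v >= t
    b_ub = np.zeros(m)
    bounds = [(-1.0, 1.0)] * dim + [(0.0, 1.0)]   # box-bound v, and 0 <= t <= 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if res.success and res.x[-1] > 1e-9:
        return res.x[:-1]
    return None

# The h(x) example above: e_1 = 1 (min node) and e_2 = -1 (max node).
# Only the mixed sign choices are consistent, matching the constant slope of 3.
for s1, s2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    v = find_consistent_direction([[s1 * 1.0], [s2 * -1.0]])
    print((s1, s2), "consistent" if v is not None else "inconsistent")
```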


Okay, let's take a deep breath and remember where we are in the problem stack: we've figured out a way to check if a specific feature is valid. A feature consists of a sign choice $s_i \in \{-1, 1\}$ for each $C^1$-discontinuous node, which, together with that node's direction $e_i$, selects the branch that the node takes.

Next, for each valid feature, we check whether adding the top-level derivative ($h'$ in the example above) to the set (as $e_{n+1} = h'$, $s_{n+1} = 1$) makes the set invalid.

If this makes the set invalid, then the local gradient is negative for this particular feature: when we move in the direction prescribed by the feature, the top-level distance value must go down.

If the local gradient is negative for every valid feature, then we've met our top-level condition for checking whether that point is within the model!

That's it: we've arrived back at our highest-level rule.

Let's work through our earlier example $$ f(x) = \min(x, -x) \textsf{ at }x = 0 $$ We find $e_1 = \frac d {dx}\left(-x\right) - \frac d {dx} x = -2$.

For the feature $\left[s_1 = 1\right]$, the top-level derivative is 1, which is incompatible with $+e_1$. In other words, with $x = 0 - \delta$, $f(x) < 0$.

Similarly, with $\left[s_1 = -1\right]$, the top-level derivative is -1, which is incompatible with $-e_1$: with $x = 0 + \delta$, $f(x) < 0$.

Having checked all of our relevant features, we can conclusively say that

$$ f(0 \pm \delta) < 0 $$

meaning $f(0)$ is inside our model.

If you run through the same exercise for $g(0)$ (remember, $g(p) = -f(p)$) you'll find that $g(0 + \delta) > 0$, which puts the point outside our model.
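As a quick numeric sanity check of both claims:

```python
delta = 1e-6
f1 = lambda x: min(x, -x)          # the 1D slice of f
print(f1(-delta), f1(delta))       # both negative: 0 is inside for f
print(-f1(-delta), -f1(delta))     # both positive: 0 is outside for g = -f
```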

This means that both $f$ and $g$ will be contoured correctly, resolving the ambiguity at exact zero crossings.
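To tie it all together, here's a sketch of what the full check might look like, reusing `find_consistent_direction` from the earlier sketch. The `top_gradient` callback stands in for re-evaluating the expression with each ambiguous node's branch forced by the chosen signs (plumbing I'm not showing here), and `point_is_inside` is again a name of my own invention:

```python
from itertools import product
import numpy as np

def point_is_inside(es, top_gradient):
    """Inside/outside decision at a point where the field is exactly zero.

    `es` holds the e_i directions collected from ambiguous min/max nodes, and
    `top_gradient(signs)` returns the top-level gradient once each ambiguous node's
    branch is forced according to `signs`.
    """
    for signs in product((1, -1), repeat=len(es)):
        dirs = [s * e for s, e in zip(signs, es)]
        if dirs and find_consistent_direction(dirs) is None:
            continue                                    # not a legitimate feature
        g = np.asarray(top_gradient(signs), dtype=float)
        if find_consistent_direction(dirs + [g]) is not None:
            return False                                # the field can increase: not inside
    return True

# The worked example: f(x) = min(x, -x) at x = 0, so e_1 = -2; the top-level gradient
# is +1 when the min takes its first branch (x) and -1 when it takes the second (-x).
es = [np.array([-2.0])]
grad_f = lambda signs: [1.0] if signs[0] > 0 else [-1.0]
grad_g = lambda signs: [-1.0] if signs[0] > 0 else [1.0]   # g = -f just flips the gradient
print(point_is_inside(es, grad_f))   # True:  f(0) is inside
print(point_is_inside(es, grad_g))   # False: g(0) is outside
```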


Images were generated by this IPython notebook.