
- By including differentiable, parametric models in engineering processes, engineering software can better interoperate between human and artificial designers.
- Existing CAD, CAM, and CAE tools can speak this language by adding differential interoperability to their APIs.
- We provide a visual introduction to differential engineering using a cantilevered beam.
- By examining the derivative of a rotation, we briefly unlock some deep math beauty and an application of Unit Gradient Fields (UGFs).
- Differentiable engineering scales to product-level systems engineering.

✏️ *Math advisory: this post assumes you’re okay with derivatives, the chain rule from basic calculus, and a little vector math. We will introduce intuitive visual tools to illustrate such concepts in design engineering. While I feel compelled to show the work, you can probably skim and glean the concepts from the illustrations.*

👥 *Lots of credit: These ideas came from discussions with many people, including:*

- Sandilya (Sandy) Kambampati, Intact Solutions
- Luke Church, Gradient Control Laboratories
- Trevor Laughlin, nTop
- Jon Hiller, PTC
- Peter Harman, Infinitive

Today, we practice three paradigms of computer-aided design (“CAD”), manufacturing (“CAM”), and engineering (“CAE”):

- One-off design, where the focus is producing individual parts or products;
- Parametric generative design, where the result is a recipe to produce variants of similar parts or products; and
- Computational generative design, where the final geometry is guided by simulation, often iteratively and with spatially-varying parameters.

As each of these generations has built on earlier technology, the emerging generation of engineering software powered by artificial intelligence and machine learning algorithms (“AI/ML”) is being trained on existing empirical, simulated, and textbook knowledge. However, while this new generation of tools promises ease-of-use, more accurate results, and orders of magnitude faster performance, it does not yet offer a meaningful shift in interaction paradigm. As these new tools become increasingly sophisticated, will new interaction paradigms emerge? Will we realize the sci-fi vision of product-level generative co-designers?

Let’s examine how AI and ML can blend with today’s optimization tech to expand engineers’ navigable design space. As generative design scales to the subsystem and product level, we’ll demonstrate how to delegate tasks to AI and ML without the meaning becoming hidden in nonintuitive latent spaces, as with LLMs and generative art. We’ll focus on the role of a designer, human or automated, expressed in the language of optimization and machine learning: a differentiable approach to design engineering.

Let’s propose a model for a design engineer, human or automated, which we’ll call “Mechanical Design Automation (MDA)”:

$$ \newcommand{\R}{\mathbb{R}} $$
$$ \newcommand{\point}[1]{\mathbf{#1}} $$
$$ \newcommand{\p}{\point{p}} $$
$$ \newcommand{\shape}[1]{\mathcal{#1}} $$
$$ \newcommand{\grad}{\boldsymbol{\nabla\!}} $$
$$ \newcommand{\BoundaryMap}[2][]{\vec{\mathbf{Q}}_{#2}^{#1}} $$
$$ \newcommand{\NormalCone}[1]{\mathcal{N}_{#1}} $$
$$ \newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}} $$
$$ \newcommand{\norm}[1]{\left\|#1\right\|} $$
$$ \newcommand{\abs}[1]{\left|#1\right|} $$
$$ \newcommand{\sgn}{\mathop{\rm sgn}} $$
$$ \newcommand{\func}[1]{\mathop{\rm #1}\nolimits} $$
$$ \newcommand{\DF}{\mathfrak{D\hspace{-0.2em}\scriptstyle{F}\,}} $$
$$ \newcommand{\Shape}{\boldsymbol{\Omega}} $$
$$ \newcommand{\twobody}[2]{\Xi_{#1}^{#2}} $$
$$ \newcommand{\inner}[2]{#1 \cdot #2} $$
$$ \newcommand{\wavysmile}{\raise{-0.2ex}{\smallsmile}} $$
$$ \newcommand{\wavyfrown}{\raise{ 0.2ex}{\smallfrown}} $$
$$ \newcommand{\wavy}{\wavysmile\!\wavyfrown\!\wavysmile\!\wavyfrown\!\wavysmile} $$
$$ \newcommand{\sampson}[1]{\hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(90deg) scale(0.4, 0.3) }{\wavy}
\hspace{-1.4em} #1 \hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(-90deg) scale(0.4, 0.3)}{\wavy}
\hspace{-1.4em}} $$
$$ \newcommand{\fieldset}[2]{#1 \hspace{-0.65em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planesymbol}{~ \style{display: inline-block; transform: scale(2.24, 0.66)}{\diamond}} $$
$$ \newcommand{\df}[1]{\fieldset{#1}{\leftrightarrow}} $$
$$ \newcommand{\ugf}[1]{\fieldset{#1}{-}} $$
$$ \newcommand{\augf}[1]{\fieldset{#1}{\sim}} $$
$$ \newcommand{\plane}[1]{\fieldset{#1}{\planesymbol}} $$
$$ \newcommand{\fieldsetSm}[2]{#1 \hspace{-0.48em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planeSm}[1]{\fieldsetSm{#1}{\planesymbol}} $$
$$ \newcommand{\cupset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\capset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\minmaxCup}{\;\vee\;} $$
$$ \newcommand{\minmaxCap}{\;\wedge\;} $$
$$ \newcommand{\arbitraryCup}{\cupset{\vee} {\scriptstyle{\raise{ 0.4ex}{*}}}} $$
$$ \newcommand{\arbitraryCap}{\capset{\wedge}{\scriptstyle{\raise{-0.4ex}{*}}}} $$
$$ \newcommand{\distanceCup}{\cupset{\cup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\distanceCap}{\capset{\cap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\euclideanSymbol}{\hspace{-0.02em}\raise{0.1ex}{\scriptscriptstyle{+}}} $$
$$ \newcommand{\euclideanCup}{\cupset{\cup}{\euclideanSymbol}} $$
$$ \newcommand{\euclideanCap}{\capset{\cap}{\euclideanSymbol}} $$
$$ \newcommand{\chamferMinmaxCup}{\;\sqcup\;} $$
$$ \newcommand{\chamferMinmaxCap}{\;\sqcap\;} $$
$$ \newcommand{\chamferDistanceCup}{\cupset{\sqcup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferDistanceCap}{\capset{\sqcap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferEuclideanCup}{\cupset{\sqcup}{\euclideanSymbol}} $$
$$ \newcommand{\chamferEuclideanCap}{\capset{\sqcap}{\euclideanSymbol}} $$

\(\newcommand{\Shape}{\Omega}\) \(\newcommand{\parameters}{\boldsymbol{\Theta}}\) \(\newcommand{\fitness}{\boldsymbol{F}}\) \(\newcommand{\CAD}{\text{CAD}}\) \(\newcommand{\CAE}{\text{CAE}}\) \(\newcommand{\MDA}{\text{MDA}}\)

There are three main roles in an engineering process:

- Model creation tools, like CAD and CAM systems, generate output like parts, assemblies, and manufacturing deliverables. We can model CAD as a function from parameters \(\parameters\) to designs or “shapes” \(\Shape\), \(\CAD\!: \parameters \mapsto \Shape\). We’ll focus on simple dimensional and angular parameters as in parametric CAD, but other parameter types include the feature recipe, explicit positions of mesh vertices, density values on a voxel model, material selection, etc.
- Engineering (CAE) tools measure various fitnesses \(\fitness\) of CAD designs, such as mass properties, testing mechanical and other physical properties, cost, environmental impact, and aesthetics. We can model engineering software as \(\CAE\!: \Shape \mapsto \fitness\).
- Finally, there is a designer interested in producing optimal designs by tuning the parameters to optimize fitness. \(\MDA\!: \parameters \mapsto \Shape \mapsto \fitness = \parameters \mapsto \fitness\). The
*design Jacobian*, described below, informs the designer how to update parameters to improve fitness.

✏️ *Math tip: the notation \(f: x \mapsto y\) defines the function \(y = f(x)\) and can be read as “maps to.”*

This system suggests that a designer cares not about the shape of their design, only its fitness. Paradoxically, most design engineers focus their work on drawing and documenting shapes! If industrial design and assembly context are taken as constraints, perhaps nonintuitive, topology optimized results are closer to home than we thought.

On the other hand, this model shows that a designer is most interested in navigating an available set of design parameters to explore a large design space of shapes to achieve a fitness goal. Somehow, the designer needs to use a set of fitnesses to update the set of parameters. And we only have one shape \(\Shape\) for every vector of input parameters \(\parameters\) and vector of fitnesses \(\fitness\).

What is the map from fitnesses to parameters? It’s the inverse of MDA, the origin story for the term “inverse engineering.” In many cases, we might optimize a design by minimizing a scalar “loss function.” Various optimization techniques such as topology optimization and multi-disciplinary optimization (MDO) are examples of this setup.

How should a design engineer optimize a set of parameters to achieve the fitnesses that maximally satisfy all stakeholders, while also being theoretically unconcerned about the output shape? Given any shape and fitnesses, the designer must tweak the design until it’s optimal. In such a world, it might be helpful to know in what direction and how far to adjust a parameter for the intended result. The tools of calculus provide such an estimate.

*Behavioral modeling in Creo aka “Pro/E”, courtesy of PTC. The plot shows a Pareto front of the trade-offs between two different fitness components while varying spatially constant parameters. Differentiable engineering can efficiently find and smooth such curves in a similar way to how CAD systems trace precise silhouette curves of curved surfaces in hidden-line views. (Silhouette curves are Pareto fronts, trading off the surface’s UV coordinates for coordinates on the projection target!)*

When I was just out of college as a PTC application engineer selling Pro/E, I used to demo parametrically optimizing a part through FEA. The first thing the solver would do is make a small change to the parameters to compute a slope, then iterate toward the optimal value until it was close enough. The terms “gradient descent” and “Newton’s method” describe this process. In those days, it would take some time to regenerate the CAD part for each parameter set, so you had to do some storytelling while it was computing that slope. In today’s state of the art, like nTop field optimization, we iterate over a similar loop, but with spatially varying fields, providing generalized shape and topology optimization for open-ended problems.

*Field optimization in nTop, animated by Brad Rothenberg. One spatially varying parameter, wall thickness, minimizes the deflection of a cantilever (fixed on the left, loaded on the right).*

It sure would have been nice if Pro/E knew that slope right away, freeing the optimizer from regenerating and simulating another variant just to estimate a derivative. As popularized by recent trends in machine learning, three forms of *automatic differentiation* enable us to more efficiently compute such *parametric sensitivities*:

- **Symbolic differentiation**: If we have a mathematical expression, we can differentiate it one variable at a time to produce a new function for each derivative of interest.
- **Forward mode automatic differentiation**: Equivalent to the fascinating dual numbers, we maintain and update each parameter’s derivative alongside its value with every operation. It can be straightforward to convert conventional code to forward mode using types.
- **Reverse mode automatic differentiation**: While computing, we build a structure that can be used to compute derivatives later, which is efficient when you have many inputs and only a few outputs. When differentiating through FEA and CFD simulations, a technique called the **adjoint method** makes reverse mode computationally efficient.
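To make forward mode concrete, here is a minimal dual-number sketch in Python (the `Dual` class and `volume` function are our own illustrations, not any tool’s API):

```python
class Dual:
    """Forward-mode AD: carry a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def volume(w, h, d):
    return w * h * d

# Seed dw/dw = 1 to get dV/dw in a single evaluation.
w, h, d = Dual(2.0, 1.0), Dual(3.0), Dual(4.0)
V = volume(w, h, d)
print(V.val, V.dot)  # 24.0 and dV/dw = h*d = 12.0
```

Every operation updates the derivative alongside the value, so one pass through unmodified engineering code yields one column of sensitivities.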

👥 For more nuance about automatic differentiation in optimization algorithms, see Nick McCleery’s thorough post on differentiable programming in engineering.

We now arrive at the heart of differentiable engineering, the chain rule through a shape:

\[\Large{\pdv{\fitness}{\parameters} = \pdv{\fitness}{\Shape} \pdv{\Shape}{\parameters}}\]

✏️ *Math tip: read the partial derivative notation “\(\partial x\)” the same as “\(dx\)”, assuming all partials are independent, but be aware that there are more of them. If \(\p = (x, y)\), we express a vector of those partials as the gradient \(\grad f(\p) = \pdv{f}{\p} = \left(\pdv{f}{x}, \pdv{f}{y}\right)\), notation we reserve for spatial derivatives.*

Speaking in the language of differentials (and linear algebra):

- CAD systems are concerned with shapes’ *parametric sensitivities* \(\pdv{\Shape}{\parameters}\) (a row vector);
- CAE systems determine shapes’ *functional sensitivities* \(\pdv{\fitness}{\Shape}\) (a column vector); and
- MDA becomes the (outer) product of those two vectors, a matrix of derivatives that captures how each parameter contributes to each fitness. Such a matrix of partial derivatives is called a “Jacobian”, so it seems appropriate to call \(\pdv{\fitness}{\Shape} \pdv{\Shape}{\parameters}\) the *design Jacobian*.
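Numerically, the design Jacobian is just this product. A sketch with made-up sensitivities for two fitnesses and two parameters (all values here are illustrative):

```python
import numpy as np

# Hypothetical sensitivities (illustrative values only):
# dOmega/dTheta: how the shape responds to each parameter (row vector), and
# dF/dOmega:     how each fitness responds to the shape (column vector).
dOmega_dTheta = np.array([[0.5, -1.0]])    # 1 x n_params
dF_dOmega     = np.array([[2.0], [0.25]])  # n_fitness x 1

# Design Jacobian: the (outer) product, one row per fitness,
# one column per parameter.
dF_dTheta = dF_dOmega @ dOmega_dTheta
print(dF_dTheta)  # 2x2 matrix of dF_i/dTheta_j
```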

All design engineers, human or automated, serve to optimize the fitness of stakeholder deliverables via design Jacobians.

What \(\pdv{\fitness}{\parameters} = \pdv{\fitness}{\Shape} \pdv{\Shape}{\parameters}\) shows is that CAD tools can pass along differentials to CAE tools to compute design Jacobians. It shows that CAD and CAE vendors can work together to provide differentiable answers to engineers and optimization systems. Our industry has done it before: the Functional Mock-up Interface standardizes analogous interoperability over time derivatives to model one-dimensional, dynamic systems.

While the parameters may or may not vary spatially, we tend to evaluate the fitness of the entire shape, often by integrating over space. For example, the volume and surface area fitnesses are integrals over a shape’s domain and its boundary, respectively. Maximum, minimum, or average values like center of gravity may also roll-up fitness to a constant value. Note that the parametric derivatives become tallied up in such spatial integrations or consolidations. *Field optimization* implies that some spatially varying parameters do not get consolidated.

Let’s work through a simple engineering example: a cantilevered beam. We will represent it as an exact SDF for comparison with Mercury and IQ’s haikus, which are optimized for the GPU but complicate taking the derivative.

For our shape \(\Shape\), we’ll use a rectangle \(\shape{R}(\p; \point{s}_½)\) centered on the origin with size \(\point{s} = (w, h)\). Due to symmetry, we’ll parameterize via the half-size vector \(\point{s}_½ = \left(\frac{w}{2}, \frac{h}{2}\right)\), our parameter set (\(\parameters\) above). Given position \(\p\), let’s define:

\[\p_c \equiv \abs{\p} - \point{s}_½ \,,\]

where \(\abs{\cdot}\) is the absolute value of the components, which provides us with positive local coordinates centered on a corner of the rectangle. It’s as if we folded the rectangle in half twice and can now just work on the one corner. Then, given components of \(\p_c = (\p_{cx}, \p_{cy})\) and Euclidean norm (aka vector magnitude) \(\norm{\cdot}\), case-wise, we handle the regions closest to the vertex and then each side:

\[\shape{R} = \begin{cases} \norm{\p_c} \, , & \p_{cx} > 0 \text{ and } \p_{cy} > 0 \\ \begin{cases} \p_{cx} \, , & \p_{cx} \ge \p_{cy} \\ \p_{cy} \, , & \p_{cx} < \p_{cy} \end{cases} & \text{otherwise.} \end{cases}\]

Let’s get a feel for the rectangle field and its partial derivatives (click to sample the field):
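As a sketch, the case-wise field translates directly to code (the function name and sample values are ours):

```python
import math

def sdf_rect(p, s_half):
    """Exact distance field of an origin-centered rectangle of half-size s_half."""
    # Fold into the positive quadrant: local coordinates on one corner.
    px, py = abs(p[0]) - s_half[0], abs(p[1]) - s_half[1]
    if px > 0 and py > 0:
        return math.hypot(px, py)  # nearest feature is the corner vertex
    return max(px, py)             # max() implements the two side-wise cases

print(sdf_rect((2.0, 0.5), (1.0, 0.5)))  # 1.0, one unit right of the right edge
print(sdf_rect((0.0, 0.0), (1.0, 0.5)))  # -0.5, inside the rectangle
```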

What does it mean that the derivatives have value off of the boundary of the shape? Isn’t there only one shape? Yes, but under any offset of a shape by a constant, \(\Shape - \lambda\), the derivative of the offset \(\lambda\) vanishes. This magic X-ray vision (aka “first order approximation”) of derivative fields results in such constant values along streamlines of the gradient, creating a field *radiated* into and out of the shape from the boundary.

The case-wise construction simplifies taking derivatives, as the same cases apply. Here is our spatial gradient, which always points to the boundary:

\[\grad \shape{R} = \left( \frac{\partial\shape{R}}{\partial x} , \frac{\partial\shape{R}}{\partial y} \right) = \begin{cases} \left( \sgn(\p_x) \frac{\p_{cx}}{\shape{R}} , \sgn(\p_y) \frac{\p_{cy}}{\shape{R}} \right) \, , & \p_{cx} > 0 \text{ and } \p_{cy} > 0 \\ \begin{cases} (\sgn(\p_x), 0) \, , & \p_{cx} \ge \p_{cy} \\ (0, \sgn(\p_y)) \, , & \p_{cx} < \p_{cy} \end{cases} & \text{otherwise.} \end{cases}\]

and our derivatives with respect to the half-size vector components:

\[\pdv{\shape{R}}{\point{s}_½} = \begin{cases} \left( \frac{-\p_{cx}}{\shape{R}} , \frac{-\p_{cy}}{\shape{R}} \right) \, , & \p_{cx} > 0 \text{ and } \p_{cy} > 0 \\ \begin{cases} (-1, 0) \, , & \p_{cx} \ge \p_{cy} \\ (0, -1) \, , & \p_{cx} < \p_{cy} \end{cases} & \text{otherwise.} \end{cases}\]

Why the negative values? As the rectangle becomes bigger, the field values at locations on either side of it become smaller.
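These parameter derivatives translate to code the same way. Here is a sketch (our own function, mirroring the cases above) with a finite-difference check on the first component:

```python
import math

def sdf_rect_and_ds(p, s_half):
    """Return the rectangle field R and dR/ds_half, following the cases above."""
    px, py = abs(p[0]) - s_half[0], abs(p[1]) - s_half[1]
    if px > 0 and py > 0:
        r = math.hypot(px, py)
        return r, (-px / r, -py / r)   # corner region
    if px >= py:
        return px, (-1.0, 0.0)         # nearest a vertical side
    return py, (0.0, -1.0)             # nearest a horizontal side

# Finite-difference check at a point to the right of the rectangle:
p, s, eps = (2.0, 0.0), (1.0, 0.5), 1e-6
r0, grad = sdf_rect_and_ds(p, s)
r1, _ = sdf_rect_and_ds(p, (s[0] + eps, s[1]))
print(grad[0], (r1 - r0) / eps)  # both approximately -1.0
```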

What if we want the derivatives with respect to \(w\) and \(h\)? \(\pdv{(\point{s}_½)_x}{w}\) and \(\pdv{(\point{s}_½)_y}{h}\) are both \(\frac{1}{2}\), so \(\pdv{\shape{R}}{\point{s}} = \frac{1}{2} \pdv{\shape{R}}{\point{s}_½}\) .

Let’s introduce some fitness properties (\(\fitness\)) for our rectangle \(\shape{R}\), which, despite its symmetry, can serve as a cantilevered beam once we add depth \(d\). We’ll fix one side and put a downward load \(f\) on the other. We can then write down our textbook deflection formula for volume \(V\), sectional inertia \(I\), and deflection \(\delta\) given constant elastic modulus \(E\):

\[\begin{align} V &= w h d \;, \\[1ex] I &= \frac{d h^3}{12} \;, \\[1ex] \delta &= \frac{f w^3}{3 E I} = \frac{4 f w^3}{E d h^3} \;. \end{align}\]

These are clearly functions of the basic dimensions of \(\shape{R}\). We don’t need to pull out the chain rule through a shape, as that work is already baked into these formulae from the integrals in their construction. Observe that the volume calculation for our cuboid beam is equivalent to an approximate Riemann integral with one big element. We can work with any kind of geometry across which we can integrate, the trick used by meshless approaches to simulate physics on geometry unsuitable for finite element meshing. Intact Solutions focuses on this kind of approach to simulation, and toolkits like FEniCS provide differentiable physics when meshing is convenient.
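In code, these fitnesses are a few lines (a sketch; the dimensions, load, and modulus below are made up for illustration):

```python
def beam_fitness(w, h, d, f, E):
    """Cantilever fitnesses from the textbook formulas above."""
    V = w * h * d                     # volume
    I = d * h**3 / 12.0               # rectangular sectional inertia
    delta = f * w**3 / (3.0 * E * I)  # tip deflection, = 4 f w^3 / (E d h^3)
    return V, delta

# Illustrative values: a 1 m x 0.1 m x 0.05 m steel-ish beam, 100 N tip load.
V, delta = beam_fitness(1.0, 0.1, 0.05, 100.0, 200e9)
print(V, delta)  # 0.005 m^3 and about 4.0e-5 m
```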

Here, we can just analytically differentiate with respect to \(\point{s}\):

\[\begin{align} \pdv{V}{\point{s}} &= (h d, w d) \;, \\[1ex] \pdv{\delta}{\point{s}} &= \frac{12 f}{E d} \left(\frac{w^2}{h^3}, -\frac{w^3}{h^4}\right) \;. \end{align}\]

These values, the rows of our design Jacobian \(\pdv{\fitness}{\parameters}\), are constant with respect to space.
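We can check these rows against finite differences, a sketch with made-up beam values:

```python
import numpy as np

# Hypothetical beam values (illustrative): load f, modulus E, depth d.
f, E, d = 100.0, 200e9, 0.05

def delta(w, h):
    # tip deflection from the formula above: 4 f w^3 / (E d h^3)
    return 4.0 * f * w**3 / (E * d * h**3)

def grad_delta(w, h):
    # analytic row of the design Jacobian: (12 f / E d) (w^2/h^3, -w^3/h^4)
    c = 12.0 * f / (E * d)
    return np.array([c * w**2 / h**3, -c * w**3 / h**4])

w, h, eps = 1.0, 0.1, 1e-7
fd = np.array([(delta(w + eps, h) - delta(w, h)) / eps,
               (delta(w, h + eps) - delta(w, h)) / eps])
print(grad_delta(w, h), fd)  # the two rows agree to several digits
```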

Modern optimization tools add more parameters to the model, for example this parameterized cantilever from Sandy at Intact, optimized using derivatives with respect to parameters at each arrow:

Other kinds of shape optimization might use the positions of mesh vertices or subdivision surface control points. Topology optimization extends this concept using voxel-based parameters for density or implicit boundary values.

Let’s take a look at the derivative of rotating a shape, noting that a translation leaves the derivative unchanged. As our rectangle rotates about its center, we can measure how much each increment adds material to or removes material from each face. Another way to think about this rotational derivative is how much each surface element would be facing the wind or in its lee while the shape turns. In regions where material is being added, the derivative is negative, the same sign convention as when resizing the rectangle.

Let’s start by illustrating the rotational derivative on explicit geometry. Using differentiable boundary models, it’s possible to compute such derivatives directly on the surface. For example, here is the rotational derivative through the center of the box visualized on the faces using Engineering Sketch Pad.

As a casual mathematician, I rarely get a window into the inner workings of math. The *rotational derivative* of our rectangle provides such an opportunity. Consider rotating our rectangle \(\shape{R}(\p; \point{s}_½)\) through its center by angle \(\alpha\) by remapping via the transformation \(T(\alpha)\!: \p \mapsto \p'\), where:
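\[T(\alpha) = \begin{bmatrix} \phantom{-}\cos(\alpha) & \sin(\alpha)\\ -\sin(\alpha) & \cos(\alpha) \end{bmatrix} , \qquad \p' = T(\alpha)\,\p \,.\]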

Differentiating with respect to \(\alpha\):

\[\begin{align} \pdv{T}{\alpha} &= \begin{bmatrix} -\sin(\alpha) & \phantom{-}\cos(\alpha)\\ -\cos(\alpha) & -\sin(\alpha) \end{bmatrix} \\[1ex] &= \begin{bmatrix} \phantom{-}\cos\left(\alpha + \frac{\pi}{2}\right) & \sin\left(\alpha + \frac{\pi}{2}\right)\\ -\sin\left(\alpha + \frac{\pi}{2}\right) & \cos\left(\alpha + \frac{\pi}{2}\right) \end{bmatrix} \\[1ex] &= T\left(\alpha + \small{\frac{\pi}{2}}\right) \\[1ex] &= T(\alpha) \; T\!\left(\small{\frac{\pi}{2}}\right) \,. \end{align}\]

Therefore, taking the derivative of a rotation is the same thing as adding a quarter turn to the rotation, evidence of some deeper math. In complex analysis, we learn this operation as multiplying by \(i\). In differential forms, we have the complex structure \(J\) that lets us perform rotations around surface normals. Geometric Algebra generalizes the notion to *rotors*. All of these concepts square to \(-1\).
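We can verify the quarter-turn identity numerically, a quick sketch in which `T` follows the matrix convention above:

```python
import numpy as np

def T(a):
    """The rotation map p -> p' whose derivative we just computed."""
    return np.array([[ np.cos(a), np.sin(a)],
                     [-np.sin(a), np.cos(a)]])

a, eps = 0.7, 1e-7
dT_fd = (T(a + eps) - T(a)) / eps     # numerical derivative of the rotation
dT_quarter = T(a) @ T(np.pi / 2)      # the "add a quarter turn" identity
print(np.max(np.abs(dT_fd - dT_quarter)))  # near zero
```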

Before we perform its derivation, let’s take a look at \(\pdv{\shape{R}}{\alpha}\) with fields rotated through the center of the image. The animation shows the complex structure of the derivative of a rotation by superimposing it with the original field. We illustrate the rotation by creating a family of fields (a “pencil”) by performing *rotational interpolation* on \(\shape{R}\) and \(\pdv{\shape{R}}{\alpha}\):
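One natural form for such a pencil, assuming the usual trigonometric blend, is

\[\shape{R}_\phi = \cos(\phi)\,\shape{R} + \sin(\phi)\,\pdv{\shape{R}}{\alpha} \,,\]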

while animating \(\phi\):

How to interpret the animation? It’s similar to the rotating box, but we are holding the shape fixed and rotating space around it, which seems to me to be the nature of the complex structure induced by a rotation. Observe that the rotating space passes through the boundary almost as if we were observing the wind on the rotating box, but from the reference frame of our rectangle.

Let’s take the derivative of \(\shape{R}(\p')\) with respect to \(\alpha\), \(\pdv{\shape{R}}{\alpha}\). We start with the rotated scalar field \(\shape{R}(\p')\), where \(\p' = T(\alpha)\,\p\) is the rotated position vector:

\[\pdv{\shape{R}}{\alpha} = \pdv{\shape{R}}{\p'} \cdot \pdv{\p'}{\alpha}\]

Expanding the second term, we have:

\[\pdv{\p'}{\alpha} = \pdv{}{\alpha}(T(\alpha)\,\p) = \pdv{T}{\alpha}\,\p = T(\alpha) \; T\!\left(\small{\frac{\pi}{2}}\right) \p\]

Substituting into the earlier expression and defining our quarter-turned \(\p\) as \(\p_i \equiv T\!\left(\frac{\pi}{2}\right) \p\):

\[\pdv{\shape{R}}{\alpha} = \pdv{\shape{R}}{\p'} \cdot T(\alpha) \p_i = \nabla_{\p'}\shape{R} \cdot T(\alpha) \p_i\]

✏️ *Math tip: \(\nabla_{\p'}\shape{R}\) denotes the gradient \(\pdv{\shape{R}}{\p'}\) taken in the spatially transformed coordinates \(\p'\).*

In 2D, \(T\!\left(\frac{\pi}{2}\right)\!: (x, y) \mapsto (y, -x)\), the quarter turn that, up to the sense of rotation, is the operation of multiplying \(x + iy\) by \(i\).

If we are interested in \(\pdv{\shape{R}}{\alpha}\) for an unrotated object, we can set \(\alpha = 0\), so \(T(\alpha)\) becomes the identity:

\[\pdv{\shape{R}}{\alpha} = \nabla_{\p'}\shape{R} \cdot \p_i\]

As our rotation \(T\) is a rigid motion, if \(\nabla_{\p}\shape{R}\) has unit magnitude everywhere it is defined, so does the transformed \(\nabla_{\p'}\shape{R}\). Therefore, the property that avoids stretching in the rotational derivative is having unit gradient magnitude, the defining property of UGFs (of which SDFs are a subset). (If you dare try an implicit shape specified with a non-Euclidean metric instead of a UGF, open the Shadertoy and uncomment line 91.)
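Putting the pieces together for our rectangle at \(\alpha = 0\), a sketch (gradients via finite differences rather than the case-wise formulas, so the sign conventions stay honest):

```python
import math

def sdf_rect(p, s_half):
    px, py = abs(p[0]) - s_half[0], abs(p[1]) - s_half[1]
    return math.hypot(px, py) if (px > 0 and py > 0) else max(px, py)

def rot_deriv(p, s_half):
    """dR/dalpha at alpha = 0: the gradient dotted with the quarter-turned p."""
    e = 1e-6
    gx = (sdf_rect((p[0] + e, p[1]), s_half)
          - sdf_rect((p[0] - e, p[1]), s_half)) / (2 * e)
    gy = (sdf_rect((p[0], p[1] + e), s_half)
          - sdf_rect((p[0], p[1] - e), s_half)) / (2 * e)
    pix, piy = p[1], -p[0]   # p_i = T(pi/2) p for the convention above
    return gx * pix + gy * piy

def rotated_field(p, s_half, a):
    # evaluate R(p') with p' = T(a) p
    c, s = math.cos(a), math.sin(a)
    return sdf_rect((c * p[0] + s * p[1], -s * p[0] + c * p[1]), s_half)

# Compare against a finite difference of the actually rotated field:
p, sh, eps = (2.0, 0.25), (1.0, 0.5), 1e-6
fd = (rotated_field(p, sh, eps) - rotated_field(p, sh, 0.0)) / eps
print(rot_deriv(p, sh), fd)  # the two estimates agree
```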

Once we have CAD and CAE systems wired to compute design Jacobians via \(\pdv{\fitness}{\parameters} = \pdv{\fitness}{\Shape} \pdv{\Shape}{\parameters}\), we can connect them into PLM frameworks and consider systems models of process- and product-scale generative design. For example, consider the problem of finding the optimal orientation to place a part in advanced manufacturing, where perhaps we want to minimize material consumption while also minimizing deflection:

Given our placed CAD part \(\Shape(w, h, \alpha, \ldots)\), our supports \(\Psi(\Shape, \ldots)\), and fitnesses such as volume of \(\Psi\) and max deflection \(\delta\), then:

\[{\pdv{\fitness}{\alpha} = \pdv{\fitness}{\Psi} \pdv{\Psi}{\Shape} \pdv{\Shape}{\alpha}}\]

When we derive one CAD model from another, as is common in tooling like molding and casting, we can pass the differentials along via the chain rule. This process of modifying part geometry to improve manufacturing processes is called “design for manufacturing.” How far can we go in modeling and tracing such sensitivities, unlocking causality typically obscured by disjointed PLM processes?

What about derived parts through assembly and product structures? How would such sensitivities propagate? Multiple parts could contribute to parent CAD assemblies. Consider, for example, parts \(\Shape_1(\parameters_1)\) and \(\Shape_2(\parameters_2)\) assembled into Assembly \(\Shape_A\) via assembly placement parameters \(\parameters_A\) including separation distance \(d\):

Then:

\[\Shape_A = \Shape_A(\Shape_1, \Shape_2, \parameters_A) \;,\]

and given analogously named fitnesses:

\[\fitness_A = \fitness_A(\Shape_A) = \fitness_A(\Shape_1, \Shape_2, \parameters_A) \;.\]

This structure mirrors the V model of systems engineering, where a design process commences with high-level fitness requirements and becomes subdivided into subsystems, subassemblies, and finally individual components for detailed design at the bottom of the V. On the way back up, integration and validation processes assure that the component-level fitnesses assemble into product-level fitnesses. These differentiable engineering pipelines appear to fit naturally into such PLM and product development methodologies.
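As a toy sketch of such propagation (all names and numbers are illustrative), consider two rectangular parts placed with a separation distance, and note which assembly fitnesses each kind of parameter can reach:

```python
# A toy assembly: parts of widths w1 and w2 placed with separation d_sep.
# Assembly-level fitnesses compose from part parameters plus placement
# parameters, so their sensitivities chain the same way.
def assembly_fitness(w1, w2, d_sep, h, depth):
    length = w1 + d_sep + w2        # overall envelope, set by placement too
    volume = (w1 + w2) * h * depth  # material, set by the parts alone
    return length, volume

# d(length)/d(d_sep) = 1 but d(volume)/d(d_sep) = 0: placement parameters
# and part parameters fill different columns of the assembly design Jacobian.
eps = 1e-6
L0, V0 = assembly_fitness(1.0, 2.0, 0.1, 0.5, 0.25)
L1, V1 = assembly_fitness(1.0, 2.0, 0.1 + eps, 0.5, 0.25)
print((L1 - L0) / eps, (V1 - V0) / eps)  # approximately 1.0 and 0.0
```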

As artificial intelligence becomes increasingly present in our engineering tools and processes, differentiable engineering’s inherent compatibility with the V model may expedite integrating human engineers with artificial design aides to maximize product-scale fitness. By modeling not only product content but how it evolves through its differentials, AI and ML may transform generative CAD and CAE into a new paradigm of mechanical design automation. While we are still imagining what form these new tools might take, it seems clear that they will connect via differentiable engineering.

With the launch of interactivity in nTop 3.0 three years ago, George Allen and I shared some research around “CodeReps”, which showed how we could export nTop data as pure code. Sandy from Intact not only performed simulations on these CodeReps, he also observed that we could take parametric derivatives of them for the purpose of optimization, should CodeReps become pervasive. Trevor from nTop also saw the potential of geometry as a parametric black box for optimization routines. Matt Keeter showed how to use such derivatives, when explicitly declared, for parametric editing in libfive studio, and Luke Church prototyped a 2D implicit modeler whose UX responds on the fly to local parametric sensitivities.

Around that time I started to notice the use of differentiable simulation pipelines, both in open source packages like FEniCS and research using the adjoint method. As Sandy started to show differential results via adjoints, my friends at Atomic Industries started building a fully differentiable modeling-through-multiphysics pipeline to train their AI to optimize mold design. About a year ago, Jon Hiller and I started regular conversations about the future of implicit modeling and generative design, and we became engaged in the challenge of federating separate CAD, CAM, and CAE tools through differentiable interfaces throughout PLM. Would it be possible to design such APIs to support the different differentiation techniques, such as forward, reverse, and symbolic approaches in a manner that could scale to product definitions? The 3D printed support challenge became our main working example.

In researching existing differentiable engineering tools, Peter Harman explained the strengths and weaknesses of FMI’s approach to parametric, differentiable interop and shared his experiences with symbolic differentiation in Modelica. Eventually, I test drove Engineering Sketch Pad and met Afshawn from Open Orion, who is making explicit differential tech usable for design engineers.

2024 appears to be a great year for differential engineering. In addition to the emerging tech above, nTop’s new kernel is built for derivatives, providing industrial-strength support for automation and interop. Gradient Control Laboratories’ meta-kernel generates forward-mode AD along with other useful manipulations like symbolic derivatives and UGF transformations. I expect both technologies to be used as black boxes to realize the first generation of differential interoperability.

Are you interested in using differentiable engineering across engineering tool chains? Please get in touch!

As I was finishing this post, it crossed the wire that Ken Versprille, the father of NURBS and an industry friend, has passed. I would have enjoyed hearing his thoughts on this material.

There was a time when I could keep track of all the engineering software companies. We had a few big CAD and CAE vendors, a handful of smaller companies defying VC pressure, and a CAM company for every manufacturing market. 3D printers were things that our resellers lugged around but didn’t really work. Life was simple. I could keep it all in my head.

About a decade ago, 3D printing flooded our industry with post-2008 capital. STEM experiments evolved into our workforce, and “maker” culture became more popular than even SolidWorks. For the first time, consumer trends and mainstream software companies began to emulate our tech, and we all had to deal with triangle meshes. We made some of the 3D printers work, but the ethos of engineering evolved into a sci-fi future where autonomously generated designs are fabricated in Westworld slurries via robots murmuring in unlit, underground facilities. We all know what happened to Westworld.

*Westworld’s trunnion of the future*

While we gazed at our topology- and field-optimized lattice parts, still sticky in our hands, and exclaimed that our “generative” and “computational” future had arrived, another industry was lurking. The same people who rendered our pretty pictures somehow started doing actual engineering on blockchain mining hardware. The next thing we knew, ML tech and AI solutions replaced our notion of “generative” with something more diffusive. And how did they do it? With the tricks of our trade: curve fitting, now rebranded as “surrogate modeling”, with MDO on steroids to interpolate massive parameter sweeps of the data.

It appears that when you take the NVIDIA Modulus training course, they now bundle in a Delaware C corp kit in partnership with Y Combinator. There are a few dozen startups using physics-powered neural nets on fluid dynamics problems, forever freeing CFD from the concept of Reynolds number until you want to validate your results with “empirical” data from OpenFOAM. Will 1000x faster solve times grow the market for CFD and perhaps CAE in general?

Should we dare ask our incumbent friends, we may have difficulty finding them. A decade or so of consolidation has made it impossible to know which web site maps to which product. Our sleepy little industry is seeing attention from the downright comatose EDA market. How to keep track of it all?

In the fall of ‘23, Alex Huckstepp reached out to me with the half dozen or so AI and ML engineering software companies he was talking to and asked me for a take. I didn’t know much about them, but had a similar list of friends, so I added it to his spreadsheet. Then I met up with Andy Fine, who was digging into the CFD side of things. Brad Rothenberg from nTop was working on his own list, and Luke Church, my partner at Gradient Control Laboratories, added some of the AEC startups he knew. After Alex and Andy posted some slides with some of the initial names, we learned that everybody likes being part of slides with lots of names. I tend not to be part of clubs that would have me as a member, but with feedback from LinkedIn, we added everybody plausibly “generative”, doubling the list to about 50. In an attempt to prevent Alex from converting to paid on LogoIntern, I decided to interpret the data, trying to recall why I vowed never to use d3.js again. (TLDR: it makes JavaScript read like Perl.)

Data science professionals might tell you that interactive data bubbles, or even the idea of using bubbles to represent data, is “chartjunk”, but we’re going for entertainment value. So enjoy a little play with our industry at the da Vinci of data’s expense while you contemplate the quality of this data. At the moment of publication, many of the 200+ entries are incomplete. I’m sure there are tons of omissions or problems, but the data is live. Feel free to post a correction.

While this chart is not data science, it is data driven. Here’s the basic idea behind the process:

Each company or technology receives a vector of *industries*, including `MCAD`, `CAE`, `CFD`, `CAM/MES`, `EDA`, `AEC`, and `IM/PM`, and *qualities*, including `AI/ML`, `Generative`, `PDM/PLM`, `VnV/SCM`, `Hardware`, and `Ecosystem/Community`. (See tooltips to expand acronyms.) Vendors of *components* of `B_rep`, `Implicit`, and `Physics` kernels are also scored. All of these tags get a:

- `1` if they are building that product or entering that space.
- `2` if they are established in that space.
- `Name` if they are recognizable or referenceable brands, which count as `2` each.

In addition, there’s a list of `components`, critical technologies like Parasolid, CUDA, or Nastran on which a product might depend, as well as `partnerships` that are valuable but easier to change, including interop toolkits.

From this data we generate the following display data dimensions:

- Each industry is assigned a hue, and each vendor’s hue comes from a weighted average of its industry fitness vector. Hue corresponds to the appropriate place in a horseshoe model where nano-scale IC fabrication meets the industrial manufacturing required to achieve it.
- The size of each circle represents the funding stage of the company plus its combined weighted score.
- All of the qualities and components are assigned two scores for `moat` and `dank`, which get dotted with the fitness vectors.
- The moat score represents how hard a company’s technology would be to reproduce, displayed as border width.
- The dankness refers to the trendiness of the tech market, displayed by a darker color. The OG name for this project was the “dank tech landscape”, but it didn’t Google well. Hardware emits negative dankness.
- And then there’s metadata for founding date, the headquarters location, and a URL.
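For the curious, the mapping from scores to display dimensions can be sketched roughly like this (the weights, hue spacing, and function names are my illustrative guesses, not the chart’s actual code):

```python
import numpy as np

industries = ["MCAD", "CAE", "CFD", "CAM/MES", "EDA", "AEC", "IM/PM"]
hues = np.linspace(0.0, 300.0, len(industries))  # one hue per industry along the horseshoe

def display_dims(industry_fit, quality_fit, moat_w, dank_w, funding_stage):
    """Map a vendor's fitness vectors to (hue, size, moat border, dank darkness)."""
    s = np.asarray(industry_fit, dtype=float)
    q = np.asarray(quality_fit, dtype=float)
    hue = float(np.dot(hues, s) / s.sum())      # weighted-average industry hue
    size = funding_stage + s.sum() + q.sum()    # funding stage plus combined score
    moat = float(np.dot(q, moat_w))             # dotted with moat weights -> border width
    dank = float(np.dot(q, dank_w))             # dotted with dank weights -> darkness
    return hue, size, moat, dank

# Example: a vendor established in MCAD (2), entering CAE (1),
# established in AI/ML (2), entering Generative (1).
hue, size, moat, dank = display_dims(
    [2, 1, 0, 0, 0, 0, 0],          # industries
    [2, 1, 0, 0, 0, 0],             # qualities
    moat_w=[1.0, 0.5, 0, 0, 0.5, 0],
    dank_w=[1.0, 1.0, 0, 0, -1.0, 0],  # hardware emits negative dankness
    funding_stage=3)
```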

I have no idea how enduring this project will be. I’ll probably add a legend for the info above and some more controls for controlling forces. Thanks to Mariana Marasoiu from Gradient Control Laboratories for suggesting Verlet integration over a t-SNE embedding and to Michael from Intact.Solutions, who requested the `Disrupt` button.

Would appreciate your feedback and would be happy to include any company that produces some sort of engineering software. For now, please enjoy, and happy April 1!

]]>Over on the LatticeRobot blog, there’s a long post about our approach to LatticeRobot’s parameterization.

In addition, it’s worth mentioning that these parameterizations are built on top of Gradient Control Laboratories’ high-level implicit scripting language, GCL Script (“GCLS”), which provides a novel, high-level API to implicits and powers LatticeRobot’s CodeRep output. Please be in touch if you’d like to learn more.

]]>*Use the slider to change viewing modes:*

$$ \newcommand{\R}{\mathbb{R}} $$
$$ \newcommand{\point}[1]{\mathbf{#1}} $$
$$ \newcommand{\p}{\point{p}} $$
$$ \newcommand{\shape}[1]{\mathcal{#1}} $$
$$ \newcommand{\grad}{\boldsymbol{\nabla\!}} $$
$$ \newcommand{\BoundaryMap}[2][]{\vec{\mathbf{Q}}_{#2}^{#1}} $$
$$ \newcommand{\NormalCone}[1]{\mathcal{N}_{#1}} $$
$$ \newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}} $$
$$ \newcommand{\norm}[1]{\left\|#1\right\|} $$
$$ \newcommand{\abs}[1]{\left|#1\right|} $$
$$ \newcommand{\sgn}{\mathop{\rm sgn}} $$
$$ \newcommand{\func}[1]{\mathop{\rm #1}\nolimits} $$
$$ \newcommand{\DF}{\mathfrak{D\hspace{-0.2em}\scriptstyle{F}\,}} $$
$$ \newcommand{\Shape}{\boldsymbol{\Omega}} $$
$$ \newcommand{\twobody}[2]{\Xi_{#1}^{#2}} $$
$$ \newcommand{\inner}[2]{#1 \cdot #2} $$
$$ \newcommand{\wavysmile}{\raise{-0.2ex}{\smallsmile}} $$
$$ \newcommand{\wavyfrown}{\raise{ 0.2ex}{\smallfrown}} $$
$$ \newcommand{\wavy}{\wavysmile\!\wavyfrown\!\wavysmile\!\wavyfrown\!\wavysmile} $$
$$ \newcommand{\sampson}[1]{\hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(90deg) scale(0.4, 0.3) }{\wavy}
\hspace{-1.4em} #1 \hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(-90deg) scale(0.4, 0.3)}{\wavy}
\hspace{-1.4em}} $$
$$ \newcommand{\fieldset}[2]{#1 \hspace{-0.65em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planesymbol}{~ \style{display: inline-block; transform: scale(2.24, 0.66)}{\diamond}} $$
$$ \newcommand{\df}[1]{\fieldset{#1}{\leftrightarrow}} $$
$$ \newcommand{\ugf}[1]{\fieldset{#1}{-}} $$
$$ \newcommand{\augf}[1]{\fieldset{#1}{\sim}} $$
$$ \newcommand{\plane}[1]{\fieldset{#1}{\planesymbol}} $$
$$ \newcommand{\fieldsetSm}[2]{#1 \hspace{-0.48em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planeSm}[1]{\fieldsetSm{#1}{\planesymbol}} $$
$$ \newcommand{\cupset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\capset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\minmaxCup}{\;\vee\;} $$
$$ \newcommand{\minmaxCap}{\;\wedge\;} $$
$$ \newcommand{\arbitraryCup}{\cupset{\vee} {\scriptstyle{\raise{ 0.4ex}{*}}}} $$
$$ \newcommand{\arbitraryCap}{\capset{\wedge}{\scriptstyle{\raise{-0.4ex}{*}}}} $$
$$ \newcommand{\distanceCup}{\cupset{\cup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\distanceCap}{\capset{\cap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\euclideanSymbol}{\hspace{-0.02em}\raise{0.1ex}{\scriptscriptstyle{+}}} $$
$$ \newcommand{\euclideanCup}{\cupset{\cup}{\euclideanSymbol}} $$
$$ \newcommand{\euclideanCap}{\capset{\cap}{\euclideanSymbol}} $$
$$ \newcommand{\chamferMinmaxCup}{\;\sqcup\;} $$
$$ \newcommand{\chamferMinmaxCap}{\;\sqcap\;} $$
$$ \newcommand{\chamferDistanceCup}{\cupset{\sqcup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferDistanceCap}{\capset{\sqcap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferEuclideanCup}{\cupset{\sqcup}{\euclideanSymbol}} $$
$$ \newcommand{\chamferEuclideanCap}{\capset{\sqcap}{\euclideanSymbol}} $$

*The clearance field, \(\ugf{A} + \ugf{B}\,\), the midsurface field, \(\ugf{A} - \ugf{B}\,\), and the two-body field: \(\twobody{\ugf{A}}{\ugf{B}} \equiv \frac{\ugf{A} - \ugf{B}}{\ugf{A} + \ugf{B}}\). The clearance and midsurface fields are overlaid to demonstrate their orthogonality.*

There are a few concepts to unpack. First, let’s just get a feel for why the sum and difference perform as clearance and midsurface fields. Let’s use one-dimensional functions on a line to represent the section between opposing shapes.

| \(\ugf{A}\,\) | -2 | -1 | 0 | +1 | +2 | +3 | +4 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\ugf{B}\,\) | +4 | +3 | +2 | +1 | 0 | -1 | -2 |
| \(\ugf{A} + \ugf{B}\,\) | +2 | +2 | +2 | +2 | +2 | +2 | +2 |
| \(\ugf{A} - \ugf{B}\,\) | -6 | -4 | -2 | 0 | +2 | +4 | +6 |
| \(\twobody{\ugf{A}}{\ugf{B}}\) | -3 | -2 | -1 | 0 | +1 | +2 | +3 |

Clearly, the sum indicates the clearance. The difference is the midsurface field, but scaled by a factor of two. If one zooms out of the sum field to the far side of either shape, the sum also doubles up far from the midsurface, so we normalize by two in the Shadertoy above.

The two-body field, \(\twobody{\ugf{A}}{\ugf{B}} \equiv \frac{\ugf{A} - \ugf{B}}{\ugf{A} + \ugf{B}}\), clearly ranges in \([-1, 1]\) in the region not contained in either of the shapes, creating a predictable parametric space for modulation, interpolation, and remapping.
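The 1D table above can be reproduced in a few lines. This is a minimal sketch (not the Shadertoy’s code) using signed distances to two points as the simplest possible UGFs:

```python
import numpy as np

x = np.arange(-2.0, 5.0)   # sample points along the section, -2 .. +4
A = x - 0.0                # signed distance to a point at x = 0 (shape A)
B = 2.0 - x                # signed distance to a point at x = 2 (shape B)

clearance  = A + B         # constant +2 between the shapes: the gap size
midsurface = (A - B) / 2   # zero halfway between the shapes (note the factor of two)
xi = (A - B) / (A + B)     # two-body field: -3 .. +3 across these samples,
                           # and within [-1, +1] strictly between the shapes
```

Because the clearance is the constant 2 here, the two-body field coincides with the (normalized) midsurface field; in general it is their ratio.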

In the visualization, the sum and difference fields are clearly orthogonal, but why? Algebraically, it works out trivially enough, recalling that UGFs have unit gradient magnitude by definition and that orthogonal vectors dot to zero:

\[\begin{aligned} \inner{(\grad\ugf{A} + \grad\ugf{B}\,)}{(\grad\ugf{A} - \grad\ugf{B}\,)} &= \\ \inner{\grad\ugf{A}\,}{(\grad\ugf{A}\; - \grad\ugf{B}\,)} + \inner{\grad\ugf{B}\,}{(\grad\ugf{A}\; - \grad\ugf{B}\,)} &= \\ \inner{\grad\ugf{A}}{\!\grad\ugf{A}}\; - \inner{\grad\ugf{A}}{\!\grad\ugf{B}}\; + \inner{\grad\ugf{B}}{\!\grad\ugf{A}}\; - \inner{\grad\ugf{B}}{\!\grad\ugf{B}} &= \\ 1 - \inner{\grad\ugf{A}}{\!\grad\ugf{B}} + \inner{\grad\ugf{A}}{\!\grad\ugf{B}}\; - 1 &= 0 \;. \end{aligned}\]

However, those of us from the Tristan Needham school of analysis might prefer a more geometric explanation:

The key observation is that when \(\ugf{A}\,\) and \(\ugf{B}\,\) are UGFs, the sum and difference gradient vectors form the diagonals of a rhombus, and therefore, are orthogonal. Note that this rhombus is contained in the normal cone of the fields’ intersection (green).

The sum and difference fields, \(S = \ugf{A}\, + \ugf{B}\,\) and \(D = \ugf{A}\, - \ugf{B}\,\), produce an orthogonal basis and using the Sampson Norm, \(\sampson{F} \equiv \frac{F}{\norm{\grad{F}}}\), \(\sampson{S}\) and \(\sampson{D}\) form an orthonormal basis, which can be a useful way of approximating distance-to-curve and constructing edge treatments. Perhaps we’ll do a deeper dive on this topic in a future post, but here’s a teaser from some old twitter threads.
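The rhombus argument is easy to check numerically. In this sketch of mine, any two unit gradient directions will do:

```python
import numpy as np

theta_a, theta_b = 0.3, 1.9                        # arbitrary UGF gradient directions
gA = np.array([np.cos(theta_a), np.sin(theta_a)])  # unit gradient of A
gB = np.array([np.cos(theta_b), np.sin(theta_b)])  # unit gradient of B

gS, gD = gA + gB, gA - gB   # gradients of the sum and difference fields
dot = np.dot(gS, gD)        # diagonals of a rhombus: always orthogonal
```

Since \(\inner{gS}{gD} = \norm{gA}^2 - \norm{gB}^2\), the dot product vanishes exactly when both gradients have unit magnitude, i.e., for UGFs.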

The two-body field can be a convenient alternative to SDFs for modulating other fields and interpolating shape. In engineering applications, it can be uniquely handy when mapping a shape from Cartesian space into a new field-driven parametric space. For example, consider the toolpath geometry for the saddle surface below. Two pairs of side walls \(U\) and \(V\) form two two-body fields, which, when multiplied by a constant characteristic length, create a \(UVW\) coordinate space along with the distance to the midsurface of the reference geometry, \(W\).

If working with oriented open or nested shapes, several two-body fields may be combined into larger piecewise continuous maps.

When observing the two-body field above, you might have noticed that the circles inside the blue circle aren’t concentric. Let’s simplify the situation down to the two-body field of two points:

*Apollonian circles and conic sections. Only between two points do we see an Apollonian family of circles, but between a circle and a line, we see the full family of conic sections: ellipses occur near the circle, hyperbolas occur near the line, and a parabola appears at \(\Xi = 0\). Observe that the two-body parameterization creates constant spacing along the horizontal axis containing the circle centers.*

If these circles look familiar, they are members of the family (pencil) of Apollonian circles which sometimes appear in engineering applications. These circles are described by curves that are the constant ratio of distance to two circles, and indeed, the two-body field may be suitably reparameterized:

\[\frac{\df{A}-\df{B}}{\df{A}+\df{B}} = t \quad\iff\quad \frac{\df{A}}{\df{B}} = \frac{1+t}{1-t} \,.\]

When working with SDFs, our two-body parameterization has the handy property that the isocurves are evenly spaced, which suits both engineering and aesthetic applications. (This reparameterization is a one-dimensional Cayley transform, which often appears in hyperbolic geometry.)
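The equivalence is easy to confirm numerically (my sketch): for any positive distances \(dA\) and \(dB\), the two-body parameter and the distance ratio are related by the Cayley transform.

```python
import numpy as np

rng = np.random.default_rng(0)
dA = rng.uniform(0.1, 5.0, 1000)  # distance-to-A samples in the region between the shapes
dB = rng.uniform(0.1, 5.0, 1000)  # distance-to-B samples

t = (dA - dB) / (dA + dB)         # two-body parameter, strictly inside (-1, 1)
ratio = (1 + t) / (1 - t)         # Cayley transform of t: recovers the distance ratio
```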

As noticed by Ponce and Santibáñez, ratios of distance fields generalize conics from points, circles, and planes to arbitrary shapes, and UGFs further generalize those results beyond distance fields.

The sum and difference fields represent clearance and interference when applied to SDFs. When applied to UGFs, the two-body field, the ratio of the sum and difference fields, is a straightforward approach to setting up mapping spaces in engineering applications. Mysteriously, notes of conformal mapping and complex analysis appear to present themselves, which can be useful when working on combined engineering and aesthetic challenges.

]]>3DPrint.com: LatticeRobot Launches a Home for Lattices, Metamaterials, and Textures

Develop3D: LatticeRobot announces community for advancing lattices in products

TCT: LatticeRobot launches engineering community for lattice research and knowledge share

GCL is an incubator founded by Luke Church, myself, and some friends to help advance engineering software, a notoriously challenging business. The best leaders of engineering software companies are not usually the best builders of engineering software, and we close that gap by helping founders build great, scalable products. In the process of building companies, we are also developing background IP in implicit modeling, user interaction, and machine learning to accelerate future application development.

Lattices offer the potential to change the world of advanced manufacturing, but a lack of common knowledge impedes their application. LatticeRobot closes this gap by bringing a community of engineers together in a computationally enhanced working space to aggregate and explore the world’s knowledge of lattices, textures, and related mesoscale geometry and applications.

LatticeRobot’s interactive environment helps engineers explore what combinations of base materials and lattice geometries create data-driven results. It combines lattice geometry and empirical, functional data to produce optimized implicit unit cells that work with modern latticing software. Data supplied by hardware, software, and consulting vendors refers users back to the referenced products and services, helping users discover the products and services that best fit their applications.

Hear it from the team:

While we’ll be busy with LatticeRobot for most of 2023, we have a few hunches about the future of engineering software that we continue to explore. We expect to productize aspects of that research while working with market-driven founders to bring the next generation of design, engineering, and manufacturing software to market. If you have a vision to address an underserved engineering market, consider reaching out.

]]>
$$ \newcommand{\R}{\mathbb{R}} $$
$$ \newcommand{\point}[1]{\mathbf{#1}} $$
$$ \newcommand{\p}{\point{p}} $$
$$ \newcommand{\shape}[1]{\mathcal{#1}} $$
$$ \newcommand{\grad}{\boldsymbol{\nabla\!}} $$
$$ \newcommand{\BoundaryMap}[2][]{\vec{\mathbf{Q}}_{#2}^{#1}} $$
$$ \newcommand{\NormalCone}[1]{\mathcal{N}_{#1}} $$
$$ \newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}} $$
$$ \newcommand{\norm}[1]{\left\|#1\right\|} $$
$$ \newcommand{\abs}[1]{\left|#1\right|} $$
$$ \newcommand{\sgn}{\mathop{\rm sgn}} $$
$$ \newcommand{\func}[1]{\mathop{\rm #1}\nolimits} $$
$$ \newcommand{\DF}{\mathfrak{D\hspace{-0.2em}\scriptstyle{F}\,}} $$
$$ \newcommand{\Shape}{\boldsymbol{\Omega}} $$
$$ \newcommand{\twobody}[2]{\Xi_{#1}^{#2}} $$
$$ \newcommand{\inner}[2]{#1 \cdot #2} $$
$$ \newcommand{\wavysmile}{\raise{-0.2ex}{\smallsmile}} $$
$$ \newcommand{\wavyfrown}{\raise{ 0.2ex}{\smallfrown}} $$
$$ \newcommand{\wavy}{\wavysmile\!\wavyfrown\!\wavysmile\!\wavyfrown\!\wavysmile} $$
$$ \newcommand{\sampson}[1]{\hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(90deg) scale(0.4, 0.3) }{\wavy}
\hspace{-1.4em} #1 \hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(-90deg) scale(0.4, 0.3)}{\wavy}
\hspace{-1.4em}} $$
$$ \newcommand{\fieldset}[2]{#1 \hspace{-0.65em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planesymbol}{~ \style{display: inline-block; transform: scale(2.24, 0.66)}{\diamond}} $$
$$ \newcommand{\df}[1]{\fieldset{#1}{\leftrightarrow}} $$
$$ \newcommand{\ugf}[1]{\fieldset{#1}{-}} $$
$$ \newcommand{\augf}[1]{\fieldset{#1}{\sim}} $$
$$ \newcommand{\plane}[1]{\fieldset{#1}{\planesymbol}} $$
$$ \newcommand{\fieldsetSm}[2]{#1 \hspace{-0.48em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planeSm}[1]{\fieldsetSm{#1}{\planesymbol}} $$
$$ \newcommand{\cupset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\capset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\minmaxCup}{\;\vee\;} $$
$$ \newcommand{\minmaxCap}{\;\wedge\;} $$
$$ \newcommand{\arbitraryCup}{\cupset{\vee} {\scriptstyle{\raise{ 0.4ex}{*}}}} $$
$$ \newcommand{\arbitraryCap}{\capset{\wedge}{\scriptstyle{\raise{-0.4ex}{*}}}} $$
$$ \newcommand{\distanceCup}{\cupset{\cup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\distanceCap}{\capset{\cap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\euclideanSymbol}{\hspace{-0.02em}\raise{0.1ex}{\scriptscriptstyle{+}}} $$
$$ \newcommand{\euclideanCup}{\cupset{\cup}{\euclideanSymbol}} $$
$$ \newcommand{\euclideanCap}{\capset{\cap}{\euclideanSymbol}} $$
$$ \newcommand{\chamferMinmaxCup}{\;\sqcup\;} $$
$$ \newcommand{\chamferMinmaxCap}{\;\sqcap\;} $$
$$ \newcommand{\chamferDistanceCup}{\cupset{\sqcup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferDistanceCap}{\capset{\sqcap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferEuclideanCup}{\cupset{\sqcup}{\euclideanSymbol}} $$
$$ \newcommand{\chamferEuclideanCap}{\capset{\sqcap}{\euclideanSymbol}} $$

About a year later, at the earliest stages of designing a CAD system that would become known as “SpaceClaim,” I caught up with Frisken and Perry, who were interested in better understanding the viability of ADFs in engineering applications. At MERL, Frisken and Perry mentored me in implicit modeling, and I had the pleasure of working with Kizamu, their prototype modeler that delivered real time interaction at remarkably high fidelity compared to the state-of-the-art “voxel” kernels in Electric Image Amorphium and SensAble Freeform. Although we established that ADFs were not ready for mechanical design, we gained enough confidence to pursue 2D applications. ADFs and derived technology flourished in 2D, proliferating in font representations promoted by (Agfa) Monotype that targeted mobile devices and in drawing applications such as Mischief, authored by Frisken and acquired by The Foundry.

In the 2000s, precise B-rep kernels saw significant development to enable interactive modeling. SpaceClaim and its cohorts’ direct solid modeling paradigm caused local operations to be exercised far more than the history-based approach, and vendors spent years gusseting their B-rep kernels for the purpose. Although B-rep modeling became much more robust, its boundary-based (Lagrangian) nature ensured that a long tail of edge cases would always detract from uninterrupted, interactive B-rep editing.

Direct modeling permitted a more flexible data model, and SpaceClaim’s architect, David Taylor, fastidiously and passionately built a beautiful API that was a sort of homage to B-reps themselves. Inspired by the first generation of generative design tools such as Grasshopper and artists such as Jessica Rosenkrantz and Jesse Louis-Rosenberg at Nervous System and implicit art pioneer Bathsheba Grossman, I attempted to use the API to produce generative art on top of B-reps, but became frustrated as I only further exposed their weaknesses. For example, successfully unioning a few thousand cylinders of identical diameter into a lattice requires adjusting radii by microns to enable some Booleans to succeed.

*This boundary representation, produced from the edges of the non-manifold mesh, contains 8760 faces, 24,270 edges, and 14,181 vertices. About two-thirds of the faces are small triangles producing features below fabrication resolution. To successfully boolean this solid shape in a contemporary B-rep modeler, the process must be decomposed into a sequence of unions of subsets of the initial shapes, and some faces must be slightly offset to remove singularities. This geometry, the mesh of a Boy’s surface, is derived from the optimal minimax eversion by Francis and Sullivan and was modeled and rendered in ANSYS SpaceClaim.*

In 2014, the 3D printing juggernaut Stratasys acquired the incredible team at GrabCAD, with whom I was proud to serve, and provided the challenge of making their prototyping technology more suitable for end-use parts. Few tools were available at the time that would permit volumetric control of structures, but ImplicitCAD and Monolith enabled the creation of some spatially varying lattices using simple distance functions, generating compelling results at small scale.

In June of 2015, our head of software, Jon Stevenson, introduced me to a hardware research team working on an electrophotography-based printer that needed their soluble support structures to dissolve faster. They were printing model material inside a solid block of support material, so we discussed adding lattices that would transition to solid near the part. Although I wasn’t sure at the time how to construct a distance field even in 2D, I was able to hack a simple slicer out of a GPU-based art project and compute a distance-like field by applying Gaussian blur to cross sections of a mesh, which could then modulate implicit gyroids. While the slices looked decent to me, the hardware team was initially underwhelmed, as they also needed the supports to transition to solid beneath the part. Fortunately, I was aware of the existence of depth buffers, and somehow produced a stack of slices by combining the depth buffer and Gaussian blur into a pseudo-2.5D distance field. The team fabricated a part using the first set of slices delivered, which printed successfully with supports dissolving orders of magnitude faster than before, removing a serious business impediment. We named the new slicer “Photon.”
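The blur trick can be sketched in a few lines of 2D. This is a toy reconstruction of the idea, not the Photon code; the blur widths, thresholds, and the 2D gyroid section are my own choices:

```python
import numpy as np

def box_blur(img, k=15, passes=3):
    """Repeated box blurs approximate the Gaussian blur used on each slice."""
    kernel = np.ones(k) / k
    for _ in range(passes):
        img = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, img)
        img = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, img)
    return img

n = 128
y, x = np.mgrid[0:n, 0:n] / n
part = ((x - 0.5)**2 + (y - 0.5)**2 < 0.04).astype(float)  # binary model cross-section

field = box_blur(part)   # distance-like falloff away from the part, in [0, 1]

s = 60.0
gyroid2d = np.sin(s*x)*np.cos(s*y) + np.sin(s*y)*np.cos(s*x)  # a 2D gyroid-like section
support = gyroid2d < 4.0 * field   # lattice densifies toward solid near the part
```

Thresholding the lattice field against the blurred occupancy is what makes the supports transition from sparse gyroid far away to fully solid against the part.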

The next week, I found myself talking to a different team on the other side of the world with a novel two-material system that required printing a minor fraction of support material, or a step of their process would explode. The code base of the old art project already dynamically loaded the shaders doing the heavy lifting, so I wrote a new shader that used the pseudo distance field (with a second depth pass from the top) to modulate subdivision of space-filling solids with thin gaps. The first prints from this approach were also successful.

Having supported two hardware teams on a whim, I sheepishly approached GrabCAD’s head of engineering, Amos Benninga, to inform him of the liabilities I’d potentially created. He was delighted, and offered me a small team if the stack could also support PolyJet, Stratasys’ flagship multi-material jetting technology. The next day, I handed him a Photon-sliced PolyJet part.

Steve DeMai joined as lead engineer on Photon, evolving it from a Node/Electron art project to a professional, optimized C# application. I figured out how to replace the Gaussian blur with a narrow-band distance field by brute force sampling, but performance was terrible. Steve implemented several methods, ultimately landing on the sweep-based lower envelope distance algorithm and, with Nur Arad, realized he could precondition it with the depth-buffer, giving us a full resolution, accurate, signed 3D distance field of entire slices in real time.
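The 1D core of a sweep-based lower-envelope distance transform (in the style of Felzenszwalb and Huttenlocher) looks like this; a sketch of the kind of algorithm mentioned, not Photon’s implementation:

```python
import numpy as np

INF = 1e18  # large finite "empty" value; avoids inf - inf NaNs during the sweep

def edt_1d(f):
    """Squared-distance transform of sampled costs f (0 = occupied, INF = empty)."""
    n = len(f)
    v = np.zeros(n, dtype=int)                       # sites of envelope parabolas
    z = np.empty(n + 1); z[0], z[1] = -np.inf, np.inf  # envelope breakpoints
    k = 0
    for q in range(1, n):                            # forward sweep: build envelope
        s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        while s <= z[k]:                             # new parabola shadows the stack top
            k -= 1
            s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        k += 1
        v[k] = q
        z[k], z[k+1] = s, np.inf
    d = np.empty(n)
    k = 0
    for q in range(n):                               # second sweep: read distances off
        while z[k+1] < q:
            k += 1
        d[q] = (q - v[k])**2 + f[v[k]]
    return d

f = np.full(10, INF)
f[[2, 7]] = 0.0          # occupied samples at indices 2 and 7
d2 = edt_1d(f)           # squared distance to the nearest occupied sample
```

Running the same pass along rows, then feeding the result to a pass along columns, lifts this to 2D and 3D; seeding `f` from a depth buffer is the preconditioning trick described above.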

*For this limited compliance assembly, the untrimmed individual repeating unit was produced with Monolith, then assembled, trimmed to shape, and packaged for manufacturing in SpaceClaim. (We’ll eventually get to a post on such ramped, folded structures.)*

As far as I’m aware, this was the first point in history that an implicit modeler produced distance fields from imported meshes at the full resolution of high-fidelity manufacturing hardware. It’s a moment I had estimated far further into the future, but it was made possible by the hardware’s relative coarseness compared to traditional manufacturing processes. As impressive as PolyJet’s 600DPI resolution may have been, the \(\sim 40\,\mu m\) resolution is about two orders of magnitude coarser than the worst case in most engineering software implementations. Over three dimensions, and coupled with the fact that we only needed to slice as fast as printers could print, our modeling stack delivered end-to-end results with eight or nine orders of magnitude less computing than I’d predicted. The future was now!

While Photon’s main job was to prototype techniques to slice parts, generate supports, and produce output for novel printing processes, my interest lay in generative solid modeling. Our team had two interns just out of high school, Brenna Sorkin and Bradley Stevenson, who started implementing our modeling library. Brenna sorted through Vadim Shapiro’s notes on Rvachev’s R-functions to implement blends, and Bradley found expressions for primitives that somehow had more `if` statements than the primitives had topology. The art project had a bunch of noise functions copied from all over the internet, which, combined with periodic lattices, TPMS such as gyroids, and Brenna and Bradley’s contributions, constituted a pretty awesome modeler. Much of that prototype library gave way to Inigo Quilez’s elegant haiku of distance field primitives, about which I was oblivious until at least 2016.

With three bitmap-based printing platforms initially supported, we hooked up marching squares and cubes to produce high-resolution output for toolpath-based printers and CAD. There was strong affinity with Stratasys’ need for novel support structures and my interest in applying generative thinking and implicit modeling to mechanical engineering challenges. New materials and hardware configurations created endless demand to create more slicers that could synthesize supports and fine-tune process issues. Creating working hardware and electronics was typically the main concern of the engineering teams, so I found myself not only producing slices, but also working through the process engineering, a discipline rife with contradictions and exceptions. (For example, it’s common to both want to support a model from below and also create an in-layer air gap between the model and support materials. In regions with sloped overhang, there’s a contradiction between the air and the supports, and different process situations call for different choices.)

Support structures require precise control over offsets and taper, some of the most error-prone operations in traditional B-rep-based CAD technology, so we developed techniques to handle them correctly. Challenges like the H-2000 prototype separated the problems of gravitational and build process continuity, resulting in support geometry that resembled multi-pull injection molded parts. Meanwhile, we had other interesting challenges that weren’t easily described with conventional distance fields, like needing to know the distance from the silhouette of the projection of a shape to create the coated PolyJet supports.

Only armed with signed distance fields, we would have been forced to approximate some of the fields needed to construct these results. Coincidentally, the intermediate data Photon used to construct the 3D distance fields also facilitated computation of the needed auxiliary fields. For example, in the process of building the 3D field, we could also generate the 2D distance field of data only on the plane. Using the depth buffer information, we produced a 1D signed directional distance to our shape from above or below. Similarly, we could construct a distance to the contour or the silhouette in a given slice plane. These “foliated fields” (future post) became indispensable in enabling draft and making appropriate choices in situations like the underhang contradiction mentioned above.

*A texture-mapped rabbit (right) intersected with gyroid TPMS, with the intersection blended, and printed on a Stratasys PolyJet jetting system with five color resins. The original model is a sample provided with Materialise Magics software and was sliced and rendered with Implicit Space (right).*

For each of these fields, Steve had provided a data structure that included not just the distance, but also the closest point and therefore the gradient. At the closest point, we knew the surface texture color, surface normal, and ID of the body and face topology. We could support an arbitrary number of juxtaposed fields for different bodies or body types. When manual input was required in delicate processes, we could easily tune the builds using color maps or 3D Photoshop jobs. Each slicer could be configured with custom parameters and options to enable a surprising amount of control and interactivity.
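A minimal sketch of such a query record (the field names are my guesses at the shape of the structure, not Photon’s actual API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FieldSample:
    """One field query: the distance plus everything known at the closest point."""
    distance: float            # signed distance to the shape
    closest_point: np.ndarray  # nearest point on the boundary
    gradient: np.ndarray       # field gradient, implied by the closest point
    normal: np.ndarray         # surface normal at the closest point
    color: tuple               # surface texture color at the closest point
    body_id: int               # topology: which body
    face_id: int               # topology: which face

sample = FieldSample(1.5, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 1.0]), (255, 255, 255),
                     body_id=0, face_id=3)
```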

The ultimate application of Photon came from Daniel Dikovsky, who had been experimenting with a family of eight PolyJet materials that, when printed together, could reproduce the material properties of human bone, soft tissue, and blood. We would layer low-fidelity, segmented MRI data of organ structures, assign each layer a tissue type, and paint on layers of embellishment as needed. The biomimetic slicer would then add mesoscale structure, interpolate it between the layers of geometry and modulate it with metadata fields, knitting together complex assemblies of organ structures. The geometry would often be imprecise and overlap, and we often needed to fix clearance and interference between the meshes. It was in this setting that I first started to compare distance fields, investigating the difference field (midsurface), sum (clearance), and their ratio, the “two-body field” that interpolates between shapes (topic of next post). We were quickly achieving results that would have been inconceivable with meshes alone, such as automatically dilating small vasculature to meet minimum manufacturing conditions or designing intentional structural defects to mimic pathological situations.

*A biomimetic femur produced with prototype software and materials of the Stratasys PolyJet Digital Anatomy system. The wall thickness varies along the length of the bone, and the trabeculae (lattice) on the proximal epiphysis (round end), which would be too fine to produce at scale, are homogenized via cellular (Worley) noise.*

Eventually, we also solved toolpathing, cost estimation, color calibration, and material property management challenges in Photon, much of which lives on in Stratasys’ commercial products and its spinoffs.

In the spring of 2017, I met Bradley Rothenberg, founder of nTop (née “nTopology”), at the COFES gathering of engineering software technologists. nTopology’s first product, Element, was a runaway success for latticing 3D printed models, and Brad was interested in expanding the vision to a broader set of applications. I started to advise the company as they were raising their A round, and we upgraded the pitch to generalize generative technology beyond additive. With funding achieved, I visited nTopology’s original office on Lafayette Street in SOHO, New York City, to help align product strategy to the expanded scope. Element had been hitting a resolution wall with the popular and excellent OpenVDB kernel, so we had a whiteboard discussion over the advantages of B-reps, meshes, and implicits. The team decided to give implicits a shot, creating a beautiful block-based user experience with evaluation initially powered by Matt Keeter’s libfive library.

*An application of UGFs on spatially-varying, multiscale, geometry designed for 5-axis deposition and created and rendered in nTop (left), and an example from industry produced by Electroimpact SCRAM. UGFs maintain constant gaps between walls while the surrounding geometry is warped into a curved space, the map of which is defined by a distance field and two orthogonal two-body fields. This nTop demo video demonstrates the approach.*

The results were impressive, as nTop could render in seconds 3D models that would take Photon hours. I joined full time to build nTopology’s product team and ready the product for market before settling in as CTO, where I spent most of my time applying “nTop” to diverse mechanical design challenges. nTopology’s engineering team continued to accelerate performance and increase resolution, and the flexibility of the block system enabled us to quickly prototype new applications.

The initial set of nTop implicit modeling routines included a decent set of primitives and blended Booleans that still represent the state of the art. As with Photon, careless use of these blocks would produce unintended results. While implicits may never fail, garbage in is still garbage out, perhaps without the error message one would see doing something similar with boundary representations. Just as with SpaceClaim, where interactive modeling exposed weaknesses in the B-rep paradigm, nTop’s expedient user interface and diverse user base exposed usability issues with implicit modeling. Without care, common modeling operations such as offsets, Booleans, and blends, while unbreakable, can leave unexpected artifacts in the remote field that may appear as defects in subsequent operations. In addition, popular implicit fields like gyroid lattices are not distance fields, creating further aberrations. To make implicit modeling accessible, nTopology created specialized tools that do the right thing in everyday workflows.
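The gyroid aberration is easy to demonstrate: a true distance field has gradient magnitude 1 everywhere, while the gyroid implicit’s gradient magnitude swings between 0 and well above 1 (a quick check of my own):

```python
import numpy as np

def gyroid(p):
    """The standard gyroid implicit — popular, but not a distance field."""
    x, y, z = p
    return np.sin(x)*np.cos(y) + np.sin(y)*np.cos(z) + np.sin(z)*np.cos(x)

def grad_norm(p, h=1e-5):
    """Central-difference gradient magnitude."""
    g = [(gyroid(p + h*e) - gyroid(p - h*e)) / (2*h) for e in np.eye(3)]
    return float(np.linalg.norm(g))

steep = grad_norm(np.zeros(3))          # sqrt(3) ~ 1.73 at the origin
flat  = grad_norm(np.full(3, np.pi/4))  # ~0: a critical point of the gyroid
```

Offsets and blends that implicitly assume unit gradient magnitude over- or under-shoot by exactly these factors, which is where the remote-field artifacts come from.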

Over time, as engineering tools become higher fidelity and increasingly interactive, they propel their users to higher vantages from which they gain better control over their work. When waiting overnight for a result, as with Photon, the surprise of an oddly-shaped blend causes less notice than in real-time physically-based rendering in nTop. When making support structures and mechanical demos with Photon, such details were negligible, but when designing real parts in nTop, they were hard to avoid. With the initial focus on latticing and topology optimization use cases, the core tool in nTop delivered results, and several of us strove to facilitate trickier situations such as modulating warped lattice parameters.

To work around early performance and limitations in nTop that have since been overcome, I prototyped a new gradient-aware stack on top of a new project Steve DeMai was independently pursuing, “Implicit Space”. (The work was sufficiently contemporaneous with Quilez’s post on gradients that I yet again just missed using his results.) Over a few releases, nTop received more tech to compensate for transformations, culminating in a new pipeline that abstracted the grunt work of modulating warped lattices. With a “new lattice pipeline” and breakthrough interactivity in nTop’s third release in the spring of 2021, nTop felt like a complete product for lattices and top opt. Although there wasn’t a single technology that addressed the issues, a common theme became keeping the gradient magnitudes (Lipschitz coefficients) near unity.

*A Worley noise field displaces a torus. On the left, there are overhangs with respect to surface torus gradient and disconnected topology. On the right, the noise field has been composed with the torus’ boundary map, so no overhangs or disconnected topology are present.*

Through the COVID years, I dedicated more time to working on recapitulating the full suite of rounds, chamfers, and drafts expected in conventional boundary representation modelers for mechanical design. Although the \(\func{Radiate}\) operation (remapping through the boundary map) had proved useful at Stratasys for monotonic texturing, I found it could construct isocline draft surfaces as well as rolling-ball rounds via what I would learn to be called “the normal cone” (last post). At the time, the constructions were missing the correction factors needed to conserve unit gradient magnitudes, so I reached out for assistance to Vadim Shapiro, the peripatetic professor from UW-Madison, CEO of the pioneering simulation startup Intact Solutions, and editor of The CAD Journal. For over a decade, Vadim has helped fill gaps in my analytical acumen while I’ve provided business advice to Intact. He quickly responded to my long email, which was brief on descriptions but included lots of pretty pictures of cross sections. His response was something like:

“I have no idea what you’re talking about.”

So I tried to explain a bit more about the radiated field construction, because I thought I was just looking for a bit of standard differential geometry. However, he replied with:

“I still have no idea what you’re talking about, but your techniques appear to be new. It sounds like you are trying to say something like…” and he proceeded to state a little proposition and a proof. “You need to write a paper. Let’s talk on Monday.”

On Monday, Vadim started to cajole me into this project. He promised to help, but only if I stopped sending him rambling emails, typeset everything in \(\LaTeX\), and produced a paper. He foretold that with this endeavor, I would achieve a much clearer understanding of the concepts and produce higher fidelity results. I spent the fall of 2022 and winter of 2023 expanding this “paper” to a somewhat technical manuscript, which feels like it’s about half done, but I keep finding practical and curious side roads to explore. Although the work so far has been receiving good feedback, most of my friends and colleagues prefer a simplified treatment, which is why I’ve decided to ship the most fun and useful parts of the book in these blog posts.

*If the controls aren’t working on your device, click through to ShaderToy*

With `SDF` enabled and `offset` non-negative, the *boundary map* arrow always points to the boundary of the distance intersection of two planes, \(\df{F} = \plane{A} \distanceCap \plane{B}\). With `SDF` disabled, the boundary map of points in the region opposite the vertex, the *normal cone* of the intersection, fails to point at the boundary. The normal cone, shown in green, is the set of points closest to the sharp intersection. Similarly, with `SDF` enabled and a negative `offset`, the boundary map of points in the normal cone of the intersection traces out the classic *swallowtail* failure mode of offsetting chains of curves with fillets.

Let’s nail down some definitions to at least a SIGGRAPH level of rigor. For nuanced definitions, see Luo, Wang, and Lukens’s framing of SDFs using Variational Analysis.

Fields are functions mapping a smoothly curved space, usually \(\R^n\), to the affinely extended reals \(\overline\R \equiv \R \cup \{\pm\infty\}\). If you haven’t seen extended reals before, it turns out that you can do the hokey-pokey with analysis and simply define the ends of the real number line to be closed instead of open, even dividing (anything other than zero) by zero; feel free to resent your third grade teachers and find new brilliance in IEEE floating point representations.

Unit Gradient Fields (UGFs) are simply fields with unit gradient magnitude, everywhere the gradient exists. Although we usually use them to represent shapes, there is no need for them to have any non-positive values, as adding a constant to a UGF doesn’t change its gradient.

Distance fields (DFs) are defined by the difference of the unsigned distance to a set minus the unsigned distance to the set’s complement, noting that the distance to a set from its inside is zero. The piecewise definition is \(C^1\) continuous (where differentiable) across the set’s boundary. DFs are UGFs when defined by *proper* sets. (The two improper sets, the null set and the set of all points in a space, generate the distance fields \(+\infty\) and \(-\infty\), respectively.) When DFs have an interior, we will call them “signed” *SDFs*. (We avoid the term “UDF” due to its similarity to “UGF”.)

DFs contain more information about a shape than a UGF representing the same shape. For example, the sum of two DFs represents the local clearance between parts. These properties derive from the key fact about DFs: their boundary map always points to their boundary.

The boundary map, represented by the black arrow in the visualization, is simply the map from point \(\p\) to the closest point on the surface of a set represented by a DF \(\df{F}\):

\[\BoundaryMap{\df{F}}(\p) = \p - \df{F} \;\, \grad\df{F}\]

Distance fields can be thought of as the magnitude of the boundary map vector fields, and any UGF that is a boundary map represents a distance field.
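As a concrete check of this formula, here’s a minimal plain-Python sketch (the helper names are mine, not from any library) applying the boundary map to a circle’s SDF:

```python
import math

def circle_sdf(p, r=1.0):
    # Signed distance from point p to a circle of radius r at the origin.
    return math.hypot(p[0], p[1]) - r

def circle_grad(p):
    # Analytic gradient of the circle SDF: unit magnitude away from the origin.
    m = math.hypot(p[0], p[1])
    return (p[0] / m, p[1] / m)

def boundary_map(p, f, grad):
    # p - F(p) * grad F(p): for a true DF, the closest point on the boundary.
    d, g = f(p), grad(p)
    return (p[0] - d * g[0], p[1] - d * g[1])

q = boundary_map((3.0, 4.0), circle_sdf, circle_grad)
# q = (0.6, 0.8), which lies on the unit circle
```

The returned point sits exactly on the zero isosurface, consistent with the distance being the magnitude of the boundary map displacement.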

We can construct the normal cone as illustrated above using the boundary map. Given two SDFs \(\df{A}(\p)\) and \(\df{B}(\p)\), the normal cone for \(\ugf{F} = \df{A} \distanceCap \df{B}\) becomes:

\[\NormalCone{\ugf{F}} = -\df{A}\left(\BoundaryMap{\df{B}}(\p)\right) \minmaxCap -\df{B}\left(\BoundaryMap{\df{A}}(\p)\right)\]

Plane fields are a special case of SDFs to planar half-spaces with the special property that they are everywhere differentiable.
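To make the construction tangible, here’s a tiny sketch (my own hypothetical helper names) using two axis-aligned plane fields, where the normal cone of the corner at the origin should be exactly the quadrant opposite both half-spaces:

```python
def plane_a(p):
    # Plane field for the half-space x <= 0: the SDF is simply x.
    return p[0]

def plane_b(p):
    # Plane field for the half-space y <= 0.
    return p[1]

def bmap_a(p):
    # Boundary map of A projects onto the plane x = 0.
    return (0.0, p[1])

def bmap_b(p):
    # Boundary map of B projects onto the plane y = 0.
    return (p[0], 0.0)

def normal_cone(p):
    # max(-A(bmap_B(p)), -B(bmap_A(p))); non-positive inside the normal cone.
    return max(-plane_a(bmap_b(p)), -plane_b(bmap_a(p)))

# normal_cone is non-positive exactly in the quadrant x >= 0, y >= 0:
# the set of points whose closest boundary point is the sharp corner.
```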

In this series, we’ll use a secondary notation to remind ourselves of the properties of fields:

| Field type | Notation |
| --- | --- |
| Plane field | \(\plane{P}\) |
| Distance field | \(\df{D}\) |
| Unit gradient field | \(\ugf{U}\) |
| Unit gradient field at zero | \(\augf{A}\) |
We’ll refer to the latter as *approximate UGFs* (AUGFs). Any field with non-vanishing gradient can be converted to an AUGF via *Sampson normalization* (Sampson 1982), dividing the field by its gradient magnitude.

We will often generalize properties of planar intersections to behavior near the zero isosurface of AUGFs.
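As I read it, Sampson normalization is the first-order estimate \(F / \norm{\grad F}\). Here is a plain-Python sketch (numeric gradient; the names are mine) on the implicit circle \(x^2 + y^2 - 1\), which is not a distance field:

```python
import math

def F(x, y):
    # Implicit unit circle; gradient magnitude is 2|p|, so not a UGF.
    return x * x + y * y - 1.0

def sampson(x, y, eps=1e-6):
    # F / |grad F| with a central-difference gradient (assumes it doesn't vanish).
    gx = (F(x + eps, y) - F(x - eps, y)) / (2 * eps)
    gy = (F(x, y + eps) - F(x, y - eps)) / (2 * eps)
    return F(x, y) / math.hypot(gx, gy)

# Near the boundary, sampson(1.1, 0) ≈ 0.095 approximates the true
# distance 0.1 far better than the raw field value F(1.1, 0) = 0.21.
```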

So far, we’ve seen minmax, distance-based, and, in the last post, chamfered Booleans that preserve UGFness. There are also many useful fast and reliable Boolean operations that produce results that are not UGFs.

*(Direct link to Shadertoy if preview failing.)*

We’re going to need some notation to keep the different flavors of Booleans straight. Let’s focus on the union or \(\min\) operation, as the intersection can be defined as the complement of the union of the complements of the inputs:

\[\max(A, B) = -\min(-A, -B) \;.\]

First, nodding to Rvachev and logic functions, we can define the minmax Booleans \(\minmaxCup\) and \(\minmaxCap\) using \(\min\) and \(\max\). Similarly, we can define the DF-preserving Booleans, \(\distanceCup\) and \(\distanceCap\), which are defined piecewise across the boundary of the normal cone. Outside of the normal cone, the distance result is the same as the minmax Booleans, but inside it sees the distance to the curve of intersection.
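A minimal sketch of the minmax pair (plain Python; the names are illustrative, not from any library), including the De Morgan identity above:

```python
import math

def circle(cx, cy, r):
    # Exact SDF of a circle centered at (cx, cy).
    def f(x, y):
        return math.hypot(x - cx, y - cy) - r
    return f

A = circle(-1.0, 0.0, 1.0)
B = circle(+1.0, 0.0, 1.0)

def minmax_union(x, y):
    return min(A(x, y), B(x, y))

def minmax_intersection(x, y):
    # Complement of the union of complements: max(A, B) = -min(-A, -B).
    return -min(-A(x, y), -B(x, y))
```

At remote exterior points the minmax union still reads the distance to the nearer circle; the identity makes intersection nothing more than negation and \(\min\).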

It’s worth comparing the DF blend to common implicits blends in the graphics community. Korndörfer gives perhaps the most elegant, in which the entire remote quadrant of the intersection receives the blend instead of the normal cone, a subset of it:

\[\ugf{A} \euclideanCup \ugf{B} \equiv \max\!\left(\ugf{A} \minmaxCup \ugf{B}, 0 \right) \;-\; \norm{\left(\min(\ugf{A}, 0),\strut\min(\ugf{B}, 0) \right)} \;,\]

where \(\norm{\cdot}\) is the Euclidean norm of a vector of fields. We’ll use variants of the traditional union and intersection symbols for blended or rounded intersections.
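Transcribing the blend above into plain Python (a sketch, not Korndörfer’s code):

```python
import math

def euclidean_union(a, b):
    # max(min(a, b), 0) - ||(min(a, 0), min(b, 0))||:
    # equal to min(a, b) wherever either field is non-negative,
    # and a Euclidean blend of the two depths where both are negative.
    return max(min(a, b), 0.0) - math.hypot(min(a, 0.0), min(b, 0.0))

# euclidean_union(2.0, 3.0)   == 2.0   (outside both: plain minmax union)
# euclidean_union(-2.0, 1.0)  == -2.0  (inside one only: still minmax)
# euclidean_union(-3.0, -4.0) == -5.0  (inside both: blended depth)
```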

Quilez provides several examples of “smooth minimum functions” that blend the entire discontinuity typically produced by \(\min\). With a constant blend radius, they do not preserve the logic of \(\min\) and \(\max\), but by using an estimate of distance-to-curve for their intersection, we can produce a logic-preserving minimum. This radius variation works on Quilez’ polynomial and exponential \(\func{smin}\):

\[\func{smin}\left(\ugf{A}\,, \ugf{B}\,, \abs{\ugf{A} \, \ugf{B} \; (1 - \grad{\ugf{A}} \,\cdot \grad{\ugf{B}})}\right) \;.\]

The sum and difference of fields and the distance to intersection curves will be further explored in future posts.
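Here’s one way the variable radius can drop into a Quilez-style polynomial \(\func{smin}\) (my transcription; treat it as a sketch, not a reference implementation):

```python
def smin(a, b, k):
    # Quilez-style polynomial smooth minimum; k is the blend radius.
    if k <= 0.0:
        return min(a, b)
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def logic_preserving_min(a, b, grad_dot):
    # The radius |a * b * (1 - grad A . grad B)| vanishes on either zero set,
    # so the blend never moves the boundary of the union.
    return smin(a, b, abs(a * b * (1.0 - grad_dot)))
```

With two orthogonal plane fields (`grad_dot = 0`), points on either plane’s boundary keep their minmax value while the interior of the corner blends.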

Rvachev, as popularized by Shapiro, first identified and classified the concept of logic-preserving implicit functions, named *R-functions* after him. In this example, we’re showing \(\vee_0\) in Rvachev’s notation:

Note that \(\vee_0\) is an AUGF, despite its remote field departing quickly from unit magnitude.

For most applications not requiring UGFs, the Euclidean blend works well, so we won’t continue with Quilez or Rvachev blends in this series. We will get to chamfers in a future post (which use squared notation due to the extra edge), so let’s define a common set of notation for the family of R-function Booleans available in edge treatments.

| Boolean family | Union | Intersection |
| --- | --- | --- |
| Minmax | \(\minmaxCup\) | \(\minmaxCap\) |
| Distance-preserving Boolean | \(\distanceCup\) | \(\distanceCap\) |
| Euclidean blend (Korndörfer) | \(\euclideanCup\) | \(\euclideanCap\) |
| Chamfer (minmax intersections) | \(\chamferMinmaxCup\) | \(\chamferMinmaxCap\) |
| Chamfer (distance-preserving) | \(\chamferDistanceCup\) | \(\chamferDistanceCap\) |
| Chamfer (Euclidean blend) | \(\chamferEuclideanCup\) | \(\chamferEuclideanCap\) |
| Arbitrary (any of the above) | \(\arbitraryCup\) | \(\arbitraryCap\) |

As a preview to future posts on edge treatments, see these two social media threads:

*While I’m a fan of John Nash’s work, this portrayal never landed for me. That said, I did question my sanity while working on the diagram below.*

Given a few different grades of fields and a set of operators, one might wonder if there’s any structure worth noting. For example, the distance-preserving Boolean maps DFs to DFs, while the minmax Boolean maps DFs to UGFs. Here’s my attempt to document the structure of the system, with a few operations to be defined in later posts:

DFs and UGFs to a shape can only differ in the shape’s normal cones, which arise on non-smooth boundaries. In this post and the last, we focused on these sharp (edge-like) regions and offsets to help clarify that not all fields with unit gradient magnitude are DFs. There’s more fun to be had with edges and edge treatments, but perhaps in the next posts we’ll visit some of the tricks that work only with UGFs and some techniques for creating UGFs to new shapes.

Please keep the feedback coming!

Take these three examples of an offset rectangle, created using three different “line joining” approaches that date back to the early days of 2D graphics and are built into your browser:

The red rectangle on the left uses extensions of the rectangle’s edges. This approach is similar to what we expect from B-rep and most mesh modelers, where extra faces are only added when needed. One might not notice that the offset vertices are actually a factor of \(\sqrt{2}\) farther from the vertex than the sides. In engineering applications, we often prefer the topological simplicity of such *naturally extended* intersections to geometric correctness.

In the middle, the green rectangle offsets geometrically, with a rounded corner the exact Euclidean distance to the vertex. When modeling with SDFs, we expect these geometric offsets, but the result may surprise engineers who prefer the simplicity of naturally extended corners.

As a third example, although not normally considered a common option during offset in engineering applications, offset can produce a chamfered result. Are there other possibly legitimate options for how one might want to treat a corner?

When modeling with fields, we often “offset” a field \(F\) as an alternative to offsetting the boundary of the shape \(\shape{F}\) it represents.

(In our notation, one can convert a shape to a distance field via \(\df{F} = \DF \shape{F}\) or extract a shape from the non-positive region of a field via \(\shape{F} = \Shape F\). We also annotate planes \(\plane{P}\), distance fields \(\df{D}\), and unit gradient fields \(\ugf{G}\) to distinguish them from general fields \(F\). The next post will cover these topics.)

We define the *offset of the field* \(F\) *by constant distance* \(\lambda\) as the field \(F - \lambda\), so the distance the zero isosurface moves depends on the gradient of \(F\). One can think of the offset behavior as baked into the field’s geometry itself, not something one does to the geometry, as with boundary modeling. If we’d like different edge treatments on different edges, somehow we need to produce fields with the proper behavior encoded in them ahead of time.
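Taking the offset as the field minus the constant (the usual convention; this is a plain-Python sketch with illustrative names), the dependence on gradient magnitude is easy to see:

```python
import math

def circle_sdf(x, y, r=1.0):
    # A unit-gradient field: exact signed distance to a circle.
    return math.hypot(x, y) - r

def circle_implicit(x, y, r=1.0):
    # Same zero set, but not a UGF: gradient magnitude is 2|p|.
    return x * x + y * y - r * r

def offset(f, lam):
    # Offset a field by a constant: the new zero set is the lam-isosurface of f.
    return lambda x, y: f(x, y) - lam

# Offsetting the UGF by 0.5 moves the boundary exactly 0.5 (to r = 1.5);
# offsetting the implicit moves it only to r = sqrt(1.5) ≈ 1.22.
```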

For example:

Clearly, only the top right corner represents an SDF. In this example, the gradient, where defined, always has unit magnitude, as observed by the 1:1 slope in the field’s *epigraph*, \(\planeSm{z} - F(\planeSm{x}, \planeSm{y})\):

A similar situation presents itself in 3D. How the edges of the cube propagate when this spherecube is rounded is predetermined by the field surrounding the cube, yet it’s not visible on the nominal geometry:

The left option is most common with B-rep modelers when rounding an edge, and some provide the option to round the blend, as seen in the center. UGF modeling, however, provides a unique ability to add more control, as expressed with the chamfered alternative on the right.

UGFs, fields with unit gradient magnitude (where the gradient is defined), offer a generalization over SDFs with the appropriate amount of flexibility for many engineering applications. They also overcome the greatest weakness of SDFs: closure.

The offset of a distance field is not, in general, a distance field. Starting with an SDF, if one offsets a convex edge inward or a concave edge outward, the result is a field that no longer represents the distance to the isosurface. (We’ll dedicate a post to this topic soon, but if you want an exercise, this would be it.) Similarly, as observed by Inigo Quilez, Booleans produced by \(\min\) and \(\max\) do not produce distances. Most other common operations, including blending, smoothing, interpolating, variable offsetting, warping, and even scaling, can introduce artifacts that cause subsequent operations to behave unpredictably. Finally, we can also construct UGFs from other UGFs in ways that are not possible with SDFs.
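If you’d like the exercise spoiled numerically, here’s a sketch (my helper names, not a library) with an exact 2D box SDF, offsetting a convex corner inward:

```python
import math

def box_sdf(x, y, hx=1.0, hy=1.0):
    # Exact signed distance to an axis-aligned box with half-extents (hx, hy).
    dx, dy = abs(x) - hx, abs(y) - hy
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

# Offset the box inward by 0.5, shrinking it to half-extents 0.5:
shrunk = lambda x, y: box_sdf(x, y) + 0.5

# At the original corner (1, 1) the offset field reads 0.5, but the true
# distance to the shrunk box's corner (0.5, 0.5) is sqrt(0.5) ≈ 0.707,
# so the offset field underestimates distance in the corner's normal cone.
```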

Over the past year or so, I’ve been gathering my implicit modeling practices into a manuscript unified by the organizing principle of UGFs and how they relate to SDFs and their relatives. Now that the book has achieved critical mass, I thought I’d start to introduce UGFs through a blog series. With the release of nTop 4.0 and its new spherecube and UGF-themed logo, as well as in anticipation of conversations at CDFAM 23*, it’s time to start talking about the project!

* Note: while the nTop logo, as most of the images on this page, was designed in nTop, the CDFAM logo appears to have been generated using machine learning.

Let’s take a look at a less theoretical example, where we want to control different edges with different edge treatments. Here, we’re also using UGFs to produce drafted faces and a lip feature, as common in molding applications.

Each Boolean operation offers an opportunity to choose what kind of edge treatment is appropriate, establishing downstream design intent. When modeling with UGFs, it is natural to encode such design intent sooner than with explicit modeling, enabling downstream operations to behave predictably and be automated.

What makes UGFs special? Couldn’t we always do these tricks? Yes, but without a resolute focus on maintaining unit gradient magnitude, the results would not produce the constant offsets, circular rounds, and predictable results we associate with engineering software.

So stay tuned through June as I walk you through more explanation of the above and diverse applications such as:

- How to use UGFs to condition fields for downstream use when producing lattices, edge treatments, and thin-walled geometry.
- Parameterizations possible from UGFs, such as the two-body field and the orthogonal sum and difference fields.
- Everything you want to know about implicit edge treatments, including rolling-ball blend and chamfer.
- Distance-preserving ramp and other transformations.
- Curve-driven and isocline draft.

And any discussion topics that I hope come up in conversations along the way!

So to summarize: UGFs are like SDFs but are more useful, enabling a more expressive language for implicit modeling in engineering and closure in modeling with distance fields. By focusing on UGFs instead of SDFs, we can put implicits to work in more engineering applications.

*Closing image: multiscale, spatially-varying FDM/FFF infill with constant weld offsets:*

As the fabrication of high-fidelity, quantum-collapsed geometry becomes increasingly prevalent, we as an industry are forced to confront the following challenge: where are we going to put all of our crap? At QE3D, we’re hard at work collapsing wave functions in our commitment to enable more engineering design space.

You may wonder: how does the superposition of quantum entanglement and machine learning scintillate more room for our everyday carry? Indeed, our technology achieves for consumer products, implantable electronics, wearable devices, and sub-dermal surveillance exactly the same advantages parachute pants achieved for break dancers. With the supremacy of the mesoscale fully realized via TPMS, spinodal decomposition, and mixed topology lattices, from what extra space might we draw additional engineering acumen?

At QE3D, we manifest our quantum technology through three regimes for AI-driven, dimensionality enhanced, spatial domains.

Daniel Piker introduced me to the concept of transforming otherwise Euclidean three-dimensional objects via stereographic projection onto the Riemann hypersphere. Astonished by the myopia of the three.js community, I launched four.js, which has mostly enjoyed success in the dimension orthogonal to this reality.

Throwing differential geometry out the window, we apply variational analysis to fractal boundaries with fractional Hausdorff dimension, creating a countable class of nooks and crannies. We can apply Stokes equations using Monte Carlo techniques, which speak the language of quantum. This approach is particularly useful when combined with topology optimization, as realized in this lovely tufted furniture collection by EvilRyu.

Recall that in a Riemannian manifold, which is infinitely differentiable, the curvature at any point may be positive, negative, or hyperbolic. As we increase in dimension, the hyperbolic case becomes more prevalent. This trend offers the opportunity to put that extra space to work! On a hyperbolic point of a surface, for example, a circle drawn around that point has a larger circumference than the same-radius circle on a flat (Euclidean) surface, which is in turn larger than a circle of the same radius in spherical space. That larger circumference gives us more space to place objects, and the same is even more true as you increase dimension.

For example, consider the circles on this hyperbolic surface from Keenan Crane’s course on discrete differential geometry.

In any number of dimensions, we can use the sum and difference fields of two distance fields \(\df{A}\) and \(\df{B}\):

\[\df{A}+\df{B} \;,\]

and:

\[\df{A}-\df{B} \;,\]

to create conformal maps between any two sets represented by \(\df{A}\) and \(\df{B}\), such as the conformal map between this square and circle:
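Joking aside, the sum and difference fields are real. With two point “bodies” as the simplest distance fields (a plain-Python sketch with names of my choosing), their isosurfaces are the classic confocal ellipses and hyperbolas, two families that cross at right angles:

```python
import math

def dist_to(cx, cy):
    # Unsigned distance field to a single point (a degenerate DF).
    return lambda x, y: math.hypot(x - cx, y - cy)

A = dist_to(-1.0, 0.0)
B = dist_to(+1.0, 0.0)

def sum_field(x, y):
    # Isosurfaces are confocal ellipses with foci at (-1, 0) and (1, 0).
    return A(x, y) + B(x, y)

def diff_field(x, y):
    # Isosurfaces are confocal hyperbolas with the same foci.
    return A(x, y) - B(x, y)

# (2, 0) and (0, sqrt(3)) lie on the same ellipse: sum of distances = 4.
```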

If you’ve been working in boring old 2D or 3D CAD in Euclidean spaces, you’re probably leaving a lot of performance on the table. At QE3D, our software and slicing stack deconvolves bloated designs directly into high-dimensional, fractional-dimensional, and hyperbolically curved finite matter arrays. Join our wait list today!
