*Use the slider to change viewing modes:*

$$ \newcommand{\R}{\mathbb{R}} $$
$$ \newcommand{\p}{\mathbf{p}} $$
$$ \newcommand{\shape}[1]{\mathcal{#1}} $$
$$ \newcommand{\grad}{\boldsymbol{\nabla\!}} $$
$$ \newcommand{\BoundaryMap}[2][]{\vec{\mathbf{Q}}_{#2}^{#1}} $$
$$ \newcommand{\NormalCone}[1]{\mathcal{N}_{#1}} $$
$$ \newcommand{\norm}[1]{\left\|#1\right\|} $$
$$ \newcommand{\abs}[1]{\left|#1\right|} $$
$$ \newcommand{\func}[1]{\mathop{\rm #1}\nolimits} $$
$$ \newcommand{\DF}{\mathfrak{D\hspace{-0.2em}\scriptstyle{F}\,}} $$
$$ \newcommand{\Shape}{\boldsymbol{\Omega}} $$
$$ \newcommand{\twobody}[2]{\Xi_{#1}^{#2}} $$
$$ \newcommand{\inner}[2]{#1 \cdot #2} $$
$$ \newcommand{\wavysmile}{\raise{-0.2ex}{\smallsmile}} $$
$$ \newcommand{\wavyfrown}{\raise{ 0.2ex}{\smallfrown}} $$
$$ \newcommand{\wavy}{\wavysmile\!\wavyfrown\!\wavysmile\!\wavyfrown\!\wavysmile} $$
$$ \newcommand{\sampson}[1]{\hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(90deg) scale(0.4, 0.3) }{\wavy}
\hspace{-1.4em} #1 \hspace{-1.3em}
\style{display: inline-block; transform: translateY(-0.1em) rotate(-90deg) scale(0.4, 0.3)}{\wavy}
\hspace{-1.4em}} $$
$$ \newcommand{\fieldset}[2]{#1 \hspace{-0.65em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planesymbol}{~ \style{display: inline-block; transform: scale(2.24, 0.66)}{\diamond}} $$
$$ \newcommand{\df}[1]{\fieldset{#1}{\leftrightarrow}} $$
$$ \newcommand{\ugf}[1]{\fieldset{#1}{-}} $$
$$ \newcommand{\augf}[1]{\fieldset{#1}{\sim}} $$
$$ \newcommand{\plane}[1]{\fieldset{#1}{\planesymbol}} $$
$$ \newcommand{\fieldsetSm}[2]{#1 \hspace{-0.48em} \lower{0.8ex}{\scriptscriptstyle{#2}} \;} $$
$$ \newcommand{\planeSm}[1]{\fieldsetSm{#1}{\planesymbol}} $$
$$ \newcommand{\cupset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\capset}[2]{\; \mathop{#1 \hspace{-0.51em} \raise{0.24ex}{#2} \;}} $$
$$ \newcommand{\minmaxCup}{\;\vee\;} $$
$$ \newcommand{\minmaxCap}{\;\wedge\;} $$
$$ \newcommand{\arbitraryCup}{\cupset{\vee} {\scriptstyle{\raise{ 0.4ex}{*}}}} $$
$$ \newcommand{\arbitraryCap}{\capset{\wedge}{\scriptstyle{\raise{-0.4ex}{*}}}} $$
$$ \newcommand{\distanceCup}{\cupset{\cup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\distanceCap}{\capset{\cap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\euclideanSymbol}{\hspace{-0.02em}\raise{0.1ex}{\scriptscriptstyle{+}}} $$
$$ \newcommand{\euclideanCup}{\cupset{\cup}{\euclideanSymbol}} $$
$$ \newcommand{\euclideanCap}{\capset{\cap}{\euclideanSymbol}} $$
$$ \newcommand{\chamferMinmaxCup}{\;\sqcup\;} $$
$$ \newcommand{\chamferMinmaxCap}{\;\sqcap\;} $$
$$ \newcommand{\chamferDistanceCup}{\cupset{\sqcup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferDistanceCap}{\capset{\sqcap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferEuclideanCup}{\cupset{\sqcup}{\euclideanSymbol}} $$
$$ \newcommand{\chamferEuclideanCap}{\capset{\sqcap}{\euclideanSymbol}} $$

*The clearance field, \(\ugf{A} + \ugf{B}\,\), the midsurface field, \(\ugf{A} - \ugf{B}\,\), and the two-body field: \(\twobody{\ugf{A}}{\ugf{B}} \equiv \frac{\ugf{A} - \ugf{B}}{\ugf{A} + \ugf{B}}\). The clearance and midsurface fields are overlaid to demonstrate their orthogonality.*

There are a few concepts to unpack here. First, let's get a feel for why the sum and difference act as clearance and midsurface fields. Let's use one-dimensional functions on a line to represent the section between opposing shapes.

| Field | | | | | | | |
|---|---|---|---|---|---|---|---|
| \(\ugf{A}\) | -2 | -1 | 0 | +1 | +2 | +3 | +4 |
| \(\ugf{B}\) | +4 | +3 | +2 | +1 | 0 | -1 | -2 |
| \(\ugf{A} + \ugf{B}\) | +2 | +2 | +2 | +2 | +2 | +2 | +2 |
| \(\ugf{A} - \ugf{B}\) | -6 | -4 | -2 | 0 | +2 | +4 | +6 |
| \(\twobody{\ugf{A}}{\ugf{B}}\) | -3 | -2 | -1 | 0 | +1 | +2 | +3 |

Clearly, the sum indicates the clearance. The difference is the midsurface field, but scaled by a factor of two. Likewise, if one zooms out to the far side of either shape, the sum also doubles the distance far from the midsurface, so we normalize by two in the Shadertoy above.

The two-body field, \(\twobody{\ugf{A}}{\ugf{B}} \equiv \frac{\ugf{A} - \ugf{B}}{\ugf{A} + \ugf{B}}\), clearly ranges in \([-1, 1]\) in the region not contained in either of the shapes, creating a predictable parametric space for modulation, interpolation, and remapping.
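A minimal numpy sketch reproduces the three derived fields of the table above (the sample coordinates and boundary positions are my own choices for illustration):

```python
import numpy as np

# Two opposing 1D shapes: A's boundary at x = 2, B's at x = 4. Both signed
# linear fields have unit gradient magnitude, so they are 1D UGFs.
x = np.arange(7)                  # sample points x = 0 .. 6
A = x - 2.0                       # signed field for shape A
B = 4.0 - x                       # signed field for shape B

clearance  = A + B                # constant +2: the gap between the shapes
midsurface = A - B                # zero at the midsurface x = 3, slope 2
two_body   = (A - B) / (A + B)    # ranges over [-1, 1] between the boundaries
```

The two-body values land on evenly spaced integers here because both inputs are exact distance fields.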

In the visualization, the sum and difference fields are clearly orthogonal, but why? Algebraically, it works out trivially enough, recalling that UGFs have unit gradient magnitude by definition and that orthogonal vectors dot to zero:

\[\begin{aligned} \inner{(\grad\ugf{A} + \grad\ugf{B}\,)}{(\grad\ugf{A} - \grad\ugf{B}\,)} &= \\ \inner{\grad\ugf{A}\,}{(\grad\ugf{A}\; - \grad\ugf{B}\,)} + \inner{\grad\ugf{B}\,}{(\grad\ugf{A}\; - \grad\ugf{B}\,)} &= \\ \inner{\grad\ugf{A}}{\!\grad\ugf{A}}\; - \inner{\grad\ugf{A}}{\!\grad\ugf{B}}\; + \inner{\grad\ugf{B}}{\!\grad\ugf{A}}\; - \inner{\grad\ugf{B}}{\!\grad\ugf{B}} &= \\ 1 - \inner{\grad\ugf{A}}{\!\grad\ugf{B}} + \inner{\grad\ugf{A}}{\!\grad\ugf{B}}\; - 1 &= 0 \;. \end{aligned}\]

However, those of us from the Tristan Needham school of analysis might prefer a more geometric explanation:

The key observation is that when \(\ugf{A}\,\) and \(\ugf{B}\,\) are UGFs, the sum and difference gradient vectors form the diagonals of a rhombus and are therefore orthogonal. Note that this rhombus is contained in the normal cone of the fields’ intersection (green).
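The rhombus argument is easy to check numerically. A sketch using distances to two points, the simplest UGFs (away from their singular centers), with point locations chosen arbitrarily:

```python
import numpy as np

def grad_dist(x, c):
    """Gradient of the distance field ||x - c||: a unit vector pointing
    away from c, since distance fields have unit gradient magnitude."""
    d = x - c
    return d / np.linalg.norm(d)

a, b = np.array([-1.0, 0.0]), np.array([2.0, 1.0])
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-5.0, 5.0, size=2)
    gS = grad_dist(x, a) + grad_dist(x, b)   # gradient of the sum field
    gD = grad_dist(x, a) - grad_dist(x, b)   # gradient of the difference field
    assert abs(gS @ gD) < 1e-9               # rhombus diagonals: orthogonal
```

The dot product collapses to \(\norm{\grad A}^2 - \norm{\grad B}^2 = 0\) exactly, so the tolerance only absorbs floating-point noise.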

The sum and difference fields, \(S = \ugf{A}\, + \ugf{B}\,\) and \(D = \ugf{A}\, - \ugf{B}\,\), produce an orthogonal basis. Using the Sampson norm, \(\sampson{F} \equiv \frac{F}{\norm{\grad{F}}}\), \(\sampson{S}\) and \(\sampson{D}\) form an orthonormal basis, which can be a useful way of approximating distance-to-curve and constructing edge treatments. Perhaps we’ll do a deeper dive on this topic in a future post, but here’s a teaser from some old Twitter threads.
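A sketch of the distance-to-curve claim, in a toy setup of my own: \(F\) is the sum of distances to two foci minus a constant, so \(F = 0\) is an ellipse, and the Sampson normalization \(F / \norm{\grad F}\) gives a first-order estimate of the distance to that curve:

```python
import numpy as np

a, b, s0 = np.array([-1.0, 0.0]), np.array([1.0, 0.0]), 3.0

def F(x):
    # Sum-of-distances field; the level set F = 0 is an ellipse with foci a, b.
    return np.linalg.norm(x - a) + np.linalg.norm(x - b) - s0

def grad_F(x, h=1e-6):
    # Central-difference gradient, good enough for a sketch.
    g = [(F(x + dx) - F(x - dx)) / (2 * h)
         for dx in (np.array([h, 0.0]), np.array([0.0, h]))]
    return np.array(g)

p = np.array([0.3, 1.2])
sampson = F(p) / np.linalg.norm(grad_F(p))   # Sampson distance estimate

# Brute-force true distance to the ellipse: semi-axes s0/2 and sqrt((s0/2)^2 - 1).
t = np.linspace(0.0, 2.0 * np.pi, 100001)
ellipse = np.stack([1.5 * np.cos(t), np.sqrt(1.25) * np.sin(t)], axis=1)
true_d = np.linalg.norm(ellipse - p, axis=1).min()
```

Here `sampson` agrees with the brute-force distance to a few parts in a hundred, despite \(F\) itself not being a distance field.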

The two-body field can be a convenient alternative to SDFs for modulating other fields and interpolating shape. In engineering applications, it can be uniquely handy when mapping a shape from Cartesian space into a new field-driven parametric space. For example, consider the toolpath geometry for the saddle surface below. Two pairs of side walls \(U\) and \(V\) form two two-body fields, which, when multiplied by a constant characteristic length, create a \(UVW\) coordinate space along with the distance to the midsurface of the reference geometry, \(W\).

If working with oriented open or nested shapes, several two-body fields may be combined into larger piecewise continuous maps.

When observing the two-body field above, you might have noticed that the circles inside the blue circle aren’t concentric. Let’s simplify the situation down to the two-body field of two points:

*Apollonian circles and conic sections. Only between two points do we see an Apollonian family of circles, but between a circle and a line, we see the full family of conic sections: ellipses occur near the circle, hyperbolas occur near the line, and a parabola appears at \(\Xi = 0\). Observe that the two-body parameterization creates constant spacing along the horizontal axis containing the circle centers.*

If these circles look familiar, they are members of the family (pencil) of Apollonian circles, which sometimes appear in engineering applications. These circles are the curves of constant ratio of distances to two points, and indeed, the two-body field may be suitably reparameterized:

\[\frac{\df{A}-\df{B}}{\df{A}+\df{B}} = t \quad\iff\quad \frac{\df{A}}{\df{B}} = \frac{1+t}{1-t} \,.\]

When working with SDFs, our two-body parameterization has the useful property that the level sets are evenly spaced, which is useful for both engineering and aesthetic applications. (This reparameterization is a one-dimensional Cayley transform, which often appears in hyperbolic geometry.)
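The equivalence can be checked exactly, with exact rational arithmetic standing in for positive distance samples:

```python
from fractions import Fraction

# For positive distances dA, dB, the two-body value t = (dA - dB)/(dA + dB)
# carries the same information as the Apollonian ratio dA/dB = (1 + t)/(1 - t).
for dA in range(1, 8):
    for dB in range(1, 8):
        t = Fraction(dA - dB, dA + dB)
        assert Fraction(dA, dB) == (1 + t) / (1 - t)
```

Note that \(t \in (-1, 1)\) whenever both distances are positive, so the division by \(1 - t\) is always safe outside the shapes.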

As noticed by Ponce and Santibáñez, ratios of distance fields generalize conics from points, circles, and planes to arbitrary shapes, and UGFs further generalize those results beyond distance fields.

The sum and difference fields represent clearance and interference when applied to SDFs. When applied to UGFs, the two-body field, the ratio of the sum and difference fields, is a straightforward approach to setting up mapping spaces in engineering applications. Mysteriously, notes of conformal mapping and complex analysis appear to present themselves, which can be useful when working on combined engineering and aesthetic challenges.

- 3DPrint.com: LatticeRobot Launches a Home for Lattices, Metamaterials, and Textures
- Develop3D: LatticeRobot announces community for advancing lattices in products
- TCT: LatticeRobot launches engineering community for lattice research and knowledge share

GCL is an incubator founded by Luke Church, myself, and some friends to help advance engineering software, a notoriously challenging business. The best leaders of engineering software companies are not usually the best builders of engineering software, and we close that gap by helping founders build great, scalable products. In the process of building companies, we are also developing background IP in implicit modeling, user interaction, and machine learning to accelerate future application development.

Lattices offer the potential to change the world of advanced manufacturing, but a lack of common knowledge impedes their application. LatticeRobot closes this gap by bringing a community of engineers together in a computationally enhanced working space to aggregate and explore the world’s knowledge of lattices, textures, and related mesoscale geometry and applications.

LatticeRobot’s interactive environment helps engineers explore which combinations of base materials and lattice geometries produce data-driven results. It combines lattice geometry with empirical, functional data to produce optimized implicit unit cells that work with modern latticing software. Data supplied by hardware, software, and consulting vendors refers users back to the referenced products and services, helping users discover the products and services best suited to their applications.

Hear it from the team:

While we’ll be busy with LatticeRobot for most of 2023, we have a few hunches about the future of engineering software that we continue to explore. We expect to productize aspects of that research while working with market-driven founders to bring the next generation of design, engineering, and manufacturing software to market. If you have a vision to address an underserved engineering market, consider reaching out.


About a year later, at the earliest stages of designing a CAD system that would become known as “SpaceClaim,” I caught up with Frisken and Perry, who were interested in better understanding the viability of ADFs in engineering applications. At MERL, Frisken and Perry mentored me in implicit modeling, and I had the pleasure of working with Kizamu, their prototype modeler that delivered real-time interaction at remarkably high fidelity compared to the state-of-the-art “voxel” kernels in Electric Image Amorphium and SensAble Freeform. Although we established that ADFs were not ready for mechanical design, we gained enough confidence to pursue 2D applications. ADFs and derived technology flourished in 2D, proliferating in font representations promoted by (Agfa) Monotype that targeted mobile devices and in drawing applications such as Mischief, authored by Frisken and acquired by The Foundry.

In the 2000s, precise B-rep kernels saw significant development to enable interactive modeling. SpaceClaim and its cohorts’ direct solid modeling paradigm caused local operations to be exercised far more than the history-based approach, and vendors spent years gusseting their B-rep kernels for the purpose. Although B-rep modeling became much more robust, its boundary-based (Lagrangian) nature ensured that a long tail of edge cases would always detract from uninterrupted, interactive B-rep editing.

Direct modeling permitted a more flexible data model, and SpaceClaim’s architect, David Taylor, fastidiously and passionately built a beautiful API that was a sort of homage to B-reps themselves. Inspired by the first generation of generative design tools such as Grasshopper and artists such as Jessica Rosenkrantz and Jesse Louis-Rosenberg at Nervous System and implicit art pioneer Bathsheba Grossman, I attempted to use the API to produce generative art on top of B-reps, but became frustrated as I only further exposed their weaknesses. For example, to successfully union a few thousand cylinders of identical diameter into a lattice requires adjusting radii by microns to enable some Booleans to succeed.

*This boundary representation, produced from the edges of the non-manifold mesh, contains 8,760 faces, 24,270 edges, and 14,181 vertices. About two-thirds of the faces are small triangles producing features below fabrication resolution. To successfully boolean this solid shape in a contemporary B-rep modeler, the process must be decomposed into a sequence of unions of subsets of the initial shapes, and some faces must be slightly offset to remove singularities. This geometry, the mesh of a Boy’s surface, is derived from the optimal minimax eversion by Francis and Sullivan and was modeled and rendered in ANSYS SpaceClaim.*

In 2014, the 3D printing juggernaut Stratasys acquired the incredible team at GrabCAD, with whom I was proud to serve, and provided the challenge of making their prototyping technology more suitable for end-use parts. Few tools were available at the time that would permit volumetric control of structures, but ImplicitCAD and Monolith enabled the creation of some spatially varying lattices using simple distance functions, generating compelling results at small scale.

In June of 2015, our head of software, Jon Stevenson, introduced me to a hardware research team working on an electrophotography-based printer that needed their soluble support structures to dissolve faster. They were printing model material inside a solid block of support material, so we discussed adding lattices that would transition to solid near the part. Although I wasn’t sure at the time how to construct a distance field even in 2D, I was able to hack a simple slicer out of a GPU-based art project and compute a distance-like field by applying Gaussian blur to cross sections of a mesh, which could then modulate implicit gyroids. While the slices looked decent to me, the hardware team was initially underwhelmed, as they also needed the supports to transition to solid beneath the part. Fortunately, I was aware of the existence of depth buffers, and somehow produced a stack of slices by combining the depth buffer and Gaussian blur into a pseudo-2.5D distance field. The team fabricated a part using the first set of slices delivered, which printed successfully with supports dissolving orders of magnitude faster than before, removing a serious business impediment. We named the new slicer “Photon.”

The next week, I found myself talking to a different team on the other side of the world with a novel two-material system that required printing a minor fraction of support material, or a step of their process would explode. The code base of the old art project already dynamically loaded the shaders doing the heavy lifting, so I wrote a new shader that used the pseudo distance field (with a second depth pass from the top) to modulate subdivision of space-filling solids with thin gaps. The first prints from this approach were also successful.

Having supported two hardware teams on a whim, I sheepishly approached GrabCAD’s head of engineering, Amos Benninga, to inform him of the liabilities I’d potentially created. He was delighted, and offered me a small team if the stack could also support PolyJet, Stratasys’ flagship multi-material jetting technology. The next day, I handed him a Photon-sliced PolyJet part.

Steve DeMai joined as lead engineer on Photon, evolving it from a Node/Electron art project to a professional, optimized C# application. I figured out how to replace the Gaussian blur with a narrow-band distance field by brute-force sampling, but performance was terrible. Steve implemented several methods, ultimately landing on the sweep-based lower envelope distance algorithm and, with Nur Arad, realized he could precondition it with the depth buffer, giving us a full resolution, accurate, signed 3D distance field of entire slices in real time.
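This is not Photon’s actual code, but the lower-envelope idea is the classic Felzenszwalb–Huttenlocher 1D squared-distance transform, which can be sketched as:

```python
import math

def distance_transform_1d(f):
    """Exact 1D squared Euclidean distance transform: the lower envelope of
    the parabolas y = f[q] + (x - q)^2 (Felzenszwalb & Huttenlocher)."""
    n = len(f)
    v = [0] * n                 # parabola locations on the lower envelope
    z = [0.0] * (n + 1)         # boundaries between envelope parabolas
    k = 0
    z[0], z[1] = -math.inf, math.inf
    for q in range(1, n):
        while True:
            p = v[k]            # intersection of parabolas rooted at p and q
            s = ((f[q] + q * q) - (f[p] + p * p)) / (2 * q - 2 * p)
            if s <= z[k]:
                k -= 1          # parabola p is fully hidden; pop it
            else:
                break
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, math.inf
    d, k = [0.0] * n, 0
    for q in range(n):          # read the envelope back out
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

# Seed: zero at occupied cells, a large finite value elsewhere (true inf
# would produce inf - inf = nan in the intersection formula).
FAR = 1e20
print(distance_transform_1d([FAR, FAR, 0.0, FAR, FAR, FAR, 0.0, FAR]))
# → [4.0, 1.0, 0.0, 1.0, 4.0, 1.0, 0.0, 1.0]
```

A full 3D field follows by running this transform along each axis in turn; the depth-buffer preconditioning and sweep details mentioned above are beyond this sketch.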

*For this limited compliance assembly, the untrimmed individual repeating unit was produced with Monolith, then assembled, trimmed to shape, and packaged for manufacturing in SpaceClaim. (We’ll eventually get to a post on such ramped, folded structures.)*

As far as I’m aware, this was the first point in history that an implicit modeler produced distance fields from imported meshes at the full resolution of high-fidelity manufacturing hardware. It’s a moment I had estimated to be far further in the future, but it was made possible by the hardware’s relative coarseness compared to traditional manufacturing processes. As impressive as PolyJet’s 600 DPI resolution may have been, the \(\sim\!40\,\mu m\) resolution is about two orders of magnitude coarser than the worst case in most engineering software implementations. Over three dimensions, and coupled with the fact that we only needed to slice as fast as printers could print, our modeling stack delivered end-to-end results with eight or nine orders of magnitude less computing than I’d predicted. The future was now!

While Photon’s main job was to prototype techniques to slice parts, generate supports, and produce output for novel printing processes, my interest lay in generative solid modeling. Our team had two interns just out of high school, Brenna Sorkin and Bradley Stevenson, who started implementing our modeling library. Brenna sorted through Vadim Shapiro’s notes on Rvachev’s R-functions to implement blends, and Bradley found expressions for primitives that somehow had more `if` statements than the primitives had topology. The art project had a bunch of noise functions copied from all over the internet, which, combined with periodic lattices, TPMS such as gyroids, and Brenna and Bradley’s contributions, constituted a pretty awesome modeler. Much of that prototype library gave way to Inigo Quilez’s elegant haiku of distance field primitives, about which I was oblivious until at least 2016.

With three bitmap-based printing platforms initially supported, we hooked up marching squares and cubes to produce high-resolution output for toolpath-based printers and CAD. There was a strong affinity between Stratasys’ need for novel support structures and my interest in applying generative thinking and implicit modeling to mechanical engineering challenges. New materials and hardware configurations created endless demand to create more slicers that could synthesize supports and fine-tune process issues. Creating working hardware and electronics was typically the main concern of the engineering teams, so I found myself not only producing slices, but also working through the process engineering, a discipline rife with contradictions and exceptions. (For example, it’s common to both want to support a model from below and also create an in-layer air gap between the model and support materials. In regions with sloped overhang, there’s a contradiction between the air and the supports, and different process situations call for different choices.)

Support structures require precise control over offsets and taper, some of the most error-prone operations in traditional B-rep-based CAD technology, so we developed techniques to handle them correctly. Challenges like the H-2000 prototype separated the problems of gravitational and build process continuity, resulting in support geometry that resembled multi-pull injection molded parts. Meanwhile, we had other interesting challenges that weren’t easily described with conventional distance fields, like needing to know the distance from the silhouette of the projection of a shape to create the coated PolyJet supports.

Armed only with signed distance fields, we would have been forced to approximate some of the fields needed to construct these results. Coincidentally, the intermediate data Photon used to construct the 3D distance fields also facilitated computation of the needed auxiliary fields. For example, in the process of building the 3D field, we could also generate the 2D distance field of data only on the plane. Using the depth buffer information, we produced a 1D signed directional distance to our shape from above or below. Similarly, we could construct a distance to the contour or the silhouette in a given slice plane. These “foliated fields” (future post) became indispensable in enabling draft and making appropriate choices in situations like the underhang contradiction mentioned above.

*A texture-mapped rabbit (right) intersected with gyroid TPMS, with the intersection blended, and printed on a Stratasys PolyJet jetting system with five color resins. The original model is a sample provided with Materialise Magics software and was sliced and rendered with Implicit Space (right).*

For each of these fields, Steve had provided a data structure that included not just the distance, but also the closest point and therefore the gradient. At the closest point, we knew the surface texture color, surface normal, and ID of the body and face topology. We could support an arbitrary number of juxtaposed fields for different bodies or body types. When manual input was required in delicate processes, we could easily tune the builds using color maps or 3D Photoshop jobs. Each slicer could be configured with custom parameters and options to enable a surprising amount of control and interactivity.

The ultimate application of Photon came from Daniel Dikovsky, who had been experimenting with a family of eight PolyJet materials that, when printed together, could reproduce the material properties of human bone, soft tissue, and blood. We would layer low-fidelity, segmented MRI data of organ structures, assign each layer a tissue type, and paint on layers of embellishment as needed. The biomimetic slicer would then add mesoscale structure, interpolate it between the layers of geometry and modulate it with metadata fields, knitting together complex assemblies of organ structures. The geometry would often be imprecise and overlap, and we often needed to fix clearance and interference between the meshes. It was in this setting that I first started to compare distance fields, investigating the difference field (midsurface), sum (clearance), and their ratio, the “two-body field” that interpolates between shapes (topic of next post). We were quickly achieving results that would have been inconceivable with meshes alone, such as automatically dilating small vasculature to meet minimum manufacturing conditions or designing intentional structural defects to mimic pathological situations.

*A biomimetic femur produced with prototype software and materials of the Stratasys PolyJet Digital Anatomy system. The wall thickness varies along the length of the bone, and the trabeculae (lattice) on the proximal epiphysis (round end), which would be too fine to produce at scale, are homogenized via cellular (Worley) noise.*

Eventually, we also solved toolpathing, cost estimation, color calibration, and material property management challenges in Photon, much of which lives on in Stratasys’ commercial products and its spinoffs.

In the spring of 2017, I met Bradley Rothenberg, founder of nTop (née “nTopology”), at the COFES gathering of engineering software technologists. nTopology’s first product, Element, was a runaway success for latticing 3D printed models, and Brad was interested in expanding the vision to a broader set of applications. I started to advise the company as they were raising their A round, and we upgraded the pitch to generalize generative technology beyond additive. With funding achieved, I visited nTopology’s original office on Lafayette Street in SOHO, New York City, to help align product strategy to the expanded scope. Element had been hitting a resolution wall with the popular and excellent OpenVDB kernel, so we had a whiteboard discussion over the advantages of B-reps, meshes, and implicits. The team decided to give implicits a shot, creating a beautiful block-based user experience with evaluation initially powered by Matt Keeter’s libfive library.

*An application of UGFs on spatially-varying, multiscale, geometry designed for 5-axis deposition and created and rendered in nTop (left), and an example from industry produced by Electroimpact SCRAM. UGFs maintain constant gaps between walls while the surrounding geometry is warped into a curved space, the map of which is defined by a distance field and two orthogonal two-body fields. This nTop demo video demonstrates the approach.*

The results were impressive, as nTop could render in seconds 3D models that would take Photon hours. I joined full time to build nTopology’s product team and ready the product for market before settling in as CTO, where I spent most of my time applying “nTop” to diverse mechanical design challenges. nTopology’s engineering team continued to accelerate performance and increase resolution, and the flexibility of the block system enabled us to quickly prototype new applications.

The initial set of nTop implicit modeling routines included a decent set of primitives and blended Booleans that still represent the state of the art. As with Photon, careless use of these blocks would produce unintended results. While implicits may never fail, garbage in is still garbage out, perhaps without the error message one would see doing something similar with boundary representations. Just as with SpaceClaim, where interactive modeling exposed weaknesses in the B-rep paradigm, nTop’s expedient user interface and diverse user base exposed usability issues with implicit modeling. Without care, common modeling operations such as offsets, Booleans, and blends, while unbreakable, can leave unexpected artifacts in the remote field that may appear as defects in subsequent operations. In addition, popular implicit fields like gyroid lattices are not distance fields, creating further aberrations. To make implicit modeling accessible, nTopology created specialized tools that do the right thing in everyday workflows.

Over time, as engineering tools become higher fidelity and increasingly interactive, they propel their users to higher vantages from which they gain better control over their work. When waiting overnight for a result, as with Photon, the surprise of an oddly-shaped blend causes less notice than in real-time physically-based rendering in nTop. When making support structures and mechanical demos with Photon, such details were negligible, but when designing real parts in nTop, they were hard to avoid. With the initial focus on latticing and topology optimization use cases, the core tool in nTop delivered results, and several of us strove to facilitate trickier situations such as modulating warped lattice parameters.

To work around early performance limitations in nTop that have since been overcome, I prototyped a new gradient-aware stack on top of a new project Steve DeMai was independently pursuing, “Implicit Space”. (The work was sufficiently contemporaneous with Quilez’s post on gradients that I yet again just missed using his results.) Over a few releases, nTop received more tech to compensate transformations, culminating in a new pipeline that abstracted the grunt work of modulating warped lattices. With a “new lattice pipeline” and breakthrough interactivity in nTop’s third release in the spring of 2021, nTop felt like a complete product for lattices and top opt. Although there wasn’t a single technology that addressed the issues, a common theme became keeping the gradient magnitudes (Lipschitz coefficients) near unity.

*A Worley noise field displaces a torus. On the left, there are overhangs with respect to surface torus gradient and disconnected topology. On the right, the noise field has been composed with the torus’ boundary map, so no overhangs or disconnected topology are present.*

Through the COVID years, I dedicated more time to recapitulating the full suite of rounds, chamfers, and drafts expected in conventional boundary representation modelers for mechanical design. Although the \(\func{Radiate}\) operation (remapping via the boundary map) had proved useful at Stratasys for monotonic texturing, I found it could construct isocline draft surfaces as well as rolling-ball rounds via what I would learn to be called “the normal cone” (last post). At the time, the constructions were missing the correction factors needed to conserve unit gradient magnitudes, so I reached out for assistance to Vadim Shapiro, the peripatetic professor from UW-Madison, CEO of the pioneering simulation startup Intact Solutions, and editor of The CAD Journal. For over a decade, Vadim has helped fill gaps in my analytical acumen while I’ve provided business advice to Intact. He quickly responded to my long email, which was brief on descriptions but included lots of pretty pictures of cross sections. His response was something like:

“I have no idea what you’re talking about.”

So I tried to explain a bit more about the radiated field construction, because I thought I was just looking for a bit of standard differential geometry. However, he replied with:

“I still have no idea what you’re talking about, but your techniques appear to be new. It sounds like you are trying to say something like…” and he proceeded to state a little proposition and a proof. “You need to write a paper. Let’s talk on Monday.”

On Monday, Vadim started to cajole me into this project. He promised to help, but only if I stopped sending him rambling emails, typeset everything in \(\LaTeX\), and produced a paper. He foretold that through this endeavor, I would achieve a much clearer understanding of the concepts and produce higher-fidelity results. I spent the fall of 2022 and winter of 2023 expanding this “paper” into a somewhat technical manuscript, which feels like it’s about half done, but I keep finding practical and curious side roads to explore. Although the work so far has been receiving good feedback, most of my friends and colleagues prefer a simplified treatment, which is why I’ve decided to ship the most fun and useful parts of the book in these blog posts.

$$ \newcommand{\arbitraryCap}{\capset{\wedge}{\scriptstyle{\raise{-0.4ex}{*}}}} $$
$$ \newcommand{\distanceCup}{\cupset{\cup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\distanceCap}{\capset{\cap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\euclideanSymbol}{\hspace{-0.02em}\raise{0.1ex}{\scriptscriptstyle{+}}} $$
$$ \newcommand{\euclideanCup}{\cupset{\cup}{\euclideanSymbol}} $$
$$ \newcommand{\euclideanCap}{\capset{\cap}{\euclideanSymbol}} $$
$$ \newcommand{\chamferMinmaxCup}{\;\sqcup\;} $$
$$ \newcommand{\chamferMinmaxCap}{\;\sqcap\;} $$
$$ \newcommand{\chamferDistanceCup}{\cupset{\sqcup}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferDistanceCap}{\capset{\sqcap}{\scriptstyle{\circ}}} $$
$$ \newcommand{\chamferEuclideanCup}{\cupset{\sqcup}{\euclideanSymbol}} $$
$$ \newcommand{\chamferEuclideanCap}{\capset{\sqcap}{\euclideanSymbol}} $$

*If the controls aren’t working on your device, click through to ShaderToy*

With `SDF` enabled and `offset` non-negative, the *boundary map* arrow always points to the boundary, via the distance intersection of two planes, \(\df{F} = \plane{A} \distanceCap \plane{B}\). With `SDF` disabled, the boundary map fails to point at the boundary for points in the region opposite the vertex, the *normal cone* of the intersection. The normal cone, shown in green, is the set of points closest to the sharp intersection. Similarly, with `SDF` enabled and a negative `offset`, the boundary map of points in the normal cone of the intersection traces out the classic *swallowtail* failure mode of offsetting chains of curves with fillets.

Let’s nail down some definitions to at least a SIGGRAPH level of rigor. For nuanced definitions, see Luo, Wang, and Lukens’s framing of SDFs using Variational Analysis.

Fields are functions mapping a smoothly curved space, usually \(\R^n\), to the affinely extended reals \(\overline\R \equiv \R \cup \{\pm\infty\}\). If you haven’t seen extended reals before, it turns out that you can do the hokey-pokey with analysis and simply define the ends of the real number line to be closed instead of open, even dividing (anything other than zero) by zero; feel free to resent your third grade teachers and find new brilliance in IEEE floating point representations.

Unit Gradient Fields (UGFs) are simply fields with unit gradient magnitude wherever the gradient exists. Although we usually use them to represent shapes, there is no need for them to have any non-positive values, as adding a constant to a UGF doesn’t change its gradient.

Distance fields (DFs) are defined as the unsigned distance to a set minus the unsigned distance to the set’s complement, noting that the distance to a set from its inside is zero. The piecewise definition is \(C^1\) continuous (where differentiable) across the set’s boundary. DFs are UGFs when defined by *proper* sets. (The two improper sets, the null set and the set of all points in a space, generate the distance fields \(+\infty\) and \(-\infty\), respectively.) When DFs have an interior, we will call them “signed” *SDFs*. (We avoid the term “UDF” due to its similarity to “UGF”.)

DFs contain more information about a shape than a UGF representing the same shape. For example, the sum of two DFs represents the local clearance between parts. These properties derive from the key fact about DFs: their boundary map always points to their boundary.
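As a concrete sketch of the clearance property (the 1D setup and names here are my own illustration, not from the post), the sum of the SDFs of two separated parts is constant between them and equals the gap between their surfaces:

```python
def sdf_disk(x, center, radius):
    """Signed distance to a 1D 'disk' (an interval) of given center and radius."""
    return abs(x - center) - radius

def clearance(x):
    """Sum of two distance fields: the local clearance between the parts."""
    return sdf_disk(x, 0.0, 1.0) + sdf_disk(x, 5.0, 1.0)

# Between the parts, the sum is the constant gap between their
# surfaces: 5 - 1 - 1 = 3.
for x in (1.5, 2.5, 4.0):
    assert clearance(x) == 3.0
```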

The boundary map, represented by the black arrow in the visualization, is simply the map from point \(\p\) to the closest point on the surface of a set represented by a DF \(\df{F}\):

\[\BoundaryMap{\df{F}}(\p) = \p - \df{F} \;\, \grad\df{F}\]

Distance fields can be thought of as the magnitude of the boundary map vector fields, and any UGF that is a boundary map represents a distance field.
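Here’s a minimal numeric sketch of the boundary map, assuming a circle SDF (function names are mine):

```python
import math

def circle_sdf(p, radius=1.0):
    """Signed distance to a circle of given radius centered at the origin."""
    return math.hypot(p[0], p[1]) - radius

def circle_grad(p):
    """Unit gradient of the circle SDF (undefined at the origin)."""
    r = math.hypot(p[0], p[1])
    return (p[0] / r, p[1] / r)

def boundary_map(p):
    """Q(p) = p - F(p) grad F(p): the closest point on the zero isosurface."""
    f = circle_sdf(p)
    gx, gy = circle_grad(p)
    return (p[0] - f * gx, p[1] - f * gy)

q = boundary_map((3.0, 4.0))   # |p| = 5, so q lands on the unit circle
# q is approximately (0.6, 0.8)
```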

We can construct the normal cone as illustrated above using the boundary map. Given two SDFs \(\df{A}(\p)\) and \(\df{B}(\p)\), the normal cone for \(\ugf{F} = \df{A} \distanceCap \df{B}\) becomes:

\[\NormalCone{\ugf{F}} = -\df{A}\left(\BoundaryMap{\df{B}}(\p)\right) \minmaxCap -\df{B}\left(\BoundaryMap{\df{A}}(\p)\right)\]

Plane fields are a special case of SDFs, those of planar half-spaces, with the special property that they are everywhere differentiable.
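A small sketch of the construction with two perpendicular plane fields, \(\df{A} = x\) and \(\df{B} = y\) (my own example, with \(\minmaxCap\) realized as \(\max\)): the resulting field is non-positive exactly on the normal cone of the corner.

```python
def A(p): return p[0]          # plane field of the half-plane x <= 0
def B(p): return p[1]          # plane field of the half-plane y <= 0

def Q_A(p):
    """Boundary map of plane A: drop the x coordinate."""
    return (0.0, p[1])

def Q_B(p):
    """Boundary map of plane B: drop the y coordinate."""
    return (p[0], 0.0)

def normal_cone(p):
    """max(-A(Q_B(p)), -B(Q_A(p))): <= 0 exactly on the normal cone."""
    return max(-A(Q_B(p)), -B(Q_A(p)))

# The corner of {x <= 0, y <= 0} has the opposite quadrant as its normal cone:
assert normal_cone((2.0, 3.0)) <= 0.0    # inside the cone
assert normal_cone((-1.0, 3.0)) > 0.0    # outside the cone
```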

In this series, we’ll use a secondary notation to remind ourselves of the properties of fields:

| Field type | Notation |
| --- | --- |
| Plane field | \(\plane{P}\) |
| Distance field | \(\df{D}\) |
| Unit gradient field | \(\ugf{U}\) |
| Unit gradient field at zero | \(\augf{A}\) |

The latter we’ll refer to as *approximate UGFs* (AUGFs). Any field with non-vanishing gradient can be converted to an AUGF via *Sampson normalization* (Sampson 1982), which divides the field by its gradient magnitude:

\[\augf{F} = \frac{F}{\norm{\grad{F}}} \;.\]

We will often generalize properties of planar intersections to behavior near the zero isosurface of AUGFs.

So far, we’ve seen minmax, distance-based, and, in the last post, chamfered Booleans that preserve UGFness. There are also many useful fast and reliable Boolean operations that produce results that are not UGFs.

*(Direct link to Shadertoy if preview failing.)*

We’re going to need some notation to keep the different flavors of Booleans straight. Let’s focus on the union, or \(\min\), operation, as the intersection can be defined as the complement of the union of the complements of the inputs:

\[\max(A, B) = -\min(-A, -B) \;.\]

First, nodding to Rvachev and logic functions, we can define the minmax Booleans \(\minmaxCup\) and \(\minmaxCap\) using \(\min\) and \(\max\). Similarly, we can define the DF-preserving Booleans, \(\distanceCup\) and \(\distanceCap\), which are defined piecewise across the boundary of the normal cone. Outside of the normal cone, the distance result is the same as the minmax Booleans, but inside, it is the distance to the curve of intersection.
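A numeric sketch of the difference (my own example): intersect the half-planes \(x \le 0\) and \(y \le 0\). Outside the normal cone, \(\max\) gives the exact distance; inside it, the true distance is to the corner, so \(\max\) underestimates.

```python
import math

def minmax_cap(p):
    """Minmax intersection of the plane fields x and y."""
    return max(p[0], p[1])

def true_sdf(p):
    """Exact signed distance to the quadrant {x <= 0, y <= 0}."""
    if p[0] <= 0.0 and p[1] <= 0.0:
        return max(p[0], p[1])                    # inside: minmax is exact
    # outside: clamp to the set and measure the Euclidean distance
    return math.hypot(max(p[0], 0.0), max(p[1], 0.0))

p = (1.0, 2.0)               # in the normal cone of the corner
print(minmax_cap(p))         # 2.0
print(true_sdf(p))           # 2.236..., the distance to the corner
```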

It’s worth comparing the DF blend to common implicit blends in the graphics community. Korndörfer gives perhaps the most elegant, in which the entire remote quadrant of the intersection receives the blend, rather than just the normal cone, a subset of it:

\[\ugf{A} \euclideanCup \ugf{B} \equiv \max\!\left(\ugf{A} \minmaxCup \ugf{B}, 0 \right) \;-\; \norm{\left(\min(\ugf{A}, 0),\strut\min(\ugf{B}, 0) \right)} \;,\]where \(\norm{\cdot}\) is the Euclidean norm of a vector of fields. We’ll use variants of the traditional union and intersection symbols for blended or rounded intersections.
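A direct transcription of the blend as a sketch (the function name is mine):

```python
import math

def euclidean_cup(a, b):
    """max(min(a, b), 0) - ||(min(a, 0), min(b, 0))||"""
    return max(min(a, b), 0.0) - math.hypot(min(a, 0.0), min(b, 0.0))

# Away from the other body, the blend reduces to min:
assert euclidean_cup(2.0, 5.0) == 2.0
# Inside both bodies, the two depths combine as a Euclidean norm:
assert abs(euclidean_cup(-3.0, -4.0) - (-5.0)) < 1e-12
```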

Quilez provides several examples of “smooth minimum functions” that blend the entire discontinuity typically produced by \(\min\). With a constant blend radius, they do not preserve the logic of \(\min\) and \(\max\), but by using an estimate of the distance to the intersection curve as the radius, we can produce a logic-preserving minimum. This radius variation works on Quilez’s polynomial and exponential \(\func{smin}\):

\[\func{smin}\left(\ugf{A}\,, \ugf{B}\,, \abs{\ugf{A} \, \ugf{B} \; (1 - \grad{\ugf{A}} \cdot \grad{\ugf{B}})}\right) \;.\]

The sum and difference of fields and the distance to intersection curves will be further explored in future posts.
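As a sketch of the idea (my own specialization, not the post’s code), here is Quilez’s polynomial \(\func{smin}\) fed the variable radius above, specialized to the perpendicular plane fields \(x\) and \(y\) so that \(\grad{\ugf{A}} \cdot \grad{\ugf{B}} = 0\). On either plane the radius vanishes, so the blend passes exactly through it, preserving the logic of \(\min\):

```python
def smin(a, b, k):
    """Quilez's polynomial smooth minimum with blend radius k."""
    if k <= 0.0:
        return min(a, b)
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def logic_smin(p):
    """Variable radius |A * B * (1 - grad A . grad B)| with A = x, B = y."""
    a, b = p[0], p[1]
    k = abs(a * b * (1.0 - 0.0))   # perpendicular planes: grad A . grad B = 0
    return smin(a, b, k)

assert logic_smin((0.0, 5.0)) == 0.0   # exact on the plane A = 0
assert logic_smin((1.0, 1.0)) < 1.0    # blended near the intersection curve
```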

Rvachev, as popularized by Shapiro, first identified and classified logic-preserving implicit functions, named *R-functions* after him. In this example, we’re showing \(\vee_0\) in Rvachev’s notation:

Note that \(\vee_0\) is an AUGF, despite its remote field departing quickly from unit magnitude.

For most applications not requiring UGFs, the Euclidean blend works well, so we won’t continue with the Quilez or Rvachev blends in this series. We will get to chamfers in a future post (their operators use square symbols due to the extra edge), so let’s define a common set of notation for the family of R-function Booleans available in edge treatments.

| Operation | Union | Intersection |
| --- | --- | --- |
| Minmax | \(\minmaxCup\) | \(\minmaxCap\) |
| Distance-preserving Boolean | \(\distanceCup\) | \(\distanceCap\) |
| Euclidean blend (Korndörfer) | \(\euclideanCup\) | \(\euclideanCap\) |
| Chamfer (minmax intersections) | \(\chamferMinmaxCup\) | \(\chamferMinmaxCap\) |
| Chamfer (distance-preserving) | \(\chamferDistanceCup\) | \(\chamferDistanceCap\) |
| Chamfer (Euclidean blend) | \(\chamferEuclideanCup\) | \(\chamferEuclideanCap\) |
| Arbitrary (any of the above) | \(\arbitraryCup\) | \(\arbitraryCap\) |

As a preview to future posts on edge treatments, see these two social media threads:

*While I’m a fan of John Nash’s work, this portrayal never landed for me. That said, I did question my sanity while working on the diagram below.*

Given a few different grades of fields and a set of operators, one might wonder if there’s any structure worth noting. For example, the distance-preserving Boolean maps DFs to DFs, while the minmax Boolean maps DFs to UGFs. Here’s my attempt to document the structure of the system, with a few operations to be defined in later posts:

DFs and UGFs representing the same shape can only differ within the normal cones that arise on non-smooth boundaries. In this post and the last, we focused on these sharp (edge-like) regions and offsets to help clarify that not all fields with unit gradient magnitude are DFs. There’s more fun to be had with edges and edge treatments, but perhaps in the next posts we’ll visit some of the tricks that work only with UGFs and some techniques for creating UGFs to new shapes.

Please keep the feedback coming!

Take these three examples of an offset rectangle, created using three different “line joining” approaches that date back to the early days of 2D graphics and are built into your browser:

The red rectangle on the left uses extensions of the rectangle’s edges. This approach is similar to what we expect from B-rep and most mesh modelers, where extra faces are only added when needed. One might not notice that the offset vertices are actually a factor of \(\sqrt{2}\) farther from the vertex than the sides. In engineering applications, we often prefer the topological simplicity of such *naturally extended* intersections to geometric correctness.

In the middle, the green rectangle offsets geometrically, with a rounded corner the exact Euclidean distance to the vertex. When modeling with SDFs, we expect these geometric offsets, but the result may surprise engineers who prefer the simplicity of naturally extended corners.

As a third example, although not a common option when offsetting in engineering applications, offsetting can also produce a chamfered result. Are there other legitimate options for how one might want to treat a corner?

When modeling with fields, we often “offset” a field \(F\) as an alternative to offsetting the boundary of the shape \(\shape{F}\) it represents.

(In our notation, one can convert a shape to a distance field via \(\df{F} = \DF \shape{F}\) or extract a shape from the non-positive region of a field via \(\shape{F} = \Shape F\). We also annotate planes \(\plane{P}\), distance fields \(\df{D}\), and unit gradient fields \(\ugf{G}\) to distinguish them from general fields \(F\). The next post will cover these topics.)

We define the *offset of the field* \(F\) *by constant distance* \(\lambda\):

\[F \mapsto F - \lambda \;,\]

so the distance the zero isosurface moves depends on the gradient of \(F\). One can think of the offset behavior as baked into the field’s geometry itself, not something one does to the geometry, as with boundary modeling. If we’d like different edge treatments on different edges, we somehow need to produce fields with the proper behavior encoded in them ahead of time.
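A one-line sketch of the dependence (illustrative field, not from the post): a field with gradient magnitude 2 moves its zero isosurface only half the requested offset.

```python
def F(x):
    return 2.0 * x        # zero isosurface at x = 0, gradient magnitude 2

lam = 1.0
# The zero of the offset field F - lam sits at x = 0.5, not x = 1.0:
assert F(0.5) - lam == 0.0
```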

For example:

Clearly, only the top right corner represents an SDF. In this example, the gradient, where defined, always has unit magnitude, as observed by the 1:1 slope in the field’s *epigraph*, \(\planeSm{z} - F(\planeSm{x}, \planeSm{y})\):

A similar situation presents itself in 3D. How the edges of the cube propagate when this spherecube is rounded is predetermined by the field surrounding the cube, yet it’s not visible on the nominal geometry:

The left option is most common with B-rep modelers when rounding an edge, and some provide the option to round the blend, as seen in the center. UGF modeling, however, provides a unique ability to add more control, as expressed with the chamfered alternative on the right.

UGFs, fields with unit gradient magnitude (where the gradient is defined), offer a generalization of SDFs with the appropriate amount of flexibility for many engineering applications. They also overcome the greatest weakness of SDFs: lack of closure.

The offset of a distance field is not, in general, a distance field. Starting with an SDF, if one offsets a convex edge inward or a concave edge outward, the result is a field that no longer represents the distance to the isosurface. (We’ll dedicate a post to this topic soon, but if you want an exercise, this would be it.) Similarly, as observed by Inigo Quilez, Booleans produced by \(\min\) and \(\max\) do not produce distances. Most other common operations, including blending, smoothing, interpolating, variable offsetting, warping, and even scaling, can introduce artifacts that cause subsequent operations to behave unpredictably. Finally, we can also construct UGFs from other UGFs in ways that are not possible with SDFs.
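The inward-offset exercise can be checked numerically. This sketch (my own) uses the standard exact 2D box SDF: after offsetting the unit box inward by 0.5, the field value at a corner-facing point no longer matches the true distance to the shrunken box.

```python
import math

def sdf_box(p, half):
    """Exact signed distance to an axis-aligned square of given half-width."""
    qx, qy = abs(p[0]) - half, abs(p[1]) - half
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return outside + inside

p = (2.0, 2.0)
offset_field = sdf_box(p, 1.0) + 0.5    # inward offset of the unit box
true_dist = sdf_box(p, 0.5)             # exact SDF of the offset box
print(offset_field)                     # 1.914...
print(true_dist)                        # 2.121...: the offset field is short
```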

Over the past year or so, I’ve been gathering my implicit modeling practices into a manuscript unified by the organizing principle of UGFs and how they relate to SDFs and their relatives. Now that the book has achieved critical mass, I thought I’d start to introduce UGFs through a blog series. With the release of nTop 4.0 and its new spherecube- and UGF-themed logo, as well as in anticipation of conversations at CDFAM 23*, it’s time to start talking about the project!

* Note: while the nTop logo, like most of the images on this page, was designed in nTop, the CDFAM logo appears to have been generated using machine learning.

Let’s take a look at a less theoretical example, where we want to control different edges with different edge treatments. Here, we’re also using UGFs to produce drafted faces and a lip feature, as common in molding applications.

Each Boolean operation offers an opportunity to choose what kind of edge treatment is appropriate, establishing downstream design intent. When modeling with UGFs, it is natural to encode such design intent sooner than with explicit modeling, enabling downstream operations to behave predictably and be automated.

What makes UGFs special? Couldn’t we always do these tricks? Yes, but without a resolute focus on maintaining unit gradient magnitude, the results would not produce the constant offsets, circular rounds, and predictable results we associate with engineering software.

So stay tuned through June as I walk you through more explanation of the above and through diverse applications such as:

- How to use UGFs to condition fields for downstream use when producing lattices, edge treatments, and thin-walled geometry.
- Parameterizations possible from UGFs, such as the two-body field and the orthogonal sum and difference fields.
- Everything you want to know about implicit edge treatments, including rolling-ball blend and chamfer.
- Distance-preserving ramp and other transformations.
- Curve-driven and isocline draft.

And any discussion topics that I hope come up in conversations along the way!

So to summarize: UGFs are like SDFs but are more useful, enabling a more expressive language for implicit modeling in engineering and closure in modeling with distance fields. By focusing on UGFs instead of SDFs, we can put implicits to work in more engineering applications.

*Closing image: multiscale, spatially-varying FDM/FFF infill with constant weld offsets:*

As the fabrication of high-fidelity, quantum-collapsed geometry becomes increasingly prevalent, we as an industry are forced to confront the following challenge: where are we going to put all of our crap? At QE3D, we’re hard at work collapsing wave functions in our commitment to enable more engineering design space.

You may wonder: how does the superposition of quantum entanglement and machine learning scintillate more room for our everyday carry? Indeed, our technology achieves for consumer products, implantable electronics, wearable devices, and sub-dermal surveillance exactly the same advantages parachute pants achieved for break dancers. With the supremacy of the mesoscale fully realized via TPMS, spinodal decomposition, and mixed topology lattices, from what extra space might we draw additional engineering acumen?

At QE3D, we manifest our quantum technology through three regimes for AI-driven, dimensionality enhanced, spatial domains.

Daniel Piker introduced me to the concept of transforming otherwise Euclidean three-dimensional objects via stereographic projection onto the Riemann hypersphere. Astonished by the myopia of the three.js community, I launched four.js, which has mostly enjoyed success in the dimension orthogonal to this reality.

Throwing differential geometry out the window, we apply variational analysis to fractal boundaries with fractional Hausdorff dimension, creating a countable class of nooks and crannies. We can apply the Stokes equations using Monte Carlo techniques, which speak the language of quantum. This approach is particularly useful when combined with topology optimization, as realized in this lovely tufted furniture collection by EvilRyu.

Recall that in a Riemannian manifold, which is infinitely differentiable, the curvature at any point may be positive, negative, or hyperbolic. As we increase in dimension, the hyperbolic case becomes more prevalent. This trend offers the opportunity to put that extra space to work! At a hyperbolic point of a surface, for example, a circle drawn around that point has a larger circumference than a circle of the same radius on a flat (Euclidean) surface, which is in turn larger than a circle of the same radius in spherical space. That larger circumference gives us more space to place objects, and the effect only grows as you increase dimension.

For example, consider the circles on this hyperbolic surface from Keenan Crane’s course on discrete differential geometry.

In any number of dimensions, we can use the sum and difference fields of two distance fields \(\df{A}\) and \(\df{B}\):

\[\df{A}+\df{B} \;,\]and:

\[\df{A}-\df{B} \;,\]to create conformal maps between any two sets represented by \(\df{A}\) and \(\df{B}\), such as the conformal map between this square and circle:

If you’ve been working in boring old 2D or 3D CAD in Euclidean spaces, you’re probably leaving a lot of performance on the table. At QE3D, our software and slicing stack deconvolves bloated designs directly into high-dimensional, fractional-dimensional, and hyperbolically curved finite matter arrays. Join our wait list today!

As mentioned yesterday, hyperbolic space is more spatially dense than Euclidean space, and therefore offers opportunities for higher performance and fidelity in engineering applications. In this case study, we’ll examine how to prepare ordinary Euclidean CAD and mesh geometry for embedding in hyperbolic space and manufacture in QE3D’s quantum entanglement production system.

The key unit in any structural design, including beam lattices, is the triangle. In Euclidean space, the sum of the angles of a set of triangles around a vertex must total 360°. In hyperbolic space, we can increase that total angle to any number we want, even ∞!

In CAD, that means taking an ordinary triangular structure and deforming it into a triangle with a concave arc for at least one side. For a uniform tessellation like this example, we need that angle to divide 360°. The unit cell below will be reflected, so we convert it from 45° to 36°, letting us fit five instead of four around a vertex.

As a Euclidean engineer, you might be concerned that our structural elements are bent, and therefore may buckle under compression. Indeed, such freedom for failure exists in Euclidean space. However, recall that in conformal models of hyperbolic geometry, lines and arcs are unified as *circlines*, where lines are simply circles with infinite radius, and therefore also include the point at infinity, which we will gleefully add to our domain. (A side benefit of keeping infinity around is that we may divide by zero via Alexandrov compactification.)

Recall also that in our hyperbolic model, there are an infinite number of lines parallel to a given line through a point. Consider a force on a point in a hyperbolic plane: an opposing force may be provided along any line parallel to the original force. The implication is that any beam, even curved ones, is completely rigid. For example, this Poincaré disc, made from our unit cell above, is completely rigid and undeformable:

This famous stiffness of conformal geometry creates a strong connection between any structure’s tessellation and the shape into which it’s formed, as guaranteed by the Riemann mapping theorem and realized through Schwarz–Christoffel mappings. For example, this filter for a harmonic compressor was fabricated by one of our quantum supremacy pick-and-place machines:

Any genus topology will do. For example, we can remap the disc into a strip:

To be clear: to make linearized models like this strip, you don’t need fancy manufacturing such as what we develop at QE3D. Conventional progressive die and roll forming technology may suffice, such as these die rollers for an Imipolex G line:

We can continue remapping, for example, transforming the strip into an annulus:

In 2006, Grigori Perelman proved that we can extend this approach to any number of dimensions. For example, this innocuous-appearing tiara includes the Luneburg lens for a 5G transcoding antenna.

With hyperbolic embedding, any amount of structure may be packed into any volume of hyperbolic space, if sufficiently curved. Similarly, complex engineering geometry, such as the traces of circuit boards, vasculature, and sheet metal, may also be packed into such spaces. The dawn of entanglement fabrication from QE3D motivates the need for a new generation of hyperbolic engineering software that anticipates the challenges of the quantum manufacturing industry.

Sign up to learn more, or try our embedded beta:

For a long time I wanted to know math. I thought that I could learn what was out there by glossing over the texts and seeing the ideas, maintaining some sort of mental index to the math I might someday need to use. I would assume that the authors’ introductory instructions to do the exercises didn’t apply to me. I usually only made it a few chapters into such texts.

What changed it for me was Tristan Needham’s *Visual Complex Analysis*. I was right with him through the chapter on non-Euclidean geometry, at which point I was inspired to write a hyperbolic geometry kernel as an art project. (There’s more room to put stuff in hyperbolic space, so there are obvious engineering applications e.g. to make smaller mobile phones.)

In the process of representing circlines (circles that can have infinite radius) in hyperbolic space, I started with simple constructions, but the approach seemed clumsy and inelegant. Empowered by the text, I was able to just absorb enough of some notes by Casselman to directly transform the circlines by conjugating with Möbius transformations, all in matrix representation. Although I had previously struggled with the abstractions of group theory, I could now appreciate this deeper pattern in algebra.

The first app based on the kernel was a Poincaré disc viewer hooked up to a joystick, which I took to lowbrow art events, Burning Man, etc. People loved not only the art, but also learning about the extra motions of hyperbolic space. The desktop version is here. I later made a web version that works differently; it needs a new backend for image upload and has some other glitches, but it’s here.

Through these projects, I finally realized what the authors meant about math being best experienced, instead of just known. This journey empowered me to pursue new approaches in implicit modeling where, despite continuing my math education, I appear to have a list of questions that grows longer every time I try to write down and document something that I think I know. Perhaps doing the exercises in those textbooks would have provided similar experiences.

In computational geometry today, there appears to be a trend of putting cool number fields on explicit geometry and solving for lovely and useful properties, mostly on the surface. Cool stuff, but math-heavy. There are many other techniques that have created pockets of interest in the past, such as level sets, subdivision surfaces, and even solid modeling itself, all of which seem to have mathematical barriers to entry. Academic circles often appear to reinforce variations on particularly beautiful or useful constructs like these, but you have to work to get there. If you are pursuing an academic career, you likely have the support of your institution to ramp up into such territory, but I have found it difficult to navigate independently. Fortunately, the community appears to offer plenty of on-ramps, such as Alec Jacobson’s and Keenan Crane’s.

If there is any one piece of advice I can offer, it is to be intentional with your curiosity. If you find yourself becoming interested in a problem, enable yourself to pursue it with reasonable boundaries. Stop and smell the flowers when they appear. Don’t worry about where things will lead. Also, don’t confuse commercial activity with art or educational experiments. In the former, the customer’s voice and deployment should dominate priorities, but in the latter you have complete creative authority, and time is on your side.

Meanwhile, I’m thrilled to be working through Needham’s visual approach to differential geometry, and look forward to someday finishing *Visual Complex Analysis*.

When she finished, we went into nTopology and plotted the sum field, reflecting around the origin. Looking at it this way, we can think of the sum as adding the distances to two infinite lines, as defined by the values of our rows and columns. We call this sum field a kind of *norm field*.

On the other hand, we can place a pawn on our zero and count as it walks down and right in the table. Somehow, the table always knows exactly how many steps the pawn takes. We can interpret this board as a *distance field* to the origin.

What happens if we repeat the same process with subtraction? Notice how the sum and difference contour lines and slopes are perpendicular at every point.
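For the analytically inclined, the perpendicularity follows because \(\grad(A+B) \cdot \grad(A-B) = \norm{\grad{A}}^2 - \norm{\grad{B}}^2 = 0\) when both fields have unit gradient. A quick numeric sketch with two point-distance fields (names are mine):

```python
import math

def dist(p, c):
    """Distance field to a single point c."""
    return math.hypot(p[0] - c[0], p[1] - c[1])

def grad(f, p, h=1e-6):
    """Central-difference gradient of a scalar field."""
    return ((f((p[0] + h, p[1])) - f((p[0] - h, p[1]))) / (2 * h),
            (f((p[0], p[1] + h)) - f((p[0], p[1] - h))) / (2 * h))

a = lambda p: dist(p, (-1.0, 0.0))
b = lambda p: dist(p, (1.0, 0.0))
s = lambda p: a(p) + b(p)     # contours are confocal ellipses
d = lambda p: a(p) - b(p)     # contours are confocal hyperbolas

p = (0.3, 0.7)
gs, gd = grad(s, p), grad(d, p)
dot = gs[0] * gd[0] + gs[1] * gd[1]
assert abs(dot) < 1e-6        # the two families cross at right angles
```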

In nTopology, we can also construct *epigraphs* of our sum and difference fields:

Let’s fabricate those epigraphs from magnetic tiles, relating positive and negative Gaussian curvature to the convexity and concavity of curves. Positive curvature comes from the sum field, and negative curvature from the subtraction field. Ball-like ellipses and horse-saddle hyperbolas are easily found on the surface geometry couch, especially if it has a cover like ours.

For the next discussion, consider also assembling four squares. We can compare those four squares to the four equilateral triangles on the spherical model and the four diamonds (double equilateral triangles) in the hyperbolic model. What happens when we take the pieces around each center apart and look at them flattened?

(It might help to arrange the pieces next to each other, if your student doesn’t do it first.)

X quickly observed that the excess angle on the hyperbolic side completed the elliptical side, so they were on average flat.

To complement addition, here’s the subtraction image from nTop, in case you want to print it:

nTopology, nTop, nTop Platform, Field-Driven Design, and Element are trademarks of nTopology Inc.
