Prosecution Insights
Last updated: April 19, 2026
Application No. 17/747,953

GENERATING PRISMATIC CAD MODELS BY MACHINE LEARNING

Final Rejection — §102, §103
Filed
May 18, 2022
Examiner
MORRIS, JOSEPH PATRICK
Art Unit
2188
Tech Center
2100 — Computer Architecture & Software
Assignee
Autodesk, Inc.
OA Round
2 (Final)
Grant Probability
27% (At Risk)
Expected OA Rounds
3-4
Time to Grant
4y 6m
With Interview
77%

Examiner Intelligence

Grants only 27% of cases
Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)

Strong +50% interview lift
Interview Lift: +50.0% among resolved cases with an interview

Typical timeline
Avg Prosecution: 4y 6m • 34 applications currently pending

Career history
Total Applications: 49, across all art units

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Tech Center average is an estimate • Based on career data from 15 resolved cases
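For readers checking the arithmetic behind these figures, a minimal sketch of how the deltas relate to the rates, assuming each "vs TC avg" number is simply the examiner's rate minus the Tech Center average in percentage points (the variable names are illustrative, not from the underlying tool):

```python
# Illustrative arithmetic only; assumes "vs TC avg" = examiner rate - TC average,
# both expressed in percentage points.
examiner_rate = {"101": 30.9, "103": 34.1, "102": 11.0, "112": 21.3}
delta_vs_tc = {"101": -9.1, "103": -5.9, "102": -29.0, "112": -18.7}

for statute in examiner_rate:
    implied_tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
    print(f"§{statute}: examiner {examiner_rate[statute]:.1f}%, "
          f"implied TC average {implied_tc_avg:.1f}%")  # 40.0% in each case
```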

Office Action

Grounds: §102, §103
DETAILED ACTION

Claims 1-25 are presented for examination. This Office Action is in response to submission of documents on December 18, 2025.

Rejection of claims 1-25 under 35 U.S.C. 101 for being directed to unpatentable subject matter is withdrawn. Rejection of claims 14 and 18 under 35 U.S.C. 112(b) as being indefinite is withdrawn. Rejection of claims 1, 7, 17-19, and 23 under 35 U.S.C. 102(a)(1) as being anticipated by Kania is withdrawn. Rejection of claims 2, 20, and 24 under 35 U.S.C. 103 as being obvious over Kania in view of Clark is withdrawn. Rejection of claim 3 under 35 U.S.C. 103 as being obvious over Kania in view of Clark and Park is withdrawn. Rejection of claims 4, 6, and 21 under 35 U.S.C. 103 as being obvious over Kania in view of Park and Jayaraman is withdrawn. Rejection of claim 5 under 35 U.S.C. 103 as being obvious over Kania in view of Park, Jayaraman, and Chang is withdrawn.

New rejection of claims 1, 7, 17-19, and 23 under 35 U.S.C. 103 as being obvious over Kania in view of Yang. New rejection of claims 2, 20, and 24 under 35 U.S.C. 103 as being obvious over Kania in view of Yang and Clark. New rejection of claim 3 under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Clark, and Park. New rejection of claims 4, 6, and 21 under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Park, and Jayaraman. New rejection of claim 5 under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Park, Jayaraman, and Chang.

Claims 8-16, 22, and 25 are objected to as being dependent upon a rejected base claim.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding the rejection of claims 14 and 18 under 35 U.S.C. 112(b), Examiner agrees that the amendments overcome the rejection. Accordingly, the rejection has been withdrawn.

Regarding the rejection of claims 1-25 as being directed to unpatentable subject matter, Examiner agrees that at least "searching, in the database of sketch models, using the input embedding as a query embedding to find the 2D parametric sketch model having an embedding that satisfies a distance criterion to the query embedding in the embedding space of the 2D autoencoder" cannot be performed in a human mind and does not recite mathematical concepts that would render the limitation a judicial exception. Thus, the newly added limitation is an additional element that, when viewed as part of the claim as a whole, amounts to significantly more than the recited abstract ideas. Accordingly, the rejection of claims 1-25 under 35 U.S.C. 101 has been withdrawn.

Regarding the rejection of independent claims 1, 19, and 23 under 35 U.S.C. 102(a)(1), Examiner agrees that the claims, as currently amended, are neither taught nor disclosed by Kania. Accordingly, the rejection has been withdrawn. However, in light of additional searching and consideration, the claims are newly rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang, et al. (U.S. Pat. Pub. No. 2021/0117648). Other rejected claims are rejected for depending from one of claims 1, 19, or 23.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7, 17-19, and 23 are rejected under 35 U.S.C. 103 as being obvious over Kania, et al. ("UCSG-NET - Unsupervised Discovering of Constructive Solid Geometry Tree," hereinafter "Kania") in view of Yang, et al. (U.S. Pat. Pub. No. 2021/0117648, hereinafter "Yang").

Claim 1

Kania discloses a method comprising:

obtaining an input embedding that encodes a representation of a target two-dimensional (2D) shape;

Encoder: We process the input object I by mapping it into low dimensional latent vector z of length dz using an encoder fθ, e.g. fθ(I) = z. Depending on the data type, we use either a 2D or 3D convolutional neural network as an encoder. The latent vector is then passed to the primitive parameter prediction network. Kania at pg. 2.

The "latent vector" is analogous to an encoded "input embedding."

processing the input embedding using a 2D decoder of a 2D autoencoder to obtain a decoded representation of the target 2D shape,

DeepSDF and DualSDF use a variational autodecoder approach to generate shapes. Kania at pg. 6.

wherein the 2D autoencoder comprises a 2D encoder that processes a representation of a 2D object to generate an object embedding, and the 2D decoder that processes the object embedding to generate the decoded representation of the 2D object;

Encoder: We process the input object I by mapping it into low dimensional latent vector z of length dz using an encoder fθ, e.g. fθ(I) = z. Depending on the data type, we use either a 2D or 3D convolutional neural network as an encoder. The latent vector is then passed to the primitive parameter prediction network. Kania at pg. 2. DeepSDF and DualSDF use a variational autodecoder approach to generate shapes. Kania at pg. 6.

determining a fitted 2D parametric sketch model for the input embedding, comprising: finding a 2D parametric sketch model for the input embedding using a search in an embedding space of the 2D autoencoder and a database of sketch models associated with the 2D autoencoder, wherein a shape of the 2D parametric sketch model is determined by one or more parameter values of the 2D parametric sketch model,

All used primitives are represented as signed distance fields D. It means, that instead of having a discretized mesh, we evaluate distance of any point x to the surface of the object. Such a formulation provides continuous representation of an object. Kania at pg. 12.

A "primitive" is analogous to a "sketch model."

fitting the 2D parametric sketch model to the decoded representation of the target 2D shape by modifying the one or more parameter values of the 2D parametric sketch model to produce the fitted 2D parametric sketch model; and

Primitive parameter prediction network: The role of this component is to extract the parameters of the primitives, given the latent representation of the input object. Kania at pg. 3.

using the fitted 2D parametric sketch model in a computer modeling program.

Our method is the first one that is able to predict CSG tree without any supervision and achieve state-of-the-art results on the 2D reconstruction task comparing to CSG-NET trained in a supervised manner. Predictions of our method are fully interpretable and can aid in CAD applications. Kania at pg. 2.

Kania does not appear to disclose: wherein the finding comprises: searching, in the database of sketch models, using the input embedding as a query embedding to find the 2D parametric sketch model having an embedding that satisfies a distance criterion to the query embedding in the embedding space of the 2D autoencoder;

Yang, which is analogous art, discloses:

wherein the finding comprises: searching, in the database of sketch models,

The data 210 includes 3D model data 212, descriptor database 214, geometric-description vector data 216, and topological-description vector data 218. In an example implementation, the data 210 may reside in the memory 204. Further, in some examples, the data 210 may be stored in an external database, but accessible to the processor 202 of the system 200. Yang at [0022].

using the input embedding as a query embedding

[A]t block 408, the skeleton view is processed through the second trained CNN to determine a second shape-description vector (SDV2). At block 410. Yang at [0050].

to find the 2D parametric sketch model having an embedding that satisfies a distance criterion to the query embedding in the embedding space of the 2D autoencoder;

After obtaining the cSDV, the query engine 208 obtains an FDV from the descriptor database 214 based on Euclid distance D between the cSDV and each of the FDVs stored in the descriptor database 214. Yang at [0036].

Yang is analogous art to the claimed invention because both are related to identifying 3D models of objects that are stored in a database. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to utilize the database disclosed in Yang to utilize an embedding, analogous to a vector, to search a database of models to identify a model that matches the embedded object. Motivation to combine includes improved accuracy in the identification. See, e.g., Yang at [0010] ("The approaches of the present subject matter enable identification of 3D models from a database with enhanced accuracy.").
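Mechanically, the newly mapped "searching" limitation is a nearest-neighbor lookup in the autoencoder's latent space. A minimal sketch of that mechanism, assuming Euclidean distance as the distance criterion (per the Yang passage above); the function and parameter names are illustrative and come from no cited reference:

```python
import numpy as np

def find_sketch_model(query_embedding, model_db, max_distance=None):
    """Find the sketch model whose embedding best satisfies a distance
    criterion to the query embedding.

    model_db: iterable of (embedding, sketch_model) pairs, where each
    embedding was produced by the same 2D encoder as the query.
    """
    best_model, best_dist = None, float("inf")
    for embedding, model in model_db:
        dist = np.linalg.norm(embedding - query_embedding)  # Euclidean distance
        if dist < best_dist:
            best_model, best_dist = model, dist
    if max_distance is not None and best_dist > max_distance:
        return None  # no database entry satisfies the distance criterion
    return best_model
```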
Claim 7

Kania discloses: wherein the one or more intermediate 2D decoders include a second 2D decoder that is different from the 2D decoder of the 2D autoencoder.

The primitive parameter prediction network gφ consists of multiple fully connected layers interleaved with activation functions. The last layer predicts parameters of primitives in the SDF representation. We consider primitives such as boxes and spheres that allow us to calculate signed distance analytically. Kania at pg. 3.

Each of the "primitive parameter network" components is analogous to an "intermediate 2D decoder," which is different from the "decoder" (see Figure 1).
Claim 17

Kania discloses: wherein the using the fitted 2D parametric sketch model comprises: displaying the fitted 2D parametric sketch model in a user interface of the computer modeling program.

Hence, for a situation when a 3D object was modeled with a sculpting tool, the model can approximate it with single primitives and operations between them. Then, such a reconstruction can be integrated into existing CAD models. We find that beneficial in speeding up the prototyping process in 3D modeling. However, inexperienced CAD software users can rely heavily on presented assumptions. In the era of 3D printing ubiquity, printed elements out of reconstructed CSG parse trees can be erroneous, thus breaking the whole item. Therefore, we note that integrating our method into existing software should serve mainly as a prototyping device. Kania at pg. 9.

Claim 18

Kania discloses: obtaining an input 2D image, wherein the input 2D image includes two or more 2D shapes;

We present qualitative evaluation results in Figure 4 and visualize used shapes for the reconstruction. The UCSG-NET uses proper operations at each level that lead to the correct shape reconstruction. In most cases, it puts rectangles only. The nature of the dataset causes that phenomenon. To avoid possible errors, the network often uses a union of overlapping shapes to pass the primitive untouched. Kania at pg. 7.

generating sub-image portions from the input 2D image, wherein each sub-image portion depicts a 2D shape of the two or more 2D shapes;

See Figure 4, wherein the "CSG tree" includes the "primitives," analogous to "sub-image portions."

[Kania Figure 4 reproduced in the original action.]

generating a respective sub-image portion embedding for each sub-image portion of the sub-image portions;

The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. Kania at pg. 7.

See also Figure 1, wherein the Encoder encodes the original image into a vector "z" and then a decoder processes the vector and generates primitives:

[Kania Figure 1 reproduced in the original action.]

determining fitted 2D parametric sketch models, comprising: performing the determining each fitted 2D parametric sketch model for each sub-image portion embedding; and

The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. Kania at pg. 7.

The primitive parameter prediction network gφ consists of multiple fully connected layers interleaved with activation functions. The last layer predicts parameters of primitives in the SDF representation. We consider primitives such as boxes and spheres that allow us to calculate signed distance analytically. Kania at pg. 3.

generating a combined 2D parametric sketch model by combining the fitted 2D parametric sketch models at respective locations of the sub-image portions.
See the "Reconstruction" images of Figure 4, which are composed of sub-images illustrating "primitives" combined into a "fitted 2D parametric sketch model."

Claim 19

Claim 19 recites: a non-transitory storage medium having instructions of a computer aided design program stored thereon; and

Meshes can be found in computer-aided design applications, where a graphic designer often composes complex shapes out simple shapes primitives, such as boxes and spheres. Kania at pg. 1.

one or more data processing apparatus configured to run the instructions of the computer aided design program to perform operations specified by the instructions of the computer aided design program

Training takes about two days on Nvidia Titan RTX GPU. Kania at pg. 8.

Claim 19 further recites a method that is substantially the same as the method disclosed in claim 1. Accordingly, for at least the same reasons and based on the same prior art as claim 1, claim 19 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang.

Claim 23

Claim 23 recites: A non-transitory computer-readable medium encoding instructions operable to cause data processing apparatus to perform operations

Meshes can be found in computer-aided design applications, where a graphic designer often composes complex shapes out simple shapes primitives, such as boxes and spheres. Kania at pg. 1.

Claim 23 further recites a method that is substantially the same as the method disclosed in claim 1. Accordingly, for at least the same reasons and based on the same prior art as claim 1, claim 23 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang.

Claims 2, 20, and 24 are rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang and Clark, et al. (U.S. Patent No. 11,494,695, hereinafter "Clark").

Claim 2

Kania discloses: obtaining parameterized instantiations of 2D parametric sketch models;

All used primitives are represented as signed distance fields D. It means, that instead of having a discretized mesh, we evaluate distance of any point x to the surface of the object. Such a formulation provides continuous representation of an object. Kania at pg. 12.

A "primitive" is analogous to a "sketch model."

generating 2D training images from the parameterized instantiations of the 2D parametric sketch models, wherein each of the 2D training images corresponds to a parameterized instantiation of a 2D parametric sketch model; and

The goal is to find compositions of primitives that minimize the reconstruction error. We employ mean squared error of predicted occupancy values Ô(L) with the ground truth O*. Values are calculated for X which combines points sampled from the surface of the ground truth, and randomly sampled inside a unit cube (or square for 2D case)… Kania at pg. 5.

The "primitives" (analogous to "parametric sketch models") are utilized to generate training data for the encoder/decoder.

Kania and Yang do not appear to disclose: training the 2D autoencoder on the 2D training images, comprising: for each of the 2D training images: processing the 2D training image using the 2D encoder to generate an embedding; and processing the embedding using the 2D decoder to generate a decoded 2D image; computing a value of a loss function by comparing each of the 2D training images with its corresponding decoded 2D image; and updating parameters of the 2D encoder and parameters of the 2D decoder based on the value of the loss function.
Clark, which is analogous art, discloses:

training the 2D autoencoder on the 2D training images, comprising: for each of the 2D training images: processing the 2D training image using the 2D encoder to generate an embedding; and

The encoder-decoder system 100 is configured to receive an input 102 and encode the input 102 into an embedding 106 that is in a lower dimensional space relative to the input 102. Clark at col. 3, lines 57-60. For example, if the task is an image autoencoding task, the input 102 is structured data (e.g., an array) representing one image, and the output 112 generated by the engine 108 is data representing the reconstructed image. Clark at col. 4, lines 14-17.

processing the embedding using the 2D decoder to generate a decoded 2D image;

The system 100 then decodes the embedding 106 to an output 112. Clark at col. 3, lines 65-66.

computing a value of a loss function by comparing each of the 2D training images with its corresponding decoded 2D image; and

The system computes an objective function (308) with respect to the generated training output. An objective function evaluates the quality of the generated training output, i.e., by measuring an error between the generated training output and the target output. In general, the system uses an objective function that is well-suited to the machine learning task the neural network is being trained to perform. For example, the L2 loss function which computes a least square error between the two outputs, is a common choice of objective functions in regression machine learning tasks that involve, e.g., images and speech data. Clark at col. 8, lines 55-65.

updating parameters of the 2D encoder and parameters of the 2D decoder based on the value of the loss function.

The system updates, e.g., based on the backpropagated gradient, respective parameter values of the decoder replica and the corresponding portion of the encoder (314). In particular, the system can use any appropriate machine learning training techniques to update the parameter value. Examples of training techniques include stochastic gradient descent, Adam, and rms-prop. Clark at col. 9, lines 21-26.

Clark is analogous art to the claimed invention because both are directed to training an encoder/decoder to generate embeddings of images. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the training of Clark with the autoencoders and decoders of Kania to result in a trained encoder and decoder that converts an image to an embedding. Motivation to combine includes improving the quality and performance of an encoder by measuring the loss of the encoding/decoding process and adjusting the encoder and decoder accordingly.

Claim 20

Claim 20 recites a system that performs a method that is substantially the same as the method disclosed in claim 2. Accordingly, for at least the same reasons and based on the same prior art as claim 2, claim 20 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang and Clark.

Claim 24

Claim 24 recites a non-transitory storage medium having instructions to perform a method that is substantially the same as the method disclosed in claim 2. Accordingly, for at least the same reasons and based on the same prior art as claim 2, claim 24 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang and Clark.
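Read together, the Clark passages mapped above describe a standard autoencoder training loop: encode, decode, score the reconstruction, update both networks. A minimal PyTorch sketch of such a loop, with small fully connected stand-ins for the 2D encoder and decoder (real image autoencoders are typically convolutional); none of this is code from the cited references:

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the claimed 2D encoder and 2D decoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
decoder = nn.Sequential(nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
loss_fn = nn.MSELoss()  # compares each training image with its decoded image
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def train_step(batch):  # batch: (N, 1, 64, 64) rendered sketch-model images
    embedding = encoder(batch)      # 2D encoder -> embedding
    decoded = decoder(embedding)    # 2D decoder -> decoded 2D image
    loss = loss_fn(decoded, batch)  # loss from the image comparison
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                # updates encoder AND decoder parameters
    return loss.item()
```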
Claim 3 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Clark, and Park, et al. ("DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation," hereinafter "Park").

Claim 3

Kania and Yang do not appear to disclose: training the 2D autoencoder on the 2D training images comprises: generating a signed distance field image from the 2D training image; and processing the signed distance field image using the 2D encoder to generate the embedding.

Clark discloses: processing the signed distance field image using the 2D encoder to generate the embedding.

For example, if the task is an image autoencoding task, the input 102 is structured data (e.g., an array) representing one image, and the output 112 generated by the engine 108 is data representing the reconstructed image. Clark at col. 4, lines 14-17.

The encoder of Clark takes, as input, an array representing an object. The array can be, for example, a signed distance field, as disclosed by Park (see below).

Kania and Clark do not appear to disclose: training the 2D autoencoder on the 2D training images comprises: generating a signed distance field image from the 2D training image; and

Park, which is analogous art, discloses: training the 2D autoencoder on the 2D training images comprises: generating a signed distance field image from the 2D training image; and

Our DeepSDF representation applied to the Stanford Bunny: (a) depiction of the underlying implicit surface SDF = 0 trained on sampled points inside SDF < 0 and outside SDF > 0 the surface, (b) 2D cross-section of the signed distance field, (c) rendered 3D surface recovered from SDF = 0. Note that (b) and (c) are recovered via DeepSDF. Park at pg. 2, Figure 2 description.

[Park Figure 2 reproduced in the original action.]

Park is analogous art to the claimed invention because both are directed to representing an image as a signed distance field. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine Clark with the other references to result in a system that trains an autoencoder with images represented by signed distance fields. Motivation to combine includes both references explicitly indicating that the DeepSDF representation can be utilized to perform the task; thus, using the representation of Park with the training process of Clark would be obvious to try, based on Park contemplating utilizing its process in the context of Clark.
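The limitation Park is cited for, generating a signed distance field image from a 2D training image, can be sketched with SciPy's Euclidean distance transform. This assumes a binary input mask and the inside-negative/outside-positive sign convention quoted from Park; it is an illustration, not code from any cited reference:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def binary_image_to_sdf(mask):
    """Signed distance field from a binary 2D image (True/1 = inside the shape).

    distance_transform_edt(a) gives, at each pixel, the distance to the
    nearest zero pixel of `a`, so outside pixels get their distance to the
    shape and inside pixels get their distance to the background. Signs
    follow the SDF < 0 inside / SDF > 0 outside convention quoted from Park.
    """
    mask = np.asarray(mask, dtype=bool)
    outside = distance_transform_edt(~mask)  # positive away from the shape
    inside = distance_transform_edt(mask)    # positive toward the interior
    return outside - inside
```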
Claims 4, 6, and 21 are rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Park, and Jayaraman, et al. ("SolidGen: An Autoregressive Model for Direct B-rep Synthesis," hereinafter "Jayaraman").

Claim 4

Kania discloses: processing the initial input embedding using a sub-embedding decoder to obtain sub-embeddings including the input embedding, wherein the sub-embeddings encode 2D shapes that define the representation of the target 3D object;

The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. Kania at pg. 7.

See also Figure 1, wherein the Encoder encodes the original image into a vector "z" and then a decoder processes the vector and generates primitives:

[Kania Figure 1 reproduced in the original action.]

generating parametric sketch models, comprising: processing each of the sub-embeddings using one or more intermediate 2D decoders to obtain the 2D shapes that define the representation of the target 3D object;

The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. Kania at pg. 7.

See also Figure 1, wherein the decoder processes the vector into primitives.

The primitive parameter prediction network gφ consists of multiple fully connected layers interleaved with activation functions. The last layer predicts parameters of primitives in the SDF representation. We consider primitives such as boxes and spheres that allow us to calculate signed distance analytically. Kania at pg. 3.

Each of the "primitive parameter network" components is analogous to an "intermediate 2D decoder."

generating each of intermediate embeddings by processing each of the 2D shapes using the 2D encoder of the 2D autoencoder; and

Then, we employ GRU unit [11] that takes the latent code z(l) and encoded V̂(l) as an input, and outputs the updated latent code z(l+1) for the next layer. Kania at pg. 4.

z(l+1) is analogous to the intermediate embeddings, which is processed utilizing the output shapes (see Figure 1).

performing the determining a respective parametric sketch model of the parametric sketch models for each of the intermediate embeddings, wherein the respective parametric sketch model is the fitted 2D parametric sketch model, wherein the decoded representation of the target 2D shape is each of the 2D shapes;

Operations can be repeated to output multiple shapes. Note that the computation overhead increases linearly with the number of output shapes per layer. The whole procedure can be stacked in l ≤ L layers to create a CSG network. The L-th layer outputs a union since it is guaranteed to return a non-empty shape in most cases. Kania at pg. 4.

Park discloses: obtaining an initial input embedding that encodes a representation of a target three-dimensional (3D) object;

Instead, we want a model that can represent a wide variety of shapes, discover their common properties, and embed them in a low dimensional latent space. To this end, we introduce a latent vector z, which can be thought of as encoding the desired shape, as a second input to the neural network as depicted in Fig. 3b. Conceptually, we map this latent vector to a 3D shape represented by a continuous SDF. Park at pg. 4.

a 3D autoencoder

Auto-encoder outputs are expected to replicate the original input given the constraint of an information bottleneck between the encoder and decoder. The ability of auto-encoders as a feature learning tool has been evidenced by the vast variety of 3D shape learning works in the literature [16, 49, 2, 22, 55] who adopt auto-encoders for representation learning. Park at pg. 3.
Kania, Yang, and Park do not appear to disclose: generating a set of extrusion parameters from the sub-embeddings; and generating a 3D boundary representation (B-Rep) model of the target 3D object, wherein the generating comprises using the fitted 2D parametric sketch models in a construction sequence to construct the 3D B-Rep model through extrusion into a 3D space, wherein the construction sequence comprises the set of extrusion parameters.

Jayaraman, which is analogous art, discloses:

generating a set of extrusion parameters from the sub-embeddings; and

The Parametric Variations (PVar) Dataset is synthetically designed for testing SolidGen on the class-conditional generation task, since categorically labeled B-rep datasets are unavailable. It consists of 60 template solids with parameters controlling both the sketch dimensions, extrusion distances and wires that are extruded. By varying these parameters we can generate solids with different geometries but near-identical topology. A total of 120,000 models, 2000 for each of the 60 topological templates were created. Jayaraman at pg. 5.

generating a 3D boundary representation (B-Rep) model of the target 3D object, wherein the generating comprises using the fitted 2D parametric sketch models in a construction sequence to construct the 3D B-Rep model through extrusion into a 3D space, wherein the construction sequence comprises the set of extrusion parameters.

These methods produce a sequence of sketch and extrude modeling operations using a neural network and the B-rep is recovered in postprocess with a solid modeling kernel that executes the operations. Jayaraman at pg. 1.

The Parametric Variations (PVar) Dataset is synthetically designed for testing SolidGen on the class-conditional generation task, since categorically labeled B-rep datasets are unavailable. It consists of 60 template solids with parameters controlling both the sketch dimensions, extrusion distances and wires that are extruded. By varying these parameters we can generate solids with different geometries but near-identical topology. A total of 120,000 models, 2000 for each of the 60 topological templates were created. Jayaraman at pg. 5.

Jayaraman is analogous art to the claimed invention because both are directed to B-rep representations of images and using extrusion parameters to generate B-rep models. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the extrusion and B-rep model generation of Jayaraman with the parametric sketch generation of Kania to result in a system that represents sub-embeddings of 2D images that comprise a 3D image as a B-rep model. Motivation to combine includes resulting CAD representations of objects that are "more realistic…than the current state-of-the-art approach." Jayaraman at pg. 1.

Claim 6

Park discloses: wherein the one or more intermediate 2D decoders include the 2D decoder of the 2D autoencoder.

This motivates us to use an auto-decoder for learning a shape embedding without an encoder as depicted in Fig. 4. We show that applying an auto-decoder to learn continuous SDFs leads to high quality 3D generative models. Further, we develop a probabilistic formulation for training and testing the auto-decoder that naturally introduces latent space regularization for improved generalization. To the best of our knowledge, this work is the first to introduce the auto-decoder learning method to the 3D learning community. Park at pg. 5.
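The limitation Jayaraman is cited for amounts to sketch-and-extrude construction. Below is a toy sketch of the extrusion step, turning one closed 2D polygon into a prism as a vertex/face list; a real pipeline would hand the fitted sketch and extrusion parameters to a solid modeling kernel to obtain a true B-rep, as Jayaraman describes, and all names here are illustrative:

```python
def extrude_polygon(polygon_2d, distance):
    """Extrude a closed 2D polygon along +Z into a simple prism.

    polygon_2d: list of (x, y) vertices in order around the boundary.
    Returns (vertices, faces): vertices as (x, y, z) triples, faces as
    vertex-index lists (bottom cap, top cap, then one quad per side wall).
    """
    n = len(polygon_2d)
    bottom = [(x, y, 0.0) for x, y in polygon_2d]
    top = [(x, y, float(distance)) for x, y in polygon_2d]
    vertices = bottom + top
    faces = [list(range(n)),          # bottom cap
             list(range(n, 2 * n))]   # top cap
    for i in range(n):                # side walls
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces
```

For example, extrude_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], 2.0) returns the eight corners and six faces of a 1 x 1 x 2 box.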
Claim 21

Claim 21 recites a system that performs a method that is substantially the same as the method disclosed in claim 4. Accordingly, for at least the same reasons and based on the same prior art as claim 4, claim 21 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Park, and Jayaraman.

Claim 5 is rejected under 35 U.S.C. 103 as being obvious over Kania in view of Yang, Park, Jayaraman, and Chang, et al. (U.S. Patent No. 12,346,641, hereinafter "Chang").

Claim 5

Kania, Yang, Park, and Jayaraman do not appear to disclose: wherein the sub-embedding decoder comprises a multi-layer perceptron (MLP).

Chang, which is analogous art, discloses: wherein the sub-embedding decoder comprises a multi-layer perceptron (MLP).

Inspired by GraphNet ([Battaglia 2018]), NeuralSim 204 contains three steps: encoder, propagation, and decoder. The encoder first maps the input node features into the embedding space...Instead of using a standard multi-layer perceptron (MLP) as a decoder, embodiments of the invention may utilize a Structured Decoder (SD) for outputting drift ratios in each story. Chang at col. 5, line 58-col. 6, line 12.

The encoder/decoder have an embedding as an intermediate step. Further, the disclosure describes an MLP as a standard decoder.

Chang is analogous art to the claimed invention because both disclose MLPs as the standard method for decoder construction. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to utilize an MLP, as disclosed by Chang, as the decoder disclosed by Kania. Motivation to combine includes disclosure in Chang indicating that MLPs are the standard structure for the types of decoders disclosed by Kania; thus, it would be obvious to try the same structure with the expected results of a decoder that operates in the standard manner. See Chang at col. 5, line 58-col. 6, line 12; col. 6, lines 40-52, indicating the standard use of MLPs as a decoder.
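The claim 5 limitation, a sub-embedding decoder built as a multi-layer perceptron, fits in a few lines. The dimensions below are pure assumptions for illustration (one 256-dimensional input embedding decoded into four 128-dimensional sub-embeddings); the cited references supply no such values:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 256-d input embedding -> 4 sub-embeddings of 128 dims.
NUM_SUB, SUB_DIM = 4, 128
sub_embedding_decoder = nn.Sequential(
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, NUM_SUB * SUB_DIM),
    nn.Unflatten(1, (NUM_SUB, SUB_DIM)),  # -> (batch, 4, 128)
)

z = torch.randn(2, 256)                    # a batch of two input embeddings
sub_embeddings = sub_embedding_decoder(z)  # shape: (2, 4, 128)
```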
Allowable Subject Matter

Claims 8-16, 22, and 25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Behandish, et al., U.S. Pat. App. No. 2022/0113689
Myronenko, et al., U.S. Pat. No. 10,740,901
Sanchez, et al., U.S. Pat. App. No. 2020/0210845

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH MORRIS whose telephone number is (703) 756-5735. The examiner can normally be reached M-F 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ryan Pitaro, can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH P MORRIS/
Examiner, Art Unit 2188

/RYAN F PITARO/
Supervisory Patent Examiner, Art Unit 2188

Prosecution Timeline

May 18, 2022
Application Filed
Sep 29, 2025
Non-Final Rejection — §102, §103
Dec 12, 2025
Applicant Interview (Telephonic)
Dec 12, 2025
Examiner Interview Summary
Dec 18, 2025
Response Filed
Feb 26, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579465
ESTIMATING RELIABILITY OF CONTROL DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12560921
MACHINE LEARNING PLATFORM FOR SUBSTRATE PROCESSING
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 77% (+50.0%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
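A minimal sketch of how these figures appear to relate, assuming the interview lift is added to the career allow rate in percentage points (illustrative arithmetic, not the tool's actual model):

```python
# Illustrative arithmetic only, not the tool's actual model.
career_allow_rate = 27.0  # percent; 4 granted / 15 resolved = 26.7%, shown as 27%
interview_lift = 50.0     # percentage points, per the examiner stats above
with_interview = career_allow_rate + interview_lift
print(f"Grant probability with interview: {with_interview:.0f}%")  # 77%
```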

Free tier: 3 strategy analyses per month