Prosecution Insights
Last updated: April 19, 2026
Application No. 18/305,276

3D RECONSTRUCTION FROM IMAGES

Final Rejection §103
Filed
Apr 21, 2023
Examiner
SHIFERAW, HENOK ASRES
Art Unit
2676
Tech Center
2600 — Communications
Assignee
DASSAULT SYSTEMES
OA Round
2 (Final)
90% (Favorable)
Grant Probability
3-4
OA Rounds
1y 10m
To Grant
91%
With Interview

Examiner Intelligence

Grants 90% — above average
90%
Career Allow Rate
518 granted / 578 resolved • +27.6% vs TC avg
+1.5%
Interview Lift (minimal)
1y 10m
Avg Prosecution (fast prosecutor) • 19 currently pending
597
Total Applications
across all art units (career history)

Statute-Specific Performance

§101
12.3% (-27.7% vs TC avg)
§103
72.7% (+32.7% vs TC avg)
§102
6.2% (-33.8% vs TC avg)
§112
4.0% (-36.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 578 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Applicant's Amendments filed on 10/23/2025 have been entered and made of record.
Currently pending claim(s): 1–22
Independent claim(s): 1, 13, 17
Amended claim(s): 1, 8, 9, 13, 16, 17
New claim(s): 21, 22

Response to Arguments

This Office action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on 10/23/2025. In view of the specification amendments [Remarks] filed on 10/23/2025, the drawing objections have been withdrawn. Applicant's Reply (October 23, 2025) includes substantive amendments to the claims. This Office action has been updated with new grounds of rejection addressing those amendments.

Applicant's Arguments/Remarks with respect to independent claims 1, 13, and 17, spanning pages 11–17, have been considered but are moot in view of the new grounds of rejection necessitated by the amendments. The dependent claims are likewise rejected under § 103 based on the new grounds applied to the independent claims.

In addition, Applicant argues, at the top of page 17, the following statement: [Applicant's argument reproduced as an image in the original action.]

The Examiner respectfully disagrees. Uy teaches "the 3D primitive CAD object being defined by sweeping a section along a guide curve, the section being a polygon, a rounded polygon, or a set of one or more curves which forms a closed region, the guide curve being straight line or a continuous curve." The Examiner interprets the 3D primitive CAD object to be the extrusion cylinder, which is defined by parameters including an axis e, a straight line that extends through the extrusion cylinder, as shown in Figure 3 [Figure 3; pg. 3, left column, Definition 3 (Extrusion Cylinder), first paragraph]. Regarding the section, the sketch is the section, since it is a curve that forms a closed region and creates the profile, i.e., the area enclosed by the sketch, as shown in Figure 3. Uy explicitly discloses that the sketch is "a non-self intersecting, finite area, closed loop and normalized 2D sketch" [pg. 3, left column, Definition 1 (Sketch and profile), first paragraph].

Applicant further argues, on page 17, that Zou does not cure the deficiencies of Uy with respect to the previous rejection of claims 8 and 9. The Examiner respectfully disagrees. In view of Applicant's amendments, independent claims 1, 13, and 17 are rejected under § 103 over Zou et al. ("3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks") in view of newly cited art Jia et al. ("Real-time 3D reconstruction method based on monocular vision") and Uy et al. ("Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders"), as detailed in the rejection below. Zou discloses the 3D reconstruction of independent claims 1, 13, and 17, including the neural network and depth image and the newly amended limitations, by applying the neural network separately [Figure 2; pg. 903, right column, Depth map encoder, second paragraph; pg. 904, left column, Recurrent generator, first paragraph], while Jia teaches the "obtaining" and "segmenting" limitations [Figures 1–3; pg. 3–4, 2.1. Framework, first paragraph; pg. 4, 2.2. Visual Information Segmentation and Extraction, first paragraph].
Furthermore, Uy discloses the "the 3D primitive CAD object being defined by sweeping a section along a guide curve" limitation, as described above. Although Applicant argues, on the bottom of page 15, that Uy "is directed to reverse engineering from a raw geometry, namely a 3D point cloud" and that "there would be no reason to combine the teaching of Uy of the input 3D point cloud with other formats of images to achieve a 3D reconstruction of a real object," the Examiner respectfully disagrees. Uy is directed towards generating a 3D reconstruction by reverse engineering an initial input into a CAD model [pg. 1, Abstract]. Under the broadest reasonable interpretation, it would be reasonable to combine Uy with Zou in view of Jia because all references are directed towards generating a 3D reconstruction from an initial input, whether it be an RGB image, a depth image, or a point cloud.

In addition, in response to Applicant's argument that the Examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper.

For claim 8, Zou discloses the limitations of "obtaining a dataset of training samples each including a respective depth image and a ground truth 3D primitive CAD object" and "training the neural network based on the dataset" [Figures 8–9; pg. 907, left column, Real data (NYU Depth V2), first paragraph] with both synthetic data and real data (NYU Depth V2). In particular, the NYU Depth V2 training dataset is used to test the model, which includes the depth map as shown in Figure 9, and the ground truth data is labelled by Guo and Hoiem. For claim 9, Zou also discloses the limitations of "synthesizing 3D primitive CAD objects" and "generating a respective depth image of each synthesized 3D primitive CAD object" by creating synthetic depth maps from training meshes [pg. 906, left column, Synthetic data, first paragraph]. Therefore, under the broadest reasonable interpretation, the combination of Zou in view of Jia and further in view of Uy discloses the limitations of independent claims 1, 13, and 17.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
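As a concrete gloss on Uy's Definition 1 as characterized in the response above (a sketch is a non-self-intersecting, finite-area, closed 2D loop, whose enclosed area is the profile), the following Python sketch checks closure and finite area for a sampled loop via the shoelace formula. The function name and tolerance are illustrative assumptions, not anything from Uy.

```python
import numpy as np

def is_valid_sketch(loop: np.ndarray, tol: float = 1e-9) -> bool:
    """loop: (M, 2) sampled 2D polyline; valid if closed with finite enclosed area."""
    closed = np.allclose(loop[0], loop[-1], atol=tol)          # closed loop
    x, y = loop[:-1, 0], loop[:-1, 1]                          # unique vertices
    # Shoelace formula for the enclosed ("profile") area.
    area = 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))
    return closed and area > tol
```

For example, the unit square `[(0,0), (1,0), (1,1), (0,1), (0,0)]` passes with area 1, while an open polyline fails the closure test.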
Claims 1–20 are rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (Zou, Chuhang, et al., "3D-PRNN: Generating shape primitives with recurrent neural networks," Proceedings of the IEEE International Conference on Computer Vision, 2017) (hereafter, "Zou") in view of Jia et al. (Jia, Qingyu, et al., "Real-time 3D reconstruction method based on monocular vision," Sensors 21.17 (2021): 5909) (hereafter, "Jia") and further in view of Uy et al. (Uy et al., "Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders," arXiv preprint arXiv:2112.09329 (2021)) (hereafter, "Uy").

Regarding claim 1, Zou discloses [a computer-implemented method] of 3D reconstruction of at least one real object including an assembly of parts [we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives, pg. 900, Abstract], the 3D reconstruction method comprising:

obtaining a neural network [we propose 3D-PRNN, a generative recurrent neural network to accomplish this task, pg. 903, right column, 4. 3D-PRNN: 3D Primitive Recurrent Neural Networks, first paragraph] configured for generating a 3D primitive CAD object based on an input depth image [the network gets as input a single depth image and sequentially predicts primitives to form a 3D shape, pg. 903, right column, 4.1. Network Architecture, first paragraph], [the 3D primitive CAD object being defined by sweeping a section along a guide curve, the section being a polygon, a rounded polygon, or a set of one or more curves which forms a closed region, the guide curve being straight line or a continuous curve];

applying the neural network to each segment separately [Figure 2; we apply the Long Short-Term Memory (LSTM) unit inside the recurrent generator ... the prediction unit consists of L layers of recurrently connected hidden layers (we set L = 3, which is found to be sufficient to model the complex primitive distributions) that encode both the depth feature d and the previously predicted primitive x_{t-1} and then computes the output vector y_t. y_t is used to parameterize a predictive distribution Pr(x_t | y_t) over the next possible primitive x_t, pg. 904, left column, Recurrent generator, first paragraph ... at each time step t, the distribution of the next primitive is predicted as y_t, pg. 904, right column, Recurrent generator, second paragraph], each separate application of the neural network resulting in a respective single 3D primitive CAD object [Figure 2; during each iteration, we randomly initialize 10 primitives, optimize Eq. 2 for each of these primitives and add the best fitting primitive to our primitive collection, pg. 903, left column, 3.2. Optimization, first paragraph]; and

after applying the neural network to each segment separately and thereby obtaining multiple 3D primitive CAD objects, combining the 3D primitive CAD objects obtained from each segment in order to construct the 3D reconstruction of the real object [Zou, Figure 2; the complete 3D shape is then predicted using a single depth map as input to 3D-PRNN. Our model can generate a sampling of complete shapes that match the input depth, as well as the most likely configuration, pg. 906, left column, Synthetic data, first paragraph].
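For orientation, here is a minimal PyTorch-style sketch of the pipeline the rejection attributes to Zou: a convolutional encoder maps a 64x64 depth map to a 1x32 feature vector d, and a 3-layer LSTM generator predicts one primitive parameterization per time step, conditioned on d and the previously predicted primitive. Only the quoted figures (64x64 input, 1x32 feature, L = 3) come from the action; layer widths, the primitive dimension, and all module names are assumptions.

```python
import torch
import torch.nn as nn

class DepthEncoder(nn.Module):
    """Encode a 64x64 depth map into a 1x32 feature vector d (per the quoted text)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 16 -> 8
        )
        self.fc = nn.Linear(32 * 8 * 8, 32)

    def forward(self, depth):                 # depth: (B, 1, 64, 64)
        return self.fc(self.conv(depth).flatten(1))

class RecurrentGenerator(nn.Module):
    """L=3 recurrent layers; each step conditions on d and the previous primitive."""
    PRIM_DIM = 10                              # assumed size of one primitive's parameter vector

    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(32 + self.PRIM_DIM, 64, num_layers=3, batch_first=True)
        self.head = nn.Linear(64, self.PRIM_DIM)

    def forward(self, d, steps=8):
        x_prev = torch.zeros(d.size(0), self.PRIM_DIM)
        state, prims = None, []
        for _ in range(steps):                 # one primitive per time step
            y, state = self.lstm(torch.cat([d, x_prev], dim=1).unsqueeze(1), state)
            x_prev = self.head(y.squeeze(1))   # parameters of the next primitive
            prims.append(x_prev)
        return torch.stack(prims, dim=1)       # (B, steps, PRIM_DIM)

depth = torch.randn(1, 1, 64, 64)
primitives = RecurrentGenerator()(DepthEncoder()(depth))
```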
Zou fails to explicitly disclose a computer-implemented method [of 3D reconstruction of at least one real object including an assembly of parts, the 3D reconstruction method comprising: obtaining a neural network configured for generating a 3D primitive CAD object based on an input depth image,] the 3D primitive CAD object being defined by sweeping a section along a guide curve, the section being a polygon, a rounded polygon, or a set of one or more curves which forms a closed region, the guide curve being straight line or a continuous curve; obtaining (i) a natural image color or grayscale photograph displaying a real-world scene including the real object and (ii) a depth image representing the real object; segmenting the depth image based at least on the natural image color or grayscale photograph, each segment representing at most a respective part of the assembly.

However, Jia teaches a computer-implemented method [of 3D reconstruction of at least one real object including an assembly of parts] [this experiment is based on Windows 64-bit platform ... OpenCV is used to process image data, and OpenGL is used for real-time reconstruction and visualization ... the RGB-D camera is connected to a 3.2 GHz i7-8700 CPU, 16.0 G RAM, and Nvidia GTX 1660 graphics computer, pg. 9–10, 3.1. Experimental Setting, first paragraph]; obtaining (i) a natural image color or grayscale photograph displaying a real-world scene including the real object [Figures 1–2; a single RGB-D camera is used to collect visual information, pg. 3, 2.1. Framework, first paragraph ... as shown in Figure 2, the RGB image collected in real time, pg. 4–5, 2.2. Visual Information Segmentation and Extraction, first paragraph] and (ii) a depth image representing the real object [Figure 1; the depth image collected by the RGB-D camera, pg. 7, 2.3.2. Simultaneous Estimation of Three-Dimensional Values Using ResNet-152 Network, third paragraph]; segmenting the depth image based at least on the natural image color or grayscale photograph [Figure 3; we will jointly encode the RGB' image and 2DM image, which are segmented by the YOLACT++ network, and the Depth image collected by the RGB-D camera into the ResNet-152 network, pg. 7, 2.3.2. Simultaneous Estimation of Three-Dimensional Values Using ResNet-152 Network, third paragraph ... D + M (2DM and Depth) is the basic effective channel used for depth recovery, pg. 10, 3.3. VJTR Method Realization and Results, second paragraph]; each segment representing at most a respective part of the assembly [Figure 7; two objects can still be distinguished well in the reconstruction view of Figure 7d,f. It shows that, even if objects are placed closely or even stacked in the RGB-D image, their 3D reconstruction points are distributed correctly in space, and their spatial relationship in the real 3D space is also well reflected, pg. 13, 3.5. Experimental Results, first paragraph].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou by incorporating the teachings of Jia with segmentation to reduce reconstruction errors, as recognized by Jia.
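The claimed flow that the rejection maps onto Zou and Jia (segment the depth image using the color photograph, apply the network to each segment separately, then combine the per-segment primitives) can be summarized in a few lines. `segment_with_rgb`, `primitive_network`, and the masking scheme below are hypothetical stand-ins, not functions from any cited reference.

```python
import numpy as np

def reconstruct(rgb: np.ndarray, depth: np.ndarray, primitive_network, segment_with_rgb):
    """Sketch of the claimed pipeline under the assumptions stated above."""
    masks = segment_with_rgb(rgb, depth)               # one boolean mask per part of the assembly
    primitives = []
    for mask in masks:
        segment = np.where(mask, depth, 0.0)           # each segment holds at most one part
        primitives.append(primitive_network(segment))  # one 3D primitive CAD object per segment
    return primitives                                  # combined downstream into the reconstruction
```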
Neither Zou nor Jia appears to explicitly disclose [obtaining a neural network configured for generating a 3D primitive CAD object based on an input depth image,] the 3D primitive CAD object being defined by sweeping a section along a guide curve, the section being a polygon, a rounded polygon, or a set of one or more curves which forms a closed region, the guide curve being straight line or a continuous curve.

However, Uy teaches [obtaining a neural network configured for generating a 3D primitive CAD object based on an input depth image,] the 3D primitive CAD object being defined by sweeping a section along a guide curve [Uy, Figure 3; the Extrusion Cylinder, a primitive that gives us the flexibility of creating any shape from arbitrary closed loops, pg. 3, left column, 3. The Extrusion Cylinder, first paragraph ... we use extrusions to parameterize our primitive, the extrusion cylinder, by an axis e ∈ S², a center c ∈ R³ associated to a sketch S scaled by s ∈ R², pg. 3, left column, Definition 3 (Extrusion Cylinder), first paragraph], the section being a polygon, a rounded polygon, or a set of one or more curves which forms a closed region [Uy, Figure 3; we consider a non-self intersecting, finite area, closed loop and normalized 2D sketch S ... the area enclosed by S is often called a profile, pg. 3, left column, Definition 1 (Sketch and profile), first paragraph], the guide curve being straight line or a continuous curve [Uy, Figure 3; we use extrusions to parameterize our primitive, the extrusion cylinder, by an axis (the Examiner interprets an axis to be a guide curve) e ∈ S², pg. 3, left column, Definition 3 (Extrusion Cylinder), first paragraph].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to ensure more expressivity with shapes, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 1.

Regarding claim 2, which depends from claim 1, Zou discloses wherein the neural network includes a convolutional neural network (CNN) that takes the depth image as input and outputs a respective latent vector [Figure 5a; each input depth map, I, is first resized to be 64x64 in dimensions ... I is passed to an encoder which consists of stacks of convolutional and LeakyReLU layers, pg. 903, right column, Depth map encoder, first paragraph ... the output 1 x 32 feature vector d, pg. 904, left column, Depth map encoder, first paragraph], and a sub-network that takes the respective latent vector as input and outputs values of a predetermined 3D primitive CAD object parameterization [the output 1 x 32 feature vector d is then sent to the recurrent generator to predict a sequence of primitives, pg. 904, left column, Depth map encoder, first paragraph ... the prediction unit consists of L layers of recurrently connected hidden layers (we set L = 3, which is found to be sufficient to model the complex primitive distributions) that encode both the depth feature d and the previously predicted primitive x_{t-1} and then computes the output vector y_t. y_t is used to parameterize a predictive distribution Pr(x_t | y_t) over the next possible primitive x_t, pg. 904, left column, Recurrent generator, first paragraph].

Regarding claim 3, which depends from claim 1, Zou discloses the neural network comprises a recurrent neural network (RNN) [configured to output a value for the list of positional parameters and the list of line types] [the output 1x32 feature vector d is then sent to the recurrent generator to predict a sequence of primitives, pg. 904, Depth map encoder, first paragraph]. Neither Zou nor Jia appears to explicitly disclose wherein the 3D primitive CAD object is defined by a section and an extrusion, the section being defined by a list of positional parameters and a list of line types, and [the neural network comprises a recurrent neural network (RNN)] configured to output a value for the list of positional parameters and the list of line types.

However, Uy teaches wherein the 3D primitive CAD object is defined by a section [Figure 3; we consider a non-self intersecting, finite area, closed loop and normalized 2D sketch S ... the area enclosed by S is often called a profile, pg. 3, Definition 1 (Sketch and profile), first paragraph] and an extrusion [Figure 3; extrusion is the process of pushing the material forward along a fixed cross-sectional profile to a desired height ... we use extrusions to parameterize our primitive, pg. 3, Definition 3 (Extrusion Cylinder), first paragraph], the section being defined by a list of positional parameters and a list of line types [Figure 3; we consider a non-self intersecting, finite area, closed loop and normalized 2D sketch S = {p(q(t)) ∈ R² | t ∈ [0, 1], p(q(0)) = p(q(1))}, for continuous functions q : [0, 1] → R and p : R → R². The area enclosed by S is often called a profile (the Examiner interprets a profile to be a line type) ... we also define the plane containing S, parameterized by the center (the Examiner interprets the center to be a positional parameter) c ∈ R³ ... note that the sketch S defines a profile on the sketch plane parameterized by (c, e) without ambiguity ... we represent the sketch implicitly, by learning the parameters β of an encoder function f_β(S_k) ∈ R^D that maps the 2D point cloud into a global, normalized sketch latent space, pg. 3, Definition 1 (Sketch and profile), first paragraph; Definition 2 (Sketch plane), first and second paragraphs; pg. 5, Inferring sketches, first paragraph], and [the neural network comprises a recurrent neural network (RNN)] configured to output a value for the list of positional parameters and the list of line types [this latent code acts as the condition of a decoder S : (R^D × R²) → R mapping (r ∈ R²) to its signed distance value to the underlying normalized sketch S_k: d(S_k, r) ≈ S(f(S_k), r), pg. 5, Inferring sketches, first paragraph].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to represent the sketch implicitly and ensure the parameters can yield meaningful sketches, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 3.
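Uy's extrusion-cylinder parameterization E = (e, c, S, s, r_min, r_max), which the rejection reads onto the claimed section swept along a guide curve, can be captured in a small data structure. The field types and the `sweep` sampling routine below are assumptions consistent with the quoted definitions (axis on the unit sphere, center in R³, closed 2D sketch, extents along the axis), not code from Uy.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ExtrusionCylinder:
    axis: np.ndarray     # e in S^2: the straight "guide curve" of the sweep
    center: np.ndarray   # c in R^3: locates the sketch plane
    sketch: np.ndarray   # S: (N, 2) closed, non-self-intersecting loop (the section)
    scale: np.ndarray    # s in R^2
    r_min: float         # extrusion extents along the axis
    r_max: float

    def sweep(self, steps: int = 16) -> np.ndarray:
        """Sample the barrel surface by sliding the scaled sketch along the axis."""
        # Orthonormal basis (u, v) for the sketch plane, normal to the axis.
        u = np.cross(self.axis, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:            # axis parallel to z: use another helper
            u = np.cross(self.axis, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(self.axis, u)
        pts2d = self.sketch * self.scale        # scaled section
        heights = np.linspace(self.r_min, self.r_max, steps)
        return np.array([self.center + x * u + y * v + h * self.axis
                         for h in heights for x, y in pts2d])
```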
Regarding claim 4, which depends from claim 3, neither Zou nor Jia appears to explicitly disclose wherein the neural network further includes a fully connected layer that outputs a value of one or more parameters defining the extrusion. However, Uy teaches wherein the neural network further includes a fully connected layer [this feature vector is then passed through two separate fully connected branches to obtain instance and base/barrel segmentation M as well as normal N, pg. 6, 4.3 Network Details, first paragraph] that outputs a value of one or more parameters [we can directly use M in order to solve for the parameters of each extrusion cylinder, pg. 5, Theorem 3, first paragraph] defining the extrusion [we use extrusions to parameterize our primitive, the extrusion cylinder, by an axis e ∈ S², a center c ∈ R³ associated to a sketch S scaled by s ∈ R². We further introduce the extents (r_min, r_max) ∈ (R × R) defining the extrusion E = (e, c, S, s, r_min, r_max) ... given predicted geometric proxies (M, N), we establish a differentiable and closed-form formulation to estimate other extrusion parameters, pg. 3, Definition 3 (Extrusion Cylinder), first paragraph; pg. 4, 4.1 Inferring Extrusion Cylinder Parameters, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to ensure that the parameters yield meaningful sketches, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 4.

Regarding claim 5, which depends from claim 4, neither Zou nor Jia appears to explicitly disclose wherein the section is further defined by a number representing a type of the section, the neural network being further configured to compute a vector representing a probability distribution for the number. However, Uy teaches wherein the section is further defined by a number representing a type of the section [we represent the sketch implicitly, by learning parameters β of an encoder function ... that maps the 2D point cloud into a global, normalized sketch latent space. This latent code acts as the condition of a decoder S : (R^D × R²) → R mapping (r ∈ R²) to its signed distance value to the underlying normalized sketch, pg. 5, Inferring sketches, first paragraph], the neural network being further configured to compute a vector representing a probability distribution for the number [Figure A6; we show how to predict the sketch representation S ... this latent code acts as the condition of a decoder S : (R^D × R²) → R mapping (r ∈ R²) to its signed distance value to the underlying normalized sketch, pg. 5, Inferring sketches, first paragraph], and, optionally, the outputting of the value for the one or more parameters defining the extrusion, the list of positional parameters, and/or for the list of line types, is further based on the vector representing the probability distribution [extrusion (the Examiner interprets the claim to require one recited parameter): now given predicted geometric proxies (M, N), we establish a differentiable and closed-form formulation to estimate other extrusion parameters. M compactly and jointly combines the predicted probability of a point 1) being either a base or a barrel, and 2) belonging to a certain segment. We then apply a row-wise softmax turning M into a row-stochastic matrix, pg. 4, 4.1. Inferring Extrusion Cylinder Parameters, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy for accurate classification along the surface of the extrusion, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 5.

Regarding claim 6, which depends from claim 5, Zou discloses the CNN being configured to take the depth image as input and to output a respective latent vector [Figure 5a; each input depth map, I, is first resized to be 64x64 in dimensions ... I is passed to an encoder which consists of stacks of convolutional and LeakyReLU layers, pg. 903, right column, Depth map encoder, first paragraph ... the output 1 x 32 feature vector d, pg. 904, left column, Depth map encoder, first paragraph]. Neither Zou nor Jia appears to explicitly disclose a second subpart which is configured to take the respective latent vector of the CNN as input and to output the vector representing a probability distribution for the number; a second part including: a third subpart which is configured to take as input a concatenation of the respective latent vector of the CNN and the vector representing the probability distribution, and to output a respective vector; a fourth subpart which is configured to take as input the respective vector of the third subpart and to output a value for the list of positional parameters, a value for the list of line types, and a fixed-length vector; and a fifth subpart which is configured to take as input a concatenation of the respective vector of the third subpart and the respective fixed-length vector of the fourth subpart, and to output a value for the one or more parameters defining the extrusion.

However, Uy teaches a second subpart which is configured to take the respective latent vector of the CNN as input and to output the vector representing a probability distribution for the number [Figure A6; this feature vector is then passed through two separate fully connected branches to obtain instance and base/barrel segmentations M as well as normal N ... now given predicted geometric proxies (M, N), we establish a differentiable and closed-form formulation to estimate other extrusion parameters, pg. 6, 4.3. Network Details, first paragraph; pg. 4, 4.1. Inferring Extrusion Cylinder Parameters, first paragraph]; a second part including: a third subpart which is configured to take as input a concatenation of the respective latent vector of the CNN and the vector representing the probability distribution [Figure A6; now given predicted geometric proxies (M, N), we establish a differentiable and closed-form formulation to estimate other extrusion parameters, pg. 4, 4.1. Inferring Extrusion Cylinder Parameters, first paragraph], and to output a respective vector [Figure A6; M compactly and jointly combines the predicted probability of a point 1) being either a base or a barrel, and 2) belonging to a certain segment, pg. 4, 4.1 Inferring Extrusion Cylinder Parameters, first paragraph]; a fourth subpart which is configured to take as input the respective vector of the third subpart and to output a value for the list of positional parameters, a value for the list of line types, and a fixed-length vector [Figure A6; we then apply a row-wise softmax turning M into a row-stochastic matrix whose ith row indicates the belonging of point p_i to one of the 2K classes, pg. 4, 4.1. Inferring Extrusion Cylinder Parameters, first paragraph]; and a fifth subpart which is configured to take as input a concatenation of the respective vector of the third subpart and the respective fixed-length vector of the fourth subpart, and to output a value for the one or more parameters defining the extrusion [Figure A6; once predicted, we can directly use M in order to solve for the parameters of each extrusion cylinder, pg. 5, Theorem 3, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to learn geometric proxies and ensure compact parameterization, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 6.

Regarding claim 7, which depends from claim 1, Zou fails to explicitly disclose removing outliers from the segment and/or recentering the segment. However, Jia teaches removing outliers from the segment and/or recentering the segment [we propose an outlier adjustment method based on cluster center distance constrained, pg. 8, 2.4. Reconstruction Error Correction, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Uy by incorporating the teachings of Jia for higher reconstruction accuracy, as recognized by Jia. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 7.

Regarding claim 8, which depends from claim 1, Zou discloses obtaining a dataset of training samples each including a respective depth image and ground truth 3D primitive CAD object [Figures 8–9; we also test our model on NYU Depth V2 dataset ... we employ the ground truth data labelled by Guo and Hoiem, pg. 907, left column, Real data (NYU Depth V2), first paragraph]; and training the neural network based on the dataset [we fine-tune our network that was trained on synthetic data using the training set of NYU Depth V2, pg. 907, left column, Real data (NYU Depth V2), first paragraph].
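Claim 7's preprocessing (removing outliers from a segment and/or recentering it), which the rejection maps to Jia's cluster-center distance constraint, amounts to something like the following. The fixed distance threshold is an illustrative assumption, not Jia's actual correction method.

```python
import numpy as np

def clean_segment(points: np.ndarray, max_dist: float) -> np.ndarray:
    """points: (N, 3) segment samples; drop far-from-center points, then recenter."""
    center = points.mean(axis=0)
    keep = np.linalg.norm(points - center, axis=1) <= max_dist  # remove outliers
    kept = points[keep]
    return kept - kept.mean(axis=0)                             # recenter the segment
```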
Regarding claim 9, which depends from claim 8, Zou discloses synthesizing 3D primitive CAD objects and generating a respective depth image of each synthesized 3D primitive CAD object [we project synthetic depth maps from training meshes, pg. 906, left column, 5.3. Shape Reconstruction from Single Depth View, first paragraph].

Regarding claim 10, which depends from claim 9, neither Zou nor Jia appears to explicitly disclose rendering the synthesized 3D primitive CAD object with respect to a virtual camera, thereby obtaining a set of pixels, and, optionally, the synthesized 3D primitive CAD object being subjected to one or more transformations before the rendering. However, Uy teaches rendering the synthesized 3D primitive CAD object with respect to a virtual camera, thereby obtaining a set of pixels [Figure 5; how to recover the parameters of an extrusion cylinder E from a set of points P = {p_i ∈ R³}_{i=1}^N and corresponding normals N = {n_i ∈ S²}_{i=1}^N incident to E. We let P_base, P_barr ⊂ P denote base and barrel points of P, respectively, where P = P_base ∪ P_barr. The center of the extrusion (c) is the simplest and can be estimated by taking the mean of all the barrel points of P, pg. 3, Recovering extrusion cylinder from points, first paragraph], and, optionally, the synthesized 3D primitive CAD object being subjected to one or more transformations before the rendering [the 3D transformation and scale are obtained from the corresponding extrusion cylinder parameters. We also refine our segmentation prediction by a simple post-filtering and use robust methods in estimation of the scale and extent, pg. 7, Reverse engineering, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to be able to handle cases where segments share the same extrusion axis, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 10.

Regarding claim 11, which depends from claim 10, neither Zou nor Jia appears to explicitly disclose adding a random noise to at least part of the pixels. However, Uy teaches adding a random noise to at least part of the pixels [we further experiment on adding noise to the input point clouds at both training and test time. We randomly perturb the points along the normal direction with a uniform noise between [−σ, σ], pg. 16, A.6.2 Ablation on Noisy Data, first paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia by incorporating the teachings of Uy to determine whether the network can handle noise without reconstruction performance decreasing, as recognized by Uy. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Uy with Zou and Jia to obtain the invention as specified in claim 11.
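Claim 11's augmentation, as the rejection reads Uy, perturbs points along their normals with uniform noise in [-σ, σ]. The sketch below shows that operation plus an assumed analogue for depth-image pixels, since the claim recites noise on pixels rather than points; the pixel-masking fraction is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_along_normals(points, normals, sigma):
    """Uy-style augmentation: shift each point along its normal by U(-sigma, sigma)."""
    noise = rng.uniform(-sigma, sigma, size=(len(points), 1))
    return points + noise * normals

def add_pixel_noise(depth, sigma, fraction=0.5):
    """Assumed pixel analogue: add uniform noise to a random subset of depth pixels."""
    mask = rng.random(depth.shape) < fraction
    return depth + mask * rng.uniform(-sigma, sigma, size=depth.shape)
```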
Regarding claim 12, which depends from claim 10, Zou teaches adding a random occlusion to at least part of the pixels [given a point cloud representation of a shape, our approach finds the most plausible primitives to fit in a sequential manner ... the algorithm might identify the primitive that fits to the top surface first and then the legs successively. We use rectangular cuboids as primitives, pg. 902, 3. Fitting Primitives from Point Clouds, first paragraph].

Regarding claim 13 (drawn to a non-transitory computer readable storage medium), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 1 renders obvious the steps of non-transitory computer readable storage medium claim 13, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 13.

Regarding claim 14 (drawn to a non-transitory computer readable storage medium), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 2 renders obvious the steps of non-transitory computer readable storage medium claim 14, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 2 are equally applicable to claim 14.

Regarding claim 15 (drawn to a non-transitory computer readable storage medium), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 3 renders obvious the steps of non-transitory computer readable storage medium claim 15, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 3 are equally applicable to claim 15.

Regarding claim 16 (drawn to a non-transitory computer readable storage medium), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 8 renders obvious the steps of non-transitory computer readable storage medium claim 16, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 8 are equally applicable to claim 16.

Regarding claim 17 (drawn to a system), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 1 renders obvious the steps of system claim 17, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 17.

Regarding claim 18 (drawn to a system), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 2 renders obvious the steps of system claim 18, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 2 are equally applicable to claim 18.

Regarding claim 19 (drawn to a system), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 3 renders obvious the steps of system claim 19, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 3 are equally applicable to claim 19.

Regarding claim 20 (drawn to a system), the proposed combination of Zou in view of Jia and further in view of Uy explained in the rejection of computer-implemented method claim 8 renders obvious the steps of system claim 20, because these steps occur in the operation of the method as discussed above. Thus, arguments similar to those presented above for claim 8 are equally applicable to claim 20.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Zou ("3D-PRNN: Generating shape primitives with recurrent neural networks") in view of Jia ("Real-time 3D reconstruction method based on monocular vision") and further in view of Uy ("Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders"), as applied above, and Gilboa (US 2020/0380685 A1) (hereafter, "Gilboa").

Regarding claim 21, which depends from claim 1, neither Zou nor Jia appears to explicitly disclose wherein segmenting the depth image based at least on the color or grayscale photograph includes obtaining an edges image by applying an edge-detection method to the color or grayscale photograph. However, Gilboa teaches wherein segmenting the depth image based at least on the color or grayscale photograph includes obtaining an edges image by applying an edge-detection method to the color or grayscale photograph [the relation between RGB edges and depth discontinuities may be considered. Accordingly, given the set of RGB boundaries (edges) B_rgb ⊂ Ω, and depth boundaries B_d ⊂ Ω ... the set B_rgb (RGB boundaries) is calculated for each image by using, e.g., known edge-detection methods suited for, e.g., natural images, para 0041, 0042]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia and further in view of Uy by incorporating the teachings of Gilboa to predict the probability of depth discontinuities, as recognized by Gilboa. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Gilboa with Zou, Jia, and Uy to obtain the invention as specified in claim 21.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Zou ("3D-PRNN: Generating shape primitives with recurrent neural networks") in view of Jia ("Real-time 3D reconstruction method based on monocular vision") and further in view of Uy ("Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders"), as applied above, and He et al. (He, Qian, et al., "Single image 3D object estimation with primitive graph networks," Proceedings of the 29th ACM International Conference on Multimedia, 2021) (hereafter, "He").

Regarding claim 22, which depends from claim 1, neither Zou, Jia, nor Uy appears to explicitly disclose wherein the combining includes a snapping method, the snapping method including displacement of one or more generated 3D primitive CAD objects relative to each other in a virtual scene and/or the snapping method including defining a relation between one or more generated 3D primitive CAD objects. However, He teaches wherein the combining includes a snapping method, the snapping method including displacement of one or more generated 3D primitive CAD objects relative to each other in a virtual scene and/or the snapping method including defining a relation between one or more generated 3D primitive CAD objects [displacement: Figures 1–2; we first match all primitives into pairs according to their L1 distance. Each time we match a pair with minimum distance in the remaining unpaired primitives, pg. 2356, right column, 3.4 Primitive Reasoning Network, second paragraph]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zou in view of Jia and further in view of Uy by incorporating the teachings of He to generate non-overlapping primitives, as recognized by He. Further, one skilled in the art could have combined the elements as described above with known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine He with Zou, Jia, and Uy to obtain the invention as specified in claim 22. (The edge-detection and primitive-pairing theories of claims 21 and 22 are illustrated in the sketch following this action.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: "Im2Struct: Recovering 3D Shape Structure from a Single RGB Image" to Niu et al. discloses a method to recover 3D shapes from an RGB image. "Segmenting Unknown 3D Objects from Real Depth Images using Mask R-CNN Trained on Synthetic Data" to Danielczuk et al. discloses a method for generating a synthetic training dataset of depth images and object masks using simulated 3D CAD models and training a Mask R-CNN.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOLUWANI MARY-JANE IJASEUN, whose telephone number is (571) 270-1877. The examiner can normally be reached Monday through Friday, 7:30 AM to 4 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TOLUWANI MARY-JANE IJASEUN/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
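Editor's sketch of the two dependent-claim theories referenced above: claim 21's edge detection on the color photograph to guide depth segmentation (Gilboa), and claim 22's snapping read as He's greedy minimum-L1 pairing of primitives. OpenCV's Canny is a stand-in for Gilboa's unspecified "known edge-detection methods", the thresholds are arbitrary, and the pairing loop is an assumed reading of He, not He's code.

```python
import cv2
import numpy as np

def rgb_edges(rgb_image: np.ndarray) -> np.ndarray:
    """Claim 21: obtain an edges image from the color photograph (Canny as a stand-in)."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200)

def snap_pairs(params: np.ndarray) -> list[tuple[int, int]]:
    """Claim 22 as the action reads He: greedily pair primitives by minimum L1 distance."""
    unpaired, pairs = set(range(len(params))), []
    while len(unpaired) > 1:
        i, j = min(((a, b) for a in unpaired for b in unpaired if a < b),
                   key=lambda ij: np.abs(params[ij[0]] - params[ij[1]]).sum())
        pairs.append((i, j))
        unpaired -= {i, j}
    return pairs
```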

Prosecution Timeline

Apr 21, 2023
Application Filed
Jul 21, 2025
Non-Final Rejection — §103
Oct 22, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103
Apr 13, 2026
Response after Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597117
METHOD, PROGRAM, APPARATUS, AND SYSTEM FOR ABNORMALITY DETECTION SUCH AS FOR DETERMINING WHETHER A PLURALITY OF CONTAINERS TO BE STACKED ON A PALLET IS NORMAL OR ABNORMAL
2y 5m to grant Granted Apr 07, 2026
Patent 12555231
DETECTING ISCHEMIC STROKE MIMIC USING DEEP LEARNING-BASED ANALYSIS OF MEDICAL IMAGES
2y 5m to grant Granted Feb 17, 2026
Patent 12536796
REMOTE SOIL AND VEGETATION PROPERTIES DETERMINATION METHOD AND SYSTEM
2y 5m to grant Granted Jan 27, 2026
Patent 12525056
METHOD AND DEVICE FOR MULTI-DNN-BASED FACE RECOGNITION USING PARALLEL-PROCESSING PIPELINES
2y 5m to grant Granted Jan 13, 2026
Patent 12499506
INFERENCE MODEL CONSTRUCTION METHOD, INFERENCE MODEL CONSTRUCTION DEVICE, RECORDING MEDIUM, CONFIGURATION DEVICE, AND CONFIGURATION METHOD
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
90%
Grant Probability
91%
With Interview (+1.5%)
1y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 578 resolved cases by this examiner. Grant probability derived from career allow rate.
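A worked check of the headline numbers: the displayed probabilities follow directly from the career stats above, as this short Python snippet shows (the 518/578 and +1.5-point figures come from this page; the rounding convention is an assumption).

```python
# Reproduce the dashboard's headline figures from the stats shown above.
granted, resolved = 518, 578
grant_probability = granted / resolved        # 0.896... -> displayed as 90%
with_interview = grant_probability + 0.015    # +1.5-point interview lift -> displayed as 91%
print(f"{grant_probability:.1%}")             # 89.6%
print(f"{with_interview:.1%}")                # 91.1%
```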
