Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,264

TECHNIQUES FOR GENERATING THREE-DIMENSIONAL REPRESENTATIONS OF ARTICULATED OBJECTS

Non-Final OA: §101, §103

Filed: Jun 11, 2024
Examiner: PROTAZI, BRIGITER DIVULALE
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 9m
Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 8 total applications across all art units; 8 currently pending

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 33.3% (-6.7% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 0 resolved cases
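The per-statute deltas above are plain differences between the examiner's allow rate for each statute and the Tech Center average; back-deriving from the displayed deltas, the TC average estimate works out to 40.0% for every statute. A minimal sketch reproducing the displayed figures (the 40.0% TC average is inferred from the deltas, not stated on the page):

```python
# Back-derived assumption: each displayed rate plus its delta equals 40.0,
# so the Tech Center average estimate is taken to be 40.0% per statute.
TC_AVG = 40.0

examiner_rates = {"101": 11.1, "103": 38.9, "102": 33.3, "112": 16.7}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    # Matches the dashboard, e.g. "§101: 11.1% (-28.9% vs TC avg)"
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```

Running this reproduces the four lines of the statute table above.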

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/06/2024 is being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because:
- Reference characters "106" and "113" have both been used to designate Communication path.
- Reference characters "120" and "121" have both been used to designate Add-In Card(s).
- Reference character "112" has been used to designate both Processor and Parallel Processing Subsystem.
- Reference character "116" has been used to designate both Switch and 3D representation application.

Further revision is required; additional typos should be fixed accordingly. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:
- "Computing device" is referenced by different numbers in the Detailed Description: 100, 140.
- "Communication path" is referenced by different numbers in the Detailed Description: 106, 113.
- "Computing device 140" is mentioned in the Detailed Description but not found in the Drawings.
- "CPUs 102" is mentioned in the Detailed Description but not found in the Drawings.
- Reference numbers "402" and "404" in Drawings Fig. 4 are not mentioned in the Detailed Description.

Appropriate correction is required.

35 U.S.C. 112(a), or pre-AIA 35 U.S.C. 112, requires the specification to be written in "full, clear, concise, and exact terms." The specification is replete with terms which are not clear, concise, and exact. The specification should be revised carefully in order to comply with 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112. Examples of unclear, inexact, or verbose usage in the specification are:
- Many important terms that are cited in the Drawings and a few times in the Detailed Description are not cited throughout the rest of the paragraphs in the Detailed Description. For example, paragraph 0022 discloses and labels "processor" with reference number 112, yet paragraph 0088 and later paragraphs mention "processor" multiple times with no proper label, as do many other paragraphs. The same applies to many different terms throughout the detailed disclosure that lack a proper label reference.
- Many inconsistent label numberings for terms that were previously labeled. For example, "Computing device" is referenced by different numbers, 100 and 140. The same applies to many different terms throughout the detailed disclosure that have inconsistent reference numbering.

Further revision is required; additional typos should be fixed accordingly.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because claim 1 recites "A computer-implemented..." and the body of the claim recites computer program steps, such as "performing one or more operations to generate…", which are nothing more than programmed instructions to be performed by the system. Therefore, the steps recited in claim 1 are non-statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical "things." They are neither computer components nor statutory processes, as they are not "acts" being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program's functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program's functionality to be realized, and is thus statutory. Accordingly, it is important to distinguish claims that define descriptive material per se from claims that define statutory inventions.

After Subject Matter Eligibility (SME) analysis, the examiner concludes that:

Step 1: The claimed invention is a process, which falls within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter).
The claims go on to recite the following:

A computer-implemented method for generating an articulation model, the method comprising:
receiving a first set of images of an object in a first articulation and a second set of images of the object in a second articulation;
performing one or more operations to generate first three-dimensional (3D) geometry based on the first set of images;
performing one or more operations to generate second 3D geometry based on the second set of images; and
performing one or more operations to generate an articulation model of the object based on the first 3D geometry and the second 3D geometry.

After Subject Matter Eligibility (SME) analysis, the examiner concludes that:

Step 2A Prong One: The idea of performing one or more operations to generate something is an idea having no particular concrete or tangible form. This concept falls within one of the three groupings mentioned in the 2019 PEG guidance; it is directed to a mental process. The limitation of performing one or more operations to generate, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "computer-implemented," nothing in the claim element precludes the step from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Therefore, the claims are directed to an abstract idea. (Step 2A Prong One: Yes)

After Subject Matter Eligibility (SME) analysis, the examiner concludes that:

Step 2A Prong Two: This judicial exception is not integrated into a practical application.
The "computer-implemented" element is recited at a high level of generality (i.e., as a generic computer processor performing a generic computer function based on a determination based on features) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. (Step 2A Prong Two: No)

After Subject Matter Eligibility (SME) analysis, the examiner concludes that:

Step 2B: Claims 1-10 do not include additional elements, taken individually and as a combination, that are sufficient to amount to significantly more than the judicial exception, because the additional element does not integrate the abstract idea into a practical application and does not impose any meaningful limits on practicing the abstract idea. When examining the limitations, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea. (Step 2B: No)

Accordingly, the claims are directed to an abstract idea and are rejected as ineligible for patenting under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Mora (US-20150178988-A1, "Mora") in view of Cheng (JP-2019126705-A, "Cheng").
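Purely as technical context for the claim mapping that follows, the method recited in claim 1 can be sketched as a short pipeline. This is an illustrative sketch only, not code from the application or from Mora or Cheng; every function, class, and field name below is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class ArticulationModel:
    # Per claim 6, an articulation model can comprise a part segmentation
    # and per-part motion parameters; both fields are placeholders here.
    part_labels: list
    motion_parameters: dict

def generate_3d_geometry(images):
    # Stand-in for "performing one or more operations to generate 3D
    # geometry" from a set of images (any reconstruction technique).
    return [f"vertex_{i}" for i in range(len(images))]

def fit_articulation_model(first_geometry, second_geometry):
    # Stand-in for deriving parts and motions by comparing the two
    # articulations; the two-part split is invented for illustration.
    parts = ["part_0", "part_1"]
    motions = {part: {"axis": None, "angle": None} for part in parts}
    return ArticulationModel(part_labels=parts, motion_parameters=motions)

def generate_articulation_model(first_images, second_images):
    # Claim 1's recited steps in order: receive the two image sets,
    # generate 3D geometry for each articulation, then generate the
    # articulation model from the two geometries.
    first_geometry = generate_3d_geometry(first_images)
    second_geometry = generate_3d_geometry(second_images)
    return fit_articulation_model(first_geometry, second_geometry)

model = generate_articulation_model(["img_a0", "img_a1"], ["img_b0"])
print(model.part_labels)  # ['part_0', 'part_1']
```

In the rejection that follows, Mora is mapped to the geometry-generation stages of this pipeline and Cheng to receiving the two image sets.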
Regarding claim 1, Mora teaches "A computer-implemented method for generating an articulation model, the method comprising:" (computer-implemented method for determining a shape of a 3D object from imagery); (a method for generating a realistic 3D reconstruction model for an object; 0001);

"performing one or more operations to generate first three-dimensional (3D) geometry based on the first set of images;" (generating a mesh of said an object or being from said sequence of images captured; 0049);

"performing one or more operations to generate second 3D geometry based on the second set of images; and" (generating a mesh of said an object or being from said sequence of images captured; 0049);

However, Mora does not teach "receiving a first set of images of an object in a first articulation and a second set of images of the object in a second articulation;".

Cheng teaches "receiving a first set of images of an object in a first articulation and a second set of images of the object in a second articulation;" (obtaining image information of a captured first set of images and an Nth set of images … N is an integer of 2 or more; 0005); (the first set of images and the Nth set of Superimposing the image on the image; 0005);

The motivation for the above is to have accessible images for a more efficient and accurate generation of a 3D articulation model. Mora and Cheng are analogous art, as both are related to image processing for 3D articulation model generation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora by receiving a first set of images of an object in a first articulation and a second set of images of the object in a second articulation, as taught by Cheng, and to use that with Mora's generation of a 3D articulation model.

Regarding claim 2, Cheng fails to teach all of claim 2.
However, Mora teaches "The computer-implemented method of claim 1, wherein performing one or more operations to generate the first 3D geometry comprises:" (generating a mesh of said an object or being from said sequence of images captured; 0049); "performing one or more operations to generate a first model of the object in the first articulation based on the first set of images; and" (generating a complete 3D object model from a set of images; 0005); "performing one or more operations to generate the first 3D geometry based on the first model." (generating a realistic 3D reconstruction model for an object; 0047);

The motivation for the above is to have an efficient and accurate generation of an articulated object when the operations are performed with the images inputted.

Claims 3-8 and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Mora in view of Cheng, and further in view of SUN (CN-115769259-A, "Sun").

Regarding claim 3, Mora and Cheng fail to teach all of claim 3. However, Sun teaches "The computer-implemented method of claim 2, wherein performing one or more operations to generate first model comprises performing one or more iterative operations to update parameters of at least one machine learning model included in the first model based on the first set of images." (model trainer 160 that trains machine-learned model; Pg.6, Para 8); (update parameters iteratively over multiple training iterations; Pg.6, Para 8);

The motivation for the above is to combine for an efficient and accurate generation of an articulated object when the operations are performed with a machine learning model. Mora, Cheng, and Sun are analogous art, as they are related to image processing for 3D articulation model generation.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that performing one or more operations to generate the first model comprises performing one or more iterative operations to update parameters of at least one machine learning model included in the first model based on the first set of images, as taught by Sun.

Regarding claim 4, Mora and Cheng fail to teach all of claim 4. However, Sun teaches "The computer-implemented method of claim 2, wherein the first model comprises a first machine learning model associated with geometry of the object and a second machine learning model associated with an appearance of the object." (each application can communicate with the central intelligence layer (and the models stored therein); Pg.7, Para 10); (The central intelligence layer includes many machine learning models. For example, as shown in FIG. 1C, a corresponding machine learning model may be provided for each application; Pg.7, Para 11);

The models stored in the central intelligence layer can be the geometry of the object. The central intelligence layer also includes machine learning models, which can include a first and second learning model associated with the object. The motivation for the above is to combine for an efficient and accurate generation of an articulated object when utilizing machine learning.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that the first model comprises a first machine learning model associated with geometry of the object and a second machine learning model associated with an appearance of the object, as taught by Sun.

Regarding claim 5, Mora and Cheng fail to teach all of claim 5.
However, Sun teaches "The computer-implemented method of claim 2, wherein performing one or more operations to generate the first 3D geometry based on the first model comprises performing one or more operations of a reconstruction technique." (Fig. 2 depicts a 3D reconstruction technique); (as a result of receiving the images 204, provide The reconstructed 3D model 206 of the object of interest; Pg.7, Para 2);

The motivation for the above is to combine for an efficient and accurate generation of an articulated object, utilizing a reconstruction technique. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that performing one or more operations to generate the first 3D geometry based on the first model comprises performing one or more operations of a reconstruction technique, as taught by Sun.

Regarding claim 6, Mora and Cheng fail to teach all of claim 6. However, Sun teaches "The computer-implemented method of claim 1, wherein the articulation model comprises a segmentation model that segments a plurality of parts of the object and a set of motion parameters defining one or more motions of each part included in the plurality of parts." (an analysis-by-synthesis strategy and forward render contour, optical flow, and/or color images that are compared to video observations to adjust the model's camera, shape, and/or motion parameters. The proposed technique is able to accurately reconstruct rigid and non-rigid 3D shapes; Pg.5, Para 1);

The utilization of motion parameters lets the object be reconstructed with rigid/non-rigid parts of the object, which relate to the plurality of parts of the object and its motion. The motivation for the above is to combine for an efficient and accurate generation of an articulated object.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that the articulation model comprises a segmentation model that segments a plurality of parts of the object and a set of motion parameters defining one or more motions of each part included in the plurality of parts, as taught by Sun.

Regarding claim 7, Mora and Cheng fail to teach all of claim 7. However, Sun teaches "The computer-implemented method of claim 6, wherein performing one or more operations to generate the articulation model comprises performing one or more backpropagation operations to update the set of motion parameters and one or more parameters of the segmentation model." (can be backpropagated through the model to update one or more parameters of the model; Pg.6, Para 9); (performing backpropagation of errors; Pg.6, Para 10);

Updating the parameters of the model can be the motion and segmentation model parameters that are backpropagated. The motivation for the above is to combine for an efficient and accurate generation of an articulated object with the backpropagation.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that performing one or more operations to generate the articulation model comprises performing one or more backpropagation operations to update the set of motion parameters and one or more parameters of the segmentation model, as taught by Sun.

Regarding claim 8, Mora and Cheng fail to teach all of claim 8.
However, Sun teaches "The computer-implemented method of claim 7, wherein the one or more backpropagation operations minimize a loss function that comprises at least one of a consistency loss term that penalizes inconsistencies between corresponding points in the first articulation and the second articulation, a matching loss term that penalizes unmatching image features between pixel pairs the first articulation and the second articulation, and a collision loss term that penalizes collisions between one or more parts included in the plurality of parts after applying a predicted forward motion from the first articulation to the second articulation." (machine-learned mesh model of an object can be learned jointly with a machine-learned camera model by minimizing a loss function that evaluates one or more aspects of the object; Pg.2, Para 17); (evaluating the loss function may include determining a first contour (eg, using one or more segmentation techniques, etc.) and a second contour (eg, based on a known location of an object within the rendered image). …. The loss function can be evaluated based at least in part on a comparison of the first profile and the second profile; Pg.3, Para 8); (evaluating the loss function may include determining first texture data (e.g., using raw pixel data and/or various feature extraction techniques) and second texture data (e.g., using known texture data from the rendered image). …. The loss function may be evaluated based at least in part on a comparison of the first texture data and the second texture data; Pg.4, Para 1); (evaluating the loss function can include determining a first flow (eg, using one or more optical flow techniques, etc.) and a second flow (eg, based on known variations across image rendering). …. A loss function can be evaluated based at least in part on a comparison of the first stream and the second stream; Pg.3, Para 7); (the motion regularizers used in evaluating the loss function may include a temporal smoothness term, a minimum motion term, and an as-rigid-as-possible term; Pg.3, Para 6);

The loss function for determining a contour, texture, and flow relates to the loss function of the consistency loss, matching loss, and collision loss. The motivation for the above is to combine for an efficient and accurate generation of an articulated object with the backpropagation operations.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that the one or more backpropagation operations minimize a loss function that comprises at least one of a consistency loss term that penalizes inconsistencies between corresponding points in the first articulation and the second articulation, a matching loss term that penalizes unmatching image features between pixel pairs the first articulation and the second articulation, and a collision loss term that penalizes collisions between one or more parts included in the plurality of parts after applying a predicted forward motion from the first articulation to the second articulation, as taught by Sun.

Regarding claim 11, Mora and Cheng fail to teach "One or more non-transitory computer-readable storage media including instructions that, when executed by at least one processor, cause the at least one processor to perform steps for generating an articulation model, the steps comprising:".
However, Sun teaches "One or more non-transitory computer-readable storage media including instructions that, when executed by at least one processor, cause the at least one processor to perform steps for generating an articulation model, the steps comprising:" (non-transitory computer readable media; Pg.1, Para 11); (stored on a storage device, loaded into memory, and executed by one or more processors; Pg.7, Para 3); (includes one or more sets of computer-executable instructions stored in a tangible computer-readable storage medium; Pg.7, Para 3);

Claim 11 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 1. Therefore, the claim 11 limitations are also rejected with the same rationale as for claim 1.

The motivation for the above is to have a computer-readable storage medium that is compatible with performing operations to generate an articulated object. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, to include one or more non-transitory computer-readable storage media including instructions that, when executed by at least one processor, cause the at least one processor to perform steps for generating an articulation model, as taught by Sun.

Regarding claim 12, Cheng and Sun fail to teach all of claim 12.
However, Mora teaches "The one or more non-transitory computer-readable storage media of claim 11, wherein performing one or more operations to generate the first 3D geometry comprises:" (generating a mesh of said an object or being from said sequence of images captured; 0049); "performing one or more operations to generate a first model of the object in the first articulation based on the first set of images; and" (generating a complete 3D object model from a set of images; 0005); "performing one or more operations to generate the first 3D geometry based on the first model." (generating a realistic 3D reconstruction model for an object; 0047);

Claim 12 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 2. Therefore, the claim 12 limitations are also rejected with the same rationale as for claim 2.

Regarding claim 13, Mora and Cheng fail to teach all of claim 13. However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 12, wherein performing one or more operations to generate the first model comprises performing one or more iterative operations to update parameters of at least one machine learning model included in the first model based on the first set of images." (model trainer 160 that trains machine-learned model; Pg.6, Para 8); (update parameters iteratively over multiple training iterations; Pg.6, Para 8);

Claim 13 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 3. Therefore, the claim 13 limitations are also rejected with the same rationale as for claim 3.

Regarding claim 14, Mora and Cheng fail to teach all of claim 14.
However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 12, wherein the first model comprises a first machine learning model associated with geometry of the object and a second machine learning model associated with an appearance of the object." (each application can communicate with the central intelligence layer (and the models stored therein); Pg.7, Para 10); (The central intelligence layer includes many machine learning models. For example, as shown in FIG. 1C, a corresponding machine learning model may be provided for each application; Pg.7, Para 11);

Claim 14 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 4. Therefore, the claim 14 limitations are also rejected with the same rationale as for claim 4.

Regarding claim 15, Mora and Cheng fail to teach all of claim 15. However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 11, wherein the articulation model comprises a segmentation model that segments a plurality of parts of the object and a set of motion parameters defining one or more motions of each part included in the plurality of parts." (an analysis-by-synthesis strategy and forward render contour, optical flow, and/or color images that are compared to video observations to adjust the model's camera, shape, and/or motion parameters. The proposed technique is able to accurately reconstruct rigid and non-rigid 3D shapes; Pg.5, Para 1);

Claim 15 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 6. Therefore, the claim 15 limitations are also rejected with the same rationale as for claim 6.

Regarding claim 16, Mora and Cheng fail to teach all of claim 16.
However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 11, wherein performing one or more operations to generate the articulation model comprises performing one or more backpropagation operations to update the set of motion parameters and one or more parameters of the segmentation model." (can be backpropagated through the model to update one or more parameters of the model; Pg.6, Para 9); (performing backpropagation of errors; Pg.6, Para 10);

Claim 16 is directed to one or more non-transitory computer-readable storage media, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 7. Therefore, the claim 16 limitations are also rejected with the same rationale as for claim 7.

Regarding claim 17, Mora and Cheng fail to teach all of claim 17. However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 16, wherein the segmentation model comprises a probability distribution associated with the plurality of parts." (a mixture of Gaussian models can also guarantee smoothness. The number of shape and motion parameters can now be expressed as: It can scale linearly with the number of frames and bones; Pg.9, Para 10);

Gaussian models fall under the umbrella of probability distributions, and the number of shape and motion parameters relates to the plurality of parts. The motivation for the above is to have an accurate and efficient usage of a probability distribution for smoother generation of the parts of the articulated object.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that the segmentation model comprises a probability distribution associated with the plurality of parts, as taught by Sun.

Regarding claim 18, Mora and Cheng fail to teach all of claim 18.
However, Sun teaches "The one or more non-transitory computer-readable storage media of claim 11, wherein the first set of images includes a plurality of RGB-D (red, green, blue, depth) images of the object in the first articulation captured from different viewpoints." (imagery (eg, an RGB input image); Pg.1, Para 1); (different views of a 3D object can be created; Pg.4, Para 5);

The motivation for the above is to generate an accurate articulated object based on the RGB-D images inputted. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, such that the first set of images includes a plurality of RGB-D (red, green, blue, depth) images of the object in the first articulation captured from different viewpoints, as taught by Sun.

Regarding claim 20, Mora and Cheng fail to teach "A system, comprising: one or more memories storing instructions; and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to:".

However, Sun teaches "A system, comprising: one or more memories storing instructions; and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to:" (Server computing system 130 includes one or more processors 132 and memory 134; Pg.6, Para 4); (Memory 134 may store data 136 and instructions 138 executed by processor 132; Pg.6, Para 4);

Claim 20 is directed to a system, and its limitations are similar in scope to the functions performed by the computer-implemented method of claim 1. Therefore, the claim 20 limitations are also rejected with the same rationale as for claim 1.

Claims 9, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mora in view of Cheng, and further in view of Ma (Ma, Liqian, et al., "Sim2Real^2: Actively Building Explicit Physics Model for Precise Articulated Object Manipulation," arXiv preprint arXiv:2302.10693, 02/21/2023 (Year: 2023); "Ma").

Regarding claim 9, Mora and Cheng fail to teach all of claim 9. However, Ma teaches "The computer-implemented method of claim 1, further comprising performing one or more operations to simulate the articulation model in an extended reality (XR) environment." (method to construct the explicit physics model of the single object instance in the simulation; Col Intro, Para 2);

The objects in the simulation are considered an object in an XR environment, since simulations can fall into the categories of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). The motivation for the above is to have a more compatible environment in which to perform the operations to generate an articulated object. Mora, Cheng, and Ma are analogous art, as they are related to image processing for 3D articulation model generation.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mora, as modified by Cheng, to perform one or more operations to simulate the articulation model in an extended reality (XR) environment, as taught by Ma.

Regarding claim 10, Mora and Cheng fail to teach all of claim 10. However, Ma teaches "The computer-implemented method of claim 1, further comprising performing one or more operations to control a robot based on the articulation model." (control the robot to actively interact with the object with a one-step action; Abstract);

The motivation for the above is to have an integral part of the generation of an articulated object.
Therefore, it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Mora modified by Cheng by modifying the comprising performing of one or more operations to control a robot based on the articulation model as taught by Sun. Regarding claim 19, Mora and Cheng fail to teach all of claim 19. However, Ma teaches “The one or more non-transitory computer-readable storage media of claim 11, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to perform the step of performing one or more operations to at least one of simulate the articulation model in an extended reality (XR) environment or control a robot based on the articulation model.” (method to construct the explicit physics model of the single object instance in the simulation; Col Intro, Para 2); (control the robot to actively interact with the object with a one-step action; Abstract); The motivation for the above is to a combination of the simulation environment of the articulated object work with e robot for an accurate generation of an articulated object in and XR environment. Therefore, it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Mora modified by Cheng by modifying the instructions, when executed by the at least one processor, further cause the at least one processor to perform the step of performing one or more operations to at least one of simulate the articulation model in an extended reality (XR) environment or control a robot based on the articulation model as taught by Sun. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
ES-2695157-T3 (Germann) – Discloses a computer-implemented method for rendering a virtual image, given an articulated object model, in which the articulated object model is a computer-based 3D model of a real-world object.

Liu, L., Xu, W., Fu, H., Qian, S., Yu, Q., Han, Y., & Lu, C., "AKB-48: A Real-World Articulated Object Knowledge Base," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14809-14818 – Discloses pose estimation, object reconstruction, and manipulation.

Akman, A., Sahillioğlu, Y., & Sezgin, T. M., "Deep Generation of 3D Articulated Models and Animations from 2D Stick Figures," Computers & Graphics, vol. 109, 2022, pp. 65-74 – Discloses the first method to generate a 3D human model from a single sketched stick figure; the model learns to generate 3D models and animations compatible with 2D sketches.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIGITER D PROTAZI, whose telephone number is (571) 272-7995. The examiner can normally be reached Monday through Friday, 7:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.D.P./Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612

Prosecution Timeline

Jun 11, 2024 – Application Filed
Feb 05, 2026 – Non-Final Rejection, §101 and §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner; grant probability is derived from the career allow rate.
