Prosecution Insights
Last updated: April 19, 2026
Application No. 18/731,055

METHODS AND SYSTEMS FOR VIRTUAL ASSET RENDERING BASED ON ARTICULATION DATA

Non-Final OA: §102, §103, §112
Filed: May 31, 2024
Examiner: USSERY, CAIDEN ALEXANDER
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; grants 0% of cases; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 8 across all art units (8 currently pending)
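The two headline cards above reduce to simple ratios over this examiner's resolved cases. A minimal sketch of that computation follows; the ResolvedCase record, its field names, and the empty sample data are all invented for illustration, not taken from any real docket export.

```python
# Hypothetical ResolvedCase records; real analytics would come from a
# docket export (e.g., USPTO file-wrapper data), which this does not model.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue as a patent?
    had_interview: bool  # was an examiner interview held?

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: granted / resolved (0.0 when nothing is resolved)."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate delta between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

cases: list[ResolvedCase] = []  # this examiner: 0 resolved cases so far
print(f"Career allow rate: {allow_rate(cases):.0%}")    # 0%
print(f"Interview lift: {interview_lift(cases):+.1%}")  # +0.0%
```

With zero resolved cases, both metrics collapse to their defaults, which is why the cards read 0% and +0.0%.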

Statute-Specific Performance

§103: 50.0% (+10.0% vs TC avg)
§102: 44.4% (+4.4% vs TC avg)
§112: 5.6% (-34.4% vs TC avg)
TC average comparisons are estimates; based on career data from 0 resolved cases.
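For transparency, the "vs TC avg" figures above are plain differences against the Tech Center average, which the printed deltas imply is about 40% for each statute. A quick sketch; the dictionary layout is an assumption, not a real API.

```python
# Overcome rates come from the cards above; the 40% TC average is
# implied by the printed deltas (e.g., 50.0% - 10.0% = 40.0%).
examiner_overcome = {"§103": 0.500, "§102": 0.444, "§112": 0.056}
TC_AVERAGE = 0.400  # Tech Center 2600 average estimate

for statute, rate in examiner_overcome.items():
    delta = rate - TC_AVERAGE
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")

# §103: 50.0% (+10.0% vs TC avg)
# §102: 44.4% (+4.4% vs TC avg)
# §112: 5.6% (-34.4% vs TC avg)
```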

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because of the typo “one or more outputs of the IA model are obtained…”. If “IA” is a new term, please define the terminology; if it is meant to read “AI” (Artificial Intelligence), please make the correction. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4, 11-13, & 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claims state “… each of the at least one of the plurality of stationary states or the plurality of dynamic states…”, which is unclear because the statement could be interpreted in more than one way: as one or more of the pluralities, or as one or more of the states themselves. The examiner will be using the broadest reasonable interpretation, in this case one or more of the stationary states or dynamic states, selected from the respective pluralities.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 5-11, & 14-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ramsey Jones et al. (International Pat. Pub. App. WO-2021217088-A1, hereinafter “Jones”).

Regarding claims 1, 10, & 18, Jones teaches [a] method comprising: identifying characteristic data indicating one or more physical characteristics of an object “At block 320, a template three-dimensional (3D) mesh associated with an object is obtained.
In some implementations, a category of the object may be determined based on an image analysis of the provided 2D image” (¶ [0098]), where determining an image category through image matching compares identified characteristic data; providing the characteristic data as an input to an artificial intelligence (AI) model “At block 310, a two-dimensional (2D) image of an object is provided as input to a trained machine learning model” (¶ [0097]), where the 2D image is to be compared with other 2D images to generate a 3D model and the 2D image features are interpreted as characteristic data; obtaining one or more outputs of the AI model “In some aspects, the described systems and methods provide for one or more models to generate a three-dimensional reconstruction from a two-dimensional image of an object.” (¶ [0047]), where one or more models can be generated; wherein the one or more outputs comprise articulation data indicating at least one of a stationary state or a dynamic state of the object based on the physical characteristics of the object “In some embodiments, the variation in the distinct views and scenes for different versions of scenarios may be data-driven. Distinct views and scenes may be generated for different versions of scenarios using the logic associated with potential animals, plants, and terrain features in ways that adhere to human expectations.” (¶ [0035]), where using the logic associated with the human expectations of an identified object is understood to be articulating the data for model generation; and updating, based on the articulation data, a model file associated with a virtual asset corresponding to the object, wherein the model file, when executed, creates a rendering of the virtual asset in a virtual scene associated with a three-dimensional (3D) graphics platform according to at least one of the stationary state or the dynamic state of the object “The rig may be utilized to animate the character and to simulate motion and action by the character. The 3D model may be represented as a data structure, and one or more parameters of the data structure may be modified to change various properties of the character …” (¶ [0086]), where the model, or virtual asset, is represented as a data structure, based on the information provided to and assessed by the artificial intelligence “FIG. 2 is a diagram of an example system architecture to generate 3D meshes from 2D images in an online gaming context” (¶ [0061]), where stored data can be pulled and executed by a rendering engine.

[Image: media_image1.png, grayscale]

In regards to claim 10, claim 1 is substantially similar to claim 10, hence the rejection analysis for claim 1 is also applied to claim 10. Jones teaches the additional limitations “[a] system comprising: a set of one or more processing devices to perform operations …” (¶ [0174]).

In regards to claim 18, claim 1 is substantially similar to claim 18, hence the rejection analysis for claim 1 is also applied to claim 18.
Jones teaches the additional limitations “[a] processor comprising a set of one or more processing units to: …” (¶ [0174]).

Regarding claims 2, 11, & 19, Jones teaches [t]he method of claim 1, wherein the articulation data indicates at least one of a plurality of stationary states or a plurality of dynamic states of the object based on the physical characteristics of the object, wherein each of the at least one of the plurality of stationary states or the plurality of dynamic states corresponds to a respective physical behavior of the object in an environment corresponding to the virtual scene “In some embodiments, the variation in the distinct views and scenes for different versions of scenarios may be data-driven. Distinct views and scenes may be generated for different versions of scenarios using the logic associated with potential animals, plants, and terrain features in ways that adhere to human expectations. For example, in the generated scenario, animals that should swarm, swarm, and animals that should fly, fly, while animals that should mingle and meander, navigate the terrain as they would in real life” (¶ [0035]), where the data used to create a scene is articulated, or used to generate animations corresponding with the logic or human expectations of the identified object.

In regards to claim 11, claim 2 is substantially similar to claim 11, hence the rejection analysis for claim 2 is also applied to claim 11. Jones teaches the additional limitations “[t]he system of claim 10 …” (¶ [0174]).

In regards to claim 19, claim 2 is substantially similar to claim 19, hence the rejection analysis for claim 2 is also applied to claim 19. Jones teaches the additional limitations “[t]he processor of claim 18 …” (¶ [0174]).

Regarding claims 5 & 14, Jones teaches [t]he method of claim 1, further comprising: receiving a request for access to the virtual asset in a rendering of the virtual scene at a client device connected to the 3D graphics platform “The template 3D mesh may be selected from a set of previously generated and stored, template meshes, and may include both user generated and automatically generated meshes. In some implementations, multiple template meshes may be presented to the user and a template mesh selected based on input received from a user. In some implementations, a plurality of template meshes may be selected for the performance of methods 300” (¶ [0099]), where the user may request a previously generated and stored mesh/model; and providing the client device with access to at least a portion of the model file in accordance with the request “Some implementations can have one or more blocks of method 300 performed by one or more other devices (e.g., other client devices or server devices) that can send results or data to the first device.” (¶ [0096]), where the user may request a specific 3D mesh, or virtual asset, and the requesting device will be provided with access from another client device or the server based on the request.

In regards to claim 14, claim 5 is substantially similar to claim 14, hence the rejection analysis for claim 5 is also applied to claim 14.
Jones teaches the additional limitations “[t]he system of claim 10 …” (¶ [0174]).

Regarding claims 6 & 15, Jones teaches [t]he method of claim 5, further comprising: determining one or more simulated physical conditions associated with the rendering of the virtual scene at the client device “Distinct views and scenes may be generated for different versions of scenarios using the logic associated with potential animals, plants, and terrain features in ways that adhere to human expectations” (¶ [0035]); identifying, from the model file, one or more instructions pertaining to at least one of a target stationary state or a target dynamic state of the virtual asset in the virtual scene based on the determined one or more simulated physical conditions “One general aspect includes a non-transitory computer-readable medium may include instructions that. The non-transitory computer-readable medium also includes providing a two-dimensional (2D) image of the object as input to the trained machine learning model; obtaining a template three-dimensional (3D) mesh; and generating, using the trained machine learning model and based on the 2D image and the template 3D mesh, a 3D mesh for the object, where the 3D mesh for the object is usable to map a texture or to generate a 3D animation of the object” (¶ [0010]), where the instructions are the provided model generation request, or 2D image, and the target stationary or dynamic state is the template 3D mesh, which can be made dynamic via an applied animation template; “At block 320, a template three-dimensional (3D) mesh associated with an object is obtained. In some implementations, a category of the object may be determined based on an image analysis of the provided 2D image. As described earlier, objects and/or characters in virtual environments may be implemented as a 3D model and may include a surface representation used to draw the object/character (also known as a skin or mesh) and a hierarchical set of interconnected bones (also known as a skeleton or rig). The rig may be utilized to animate the character and to simulate motion and action by the object” (¶ [0098]); and providing an indication of the identified one or more instructions to the client device with the model file for presentation via a user interface (UI) of the client device “One or more methods described herein can be run in a standalone program that can be run on any type of computing device, a program run on a web browser, a mobile application (“app”) run on a mobile computing device (e.g., cell phone, smart phone, tablet computer, wearable device (wristwatch, armband, jewelry, headwear, goggles, glasses, etc.), laptop computer, etc.). In one example, a client/server architecture can be used, e.g., a mobile computing device (as a client device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display)” (¶ [0172]).

In regards to claim 15, claim 6 is substantially similar to claim 15, hence the rejection analysis for claim 6 is also applied to claim 15. Jones teaches the additional limitations “[t]he system of claim 10 …” (¶ [0174]).

Regarding claims 7 & 16, Jones teaches [t]he method of claim 1, wherein the AI model is a large language model “At block 330, a 3D mesh for the object is generated using the trained machine learning model based on the 2D image and the template 3D mesh” (¶ [0100]); it is noted that the learning model may be a different artificial intelligence.
In regards to claim 16, claim 7 is substantially similar to claim 16, hence the rejection analysis for claim 7 is also applied to claim 16. Jones teaches the additional limitations “[t]he system of claim 10 …” (¶ [0174]).

Regarding claim 8, Jones teaches [t]he method of claim 1, wherein the characteristic data comprises at least one of (¶ [0007]): image data comprising one or more images depicting the one or more physical characteristics of the object, or derivative data obtained based on an output of one or more generative AI models “prior to obtaining the template 3D mesh, the category of the object may be determined based on the 2D image using image matching. In some implementations, image segmentation may be performed to determine a category of the object. In some implementations, the category of the object may be specified based on user input” (¶ [0104]); additionally, “one or more textures from the 2D image may be mapped to the 3D mesh of the object, depending on texture information in the 2D image. In some implementations, a semantic segmentation may be performed, and texture information from different portions of the 2D image may be mapped to corresponding portions of the 3D mesh of the object” (¶ [0105]). The image data is derived from multiple characteristics, determined through several methods (¶ [0102] – [0109]), e.g., texture information.

Regarding claims 9 & 17, Jones teaches [t]he method of claim 1, wherein the 3D graphics platform is associated with a computing system comprised in at least one of (¶ [0007]): a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for three-dimensional (3D) assets; a system for performing deep learning operations; a system for performing operations using a large language model (LLM); a system for performing synthetic data generation; a system for generating synthetic data; a system for presenting at least one of virtual reality content, augmented reality content, or mixed reality content; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources “In some implementations, online assessment platform 102 or client device 110 may include the assessment engine 104 or virtual assessment 112. In some implementations, assessment engine 104 may be used for the development or execution of assessments 105. For example, assessment engine 104 may include a rendering engine (“renderer”) for 2D, 3D, VR, or AR graphics, a physics engine, a collision detection engine (and collision response), sound engine, scripting functionality, haptics engine, artificial intelligence engine, networking functionality, streaming functionality, memory management functionality, threading functionality, scene graph functionality, or video support for cinematics, among other features” (¶ [0056]); it is also noted that this may be implemented in other methods/engines.

In regards to claim 17, claim 9 is substantially similar to claim 17, hence the rejection analysis for claim 9 is also applied to claim 17. Jones teaches the additional limitations “[t]he system of claim 10 …” (¶ [0174]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-4, 12-13, & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Daniel Shapiro et al. (U.S. Pat. Pub. No. US-20240176321-A1, hereinafter “Shapiro”).

Regarding claims 3, 12, & 20, Jones teaches [t]he method of claim 2, wherein updating the model file associated with the virtual asset comprises “The 3D model may be represented as a data structure, and one or more parameters of the data structure may be modified to change various properties of the character” (Jones, ¶ [0086]): obtaining, for each of the at least one of the plurality of stationary states or the plurality of dynamic states, a set of instructions associated with rendering the virtual asset in the virtual scene according to at least one of a respective stationary state or a respective dynamic state “The components of the assessment engine 104 may generate commands that help compute and render the assessment (e.g., rendering commands, collision commands, physics commands, etc.)” (Jones, ¶ [0056]), where the commands generated by the assessment engine to render the model are instructions associated with rendering the virtual asset based on the user selection (Jones, ¶ [0058]).

Jones does not explicitly teach updating the model file to include the set of instructions for each of the at least one of the plurality of stationary states or the plurality of dynamic states.

Shapiro teaches updating the model file to include the set of instructions for each of the at least one of the plurality of stationary states or the plurality of dynamic states “In some embodiments, the CNC machine executes an implementation file to create a rendered fabrication result, where the implementation file includes the machine-generated rendering instructions” (Shapiro, ¶ [0056]); additionally, “Transforming the SVG image into rendering instructions, and in some instances, additionally packaging the rendering instructions into an implementation file can be performed by the same AI model, a different AI model, or perhaps via other computing methods” (Shapiro, ¶ [0120]).

In regards to claim 12, claim 3 is substantially similar to claim 12, hence the rejection analysis for claim 3 is also applied to claim 12. Jones teaches the additional limitations “[t]he system of claim 10 …” (Jones, ¶ [0174]).

In regards to claim 20, claim 3 is substantially similar to claim 20, hence the rejection analysis for claim 3 is also applied to claim 20.
Jones teaches the additional limitations “[t]he processor of claim 18 …” (Jones, ¶ [0174]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jones's method of virtual asset generation using AI, with the asset stored to be rendered later, with the method of including the rendering instructions taught by Shapiro, so as to include all relevant data in one file. The suggestion/motivation to do so would have been to improve the time required to load a 3D asset.

Regarding claims 4 & 13, Jones in view of Shapiro teaches [t]he method of claim 3, wherein the set of instructions for each of the at least one of the plurality of stationary states or the plurality of dynamic states is included in the one or more outputs of the AI model, and wherein obtaining the set of instructions for each of the at least one of the plurality of stationary states or the plurality of dynamic states comprises “Transforming the SVG image into rendering instructions, and in some instances, additionally packaging the rendering instructions into an implementation file can be performed by the same AI model, a different AI model, or perhaps via other computing methods” (Shapiro, ¶ [0120]): extracting the set of instructions for each of the at least one of the plurality of stationary states or the plurality of dynamic states from the one or more outputs of the AI model “… a client/server architecture can be used, e.g., a mobile computing device (as a client device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display)” (Jones, ¶ [0172]), where a client device receives, or extracts, the data or image information from a server.

In regards to claim 13, claim 4 is substantially similar to claim 13, hence the rejection analysis for claim 4 is also applied to claim 13. Jones teaches the additional limitations “[t]he system of claim 10 …” (Jones, ¶ [0174]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAIDEN ALEXANDER USSERY, whose telephone number is (571) 272-1192. The examiner can normally be reached Monday - Friday, 7:30AM - 5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.A.U./Examiner, Art Unit 2611 /TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611
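To make the claim-1 mapping above easier to follow, here is a minimal sketch of the pipeline the rejection describes: characteristic data is identified, fed to an AI model, articulation data comes back, and a model file is updated for rendering. Every identifier below is hypothetical; it illustrates only the claim language as characterized in the rejection, not the application's or Jones's actual implementation.

```python
# Hypothetical sketch of the claim-1 pipeline; no name below comes from
# the application or from Jones.
from dataclasses import dataclass, field

@dataclass
class ArticulationData:
    stationary_states: list[str] = field(default_factory=list)  # e.g. "perched"
    dynamic_states: list[str] = field(default_factory=list)     # e.g. "flying"

def identify_characteristics(image: bytes) -> dict:
    """Step 1: identify characteristic data (physical characteristics)."""
    return {"category": "bird", "has_wings": True}  # stand-in analysis

def run_ai_model(characteristics: dict) -> ArticulationData:
    """Steps 2-3: provide the characteristics to an AI model and obtain
    outputs that include articulation data (stationary/dynamic states)."""
    if characteristics.get("has_wings"):
        return ArticulationData(["perched"], ["flying"])
    return ArticulationData(["idle"], ["walking"])

def update_model_file(path: str, articulation: ArticulationData) -> None:
    """Step 4: update the virtual asset's model file so a 3D graphics
    platform can render the asset in each state."""
    with open(path, "a", encoding="utf-8") as f:
        for state in articulation.stationary_states + articulation.dynamic_states:
            f.write(f"state {state}\n")

articulation = run_ai_model(identify_characteristics(b"...2D image bytes..."))
update_model_file("bird_asset.model", articulation)
```

Seen this way, the §112 dispute is about the phrase selecting states from the two lists, and the §103 combination with Shapiro goes to packaging per-state rendering instructions into the model file itself.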

Prosecution Timeline

May 31, 2024: Application Filed
Jan 28, 2026: Non-Final Rejection — §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
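The note above says the grant probability is derived from the career allow rate, but with 0 resolved cases a raw rate carries no signal, so any sensible estimate has to lean on a prior. A hypothetical sketch follows: the pseudo-count blend and its weight are invented for illustration, and the 62% prior is simply the Tech Center average implied by the -62.0% delta on the allow-rate card.

```python
def grant_probability(granted: int, resolved: int, prior: float) -> float:
    """Shrink the career allow rate toward a prior when the examiner has
    few resolved cases (a simple pseudo-count blend; weights invented)."""
    pseudo = 10  # assumed prior strength, purely illustrative
    return (granted + pseudo * prior) / (resolved + pseudo)

# This examiner: 0 granted / 0 resolved, so the estimate is pure prior.
# 0.62 is the TC average implied by the -62.0% allow-rate delta above.
print(f"Estimated grant probability: {grant_probability(0, 0, prior=0.62):.0%}")
```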
