Prosecution Insights
Last updated: April 19, 2026
Application No. 17/748,739

OBJECT ANIMATION USING NEURAL NETWORKS

Non-Final OA — §101, §103
Filed: May 19, 2022
Examiner: Lhymn, Sarah
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)
Grant Probability: 65% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 4m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 65% (357 granted / 546 resolved; +3.4% vs TC avg — above average)
Interview Lift: +15.2% for resolved cases with interview
Typical Timeline: 2y 4m avg prosecution; 30 currently pending
Career History: 576 total applications across all art units
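The headline figures in this card follow from simple arithmetic on the resolved-case counts shown above. A minimal sketch, assuming the page derives the "With Interview" figure by adding the interview lift to the base allow rate (the helper name is illustrative, not the vendor's actual API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(357, 546)    # counts shown on this page
with_interview = base + 15.2   # page reports a +15.2% interview lift

print(round(base))             # 65 -- the "Career Allow Rate" figure
print(round(with_interview))   # 81 -- the "With Interview" figure
```

Under that assumption the two rounded outputs reproduce the 65% and 81% shown on the card.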

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 546 resolved cases
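One way to sanity-check the table: each row shows both the examiner's rejection rate and its delta vs the Tech Center average, so the implied baseline can be recovered by subtraction. A minimal sketch using the figures above; note that every statute backs out to the same 40.0% baseline, which suggests a single shared TC estimate rather than per-statute averages (an inference from the numbers, not something the page states):

```python
# Rates and deltas copied from the statute table above.
rates = {
    "101": (5.4, -34.6),
    "103": (63.2, 23.2),
    "102": (5.9, -34.1),
    "112": (15.3, -24.7),
}

# Implied Tech Center baseline: examiner rate minus delta.
implied_tc_avg = {s: round(r - d, 1) for s, (r, d) in rates.items()}
print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```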

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment / Arguments

112(b) rejections: Applicant’s amendment overcomes the 112(b) rejections. 112(a) rejections: Applicant’s amendment overcomes the 112(a) rejections. 103 rejections: Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Regarding claims 17-24, the instant claims recite the limitation of "a machine-readable medium". The claims are directed toward an article of manufacture and normally would be statutory. However, the specification does not define the machine-readable medium so as to exclude transitory signals and propagating waves. Applicant is advised to amend the respective claims to exclude such transitory embodiments by adding “non-transitory” to the machine-readable medium, which would render the claims statutory.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-14, 17-22 and 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over Hussen Abdelaziz (U.S. Patent App. Pub. No. 2021/0248804) in view of Ahuja, C., & Morency, L. P. (2019, September). Language2pose: Natural language grounded pose forecasting. In 2019 International Conference on 3D Vision (3DV) (pp. 719-728). IEEE (“Ahuja”), and further in view of Zhang (U.S. Patent App. Pub. No. 2020/0105381 A1). Note: see the 04/22/2025 mail date in the prosecution history for the PTO-892 and NPL reference, which still apply.

Regarding claim 1: Hussen Abdelaziz teaches: a processor, comprising: one or more circuits (para. 42, a processor that can have one or more circuits) to use one or more neural networks to control motion of one or more animated objects (see e.g. claim 21, which teaches that the processor can use a neural network to generate parameters to control the pose of an avatar, whereby an animation is a sequence of poses; this corresponds to a teaching of one or more neural networks controlling motion of one or more animated objects, i.e. an avatar as an animated object. See also claim 15) based, at least in part, on one or more natural language input (see para. 32, which teaches that the system can receive and interpret natural language input in spoken and/or textual form, in combination with para.
33, which teaches that natural language inputs include natural language commands, requests, statements, narratives and/or inquiries. Likewise, see Fig. 7A: 732, and para. 231, which describe a natural language processing module).

Re: the natural language input being motion control instructions that include at least a description of a desired task and a desired manner of motion to direct movement of the one or more animated objects to perform the desired task in the desired manner, consider two alternate obviousness rationales for this claim language.

First rationale: Hussen Abdelaziz teaches that the natural language (“NL”) input can include commands and requests, and the instant reference can use these inputs to animate an avatar (see claim 15 and the above mapping). Hussen Abdelaziz is not particularly limited as to what types of commands, requests or statements can be input by a user. Accordingly, commands that include motion control instructions describing a desired task and manner of motion of an avatar (animated object) to direct movement are taught/suggested as one exemplary embodiment of user NL input, as the instant reference is related to avatar animation and techniques to animate avatars using textual (NL) input (para. 2). Hussen Abdelaziz further teaches that its trained neural network can parse animation parameters based on text/NL input (see para. 271), and teaches, in para. 279, determining parameters for animating movements of the avatar. See also Figs. 10A and 10C. Accordingly, the above features would have been obvious over Hussen Abdelaziz.

Second rationale (alternative): Alternatively, consider Ahuja. Ahuja also teaches a neural architecture, one that can receive natural language input motion control instructions that include at least a description of a desired task and a desired manner of motion. See Fig. 1 (input: natural language description). Here, the example NL input is “A person walks in a circle”.
Task: circular spatial motion; manner of motion: walking. Other examples of natural language inputs to direct motion of one or more animated objects as claimed can be found illustrated in Fig. 4. See Sections 3, 4 and 6 for a more detailed explanation. Accordingly, modifying Hussen Abdelaziz, in view of itself and/or Ahuja, to have obtained the above, such that the NL inputs that both references teach include motion control instructions as claimed, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).

Re: update the one or more neural networks based, at least in part, on: a task reward function indicative of compliance of the directed movement with the desired task, and a separate skill reward function of compliance of the directed movement with the desired manner of motion, consider the following. In analogous art, Zhang teaches, for neural networks (Zhang, Fig. 3, an illustration of a neural network), that it is known to use reward functions, as part of training, to “optimize a metric” (see para. 24). See also para. 41, “the machine learning algorithm 204 is configured, based on such training data, to train a model (e.g. neural network) to optimize a reward function (or performance measure) which places a positive training reward on, for example… [description of functions performed by neural networks of Zhang]”. For additional teaching, see paras. 69, 70, 71, 81-83, 86, 90-92 and the “Reward Function” section, beginning at para. 102.
Modifying the applied references, in view of Zhang, so as to include the above reward functions as tied to desired outcomes of the neural network, per Zhang, and for the network described in Hussen Abdelaziz and/or Ahuja, none of the references being particularly restrictive in the applicability of neural network training principles, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 2: Ahuja teaches: the processor of claim 1, wherein the one or more animated tasks are associated with one or more skills indicated in the one or more natural language inputs to perform the desired motion for the one or more tasks animated tasks in a desired manner (see the above mapping to claim 1; skills could be walking skills, directional skills, movement skills, or athletic skills; see Fig. 4 of Ahuja). It would have been obvious for one of ordinary skill in the art as of the effective filing date of Applicant’s claims to have further modified the applied references, in view of same, to have obtained the above, motivated to achieve enhanced control over graphic commands.
Regarding claim 3: Ahuja teaches: the processor of claim 1, wherein the one or more neural networks comprise an encoder trained to encode the one or more natural language inputs to a point in latent space that corresponds to a description of a skill included in the one or more natural language inputs to perform the desired motion in a desired manner (see Sections 3-4 and Fig. 2. Figure 2 shows two autoencoders, one to encode the NL (natural language) input, which goes to the joint embedding space (point in latent space). Example, Section 3: “As an example, consider a natural language sentence which describes a human’s motion: ‘A person walks in a circle’. The goal of this cross-modal language-to-pose translation task is to generate an animation representing the sentence; i.e. an animation that shows a person following a trajectory of a circle with a walking motion (see figure 1)…”. Here, a description of a skill to perform a desired motion (walking) in a desired manner (circular trajectory). Example, Section 4.1: “To learn a joint embedding space of language and pose, the sentence X1:N and pose Y1:T are first mapped to a latent representation using a sentence encoder pe(X1:N;e) and a pose encoder qe(Y1:T;e) respectively.”. The remainder of Sections 3-4 describes equations and variables to achieve the encoding to latent space.) It would have been obvious for one of ordinary skill in the art as of the effective filing date of Applicant’s claims to have further modified the applied references, in view of same, to have obtained the above, motivated to make use of known machine learning techniques to train and perform desired outcomes.
Regarding claim 4: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the one or more neural networks comprise an encoder trained using a dataset of examples of motion, the examples labelled with descriptions of skills depicted in the examples, the description of skills describing manners of the motion in the examples, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Both Hussen Abdelaziz and Ahuja teach a neural network having an encoder (Hussen Abdelaziz, para. 271) (Ahuja, Fig. 2). Re: trained as recited above, see Hussen Abdelaziz, para. 281, which teaches training using text data, a speech data set, and a reference set of parameters representing one or more movements of the avatar. See also paras. 282-84, 288. Hussen Abdelaziz also teaches supervised training (see e.g. para. 325), which corresponds to labelled training data. Per the prior art, example labels could be walking (see mapping to claim 1), grabbing, turning, etc. The prior art included each element recited in claim 4, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 5: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the one or more neural networks are trained based, at least in part, on one or more examples of motion associated with a point in latent space that corresponds to an encoding of a natural language description of the motion, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). See Ahuja, Fig. 2: motion examples mapped to a point in latent space (i.e. from encoded natural language) can be used for training models. See also Section 4 for a more detailed explanation. Modifying the applied references, such that the neural networks are trained (see also the mapping to claim 4 re: Hussen Abdelaziz and training) using motion examples associated with latent space corresponding to an encoding, per Ahuja, is taught and suggested by the prior art and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 6: see claim 29. Claim 6 is encompassed by, and somewhat broader than, claim 29; the same rationale for rejection applies.

Regarding claim 9: see also claim 1. Hussen Abdelaziz teaches: a system, comprising: one or more processors (claim 21, device comprising one or more processors). The functions the system performs correspond to those of the processor of claim 1; the same rationale for rejection applies.

Regarding claim 10: see claim 2.
These claims are similar; the same rationale for rejection applies. Regarding claim 11: see claim 3. These claims are similar; the same rationale for rejection applies; in some aspects claim 11 is broader than claim 3, and the example in claim 3 corresponds to the depiction of claim 11. Regarding claim 12: see claim 4. These claims are similar; the same rationale for rejection applies. Regarding claim 13: see claim 5. These claims are similar; the same rationale for rejection applies. Regarding claim 14: see claim 6 and/or 29. These claims are similar; the same rationale for rejection applies.

Regarding claim 17: see also claim 1. Hussen Abdelaziz teaches: a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least (claim 1). The instructions of claim 17 correspond to the functions of claim 1; the same rationale for rejection applies. Regarding claim 18: see claim 2. These claims are similar; the same rationale for rejection applies. Regarding claim 19: see claim 3. These claims are similar; the same rationale for rejection applies. Regarding claim 20: see claim 4. These claims are similar; the same rationale for rejection applies. Regarding claim 21: see claim 5. These claims are similar; the same rationale for rejection applies. Regarding claim 22: see claim 6 and/or 29. These claims are similar; the same rationale for rejection applies.

Regarding claim 25: see also claim 1. The method of claim 25 corresponds to the functions of claim 1; the same rationale for rejection applies. Regarding claim 26: see claim 2. These claims are similar; the same rationale for rejection applies. Regarding claim 27: see claim 3. These claims are similar; the same rationale for rejection applies.
Regarding claim 28: Ahuja teaches: the method of claim 25, wherein training the one or more neural networks comprises comparing generated motion of the one or more animated objects to one or more examples of motion associated with a point in latent space that corresponds to an encoding of the one or more natural language inputs (see Sections 3-4 and Fig. 2. Figure 2 shows two autoencoders, one to encode the NL (natural language) input, which goes to the joint embedding space (point in latent space). Example, Section 3: “As an example, consider a natural language sentence which describes a human’s motion: ‘A person walks in a circle’. The goal of this cross-modal language-to-pose translation task is to generate an animation representing the sentence; i.e. an animation that shows a person following a trajectory of a circle with a walking motion (see figure 1)…”. Here, a description of a skill to perform a desired motion (walking) in a desired manner (circular trajectory). Example, Section 4.1: “To learn a joint embedding space of language and pose, the sentence X1:N and pose Y1:T are first mapped to a latent representation using a sentence encoder pe(X1:N;e) and a pose encoder qe(Y1:T;e) respectively.”. The remainder of Sections 3-4 describes equations and variables to achieve the encoding to latent space.) It would have been obvious for one of ordinary skill in the art as of the effective filing date of Applicant’s claims to have further modified the applied references, in view of same, to have obtained the above, motivated to make use of known machine learning techniques to train and perform desired outcomes.
Regarding claim 29: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 25, wherein the training comprises using one or more reward functions indicative of the animated object using a skill described in the one or more natural language inputs and one or more reward functions indicative of the animated object completing the one or more animated tasks described in the one or more natural language inputs, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). See the above mapping to claim 1 and Zhang, re: reward functions, which equally applies here. Modifying the applied references, in view of Zhang, so as to include reward functions as tied to desired outcomes of the neural network, per Zhang, and for the network described in Hussen Abdelaziz and/or Ahuja, none of the references being particularly restrictive in the applicability of neural network training principles, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. The prior art included each element recited in claim 29, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Claims 7, 15, 23 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Hussen Abdelaziz in view of Ahuja, and further in view of Roche (U.S. Patent No. 10,586,369).

Regarding claim 7: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the one or more natural language inputs comprise instructions to direct an interaction of the one or more animated objects with another object in a simulated environment, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). See Roche, e.g. claim 3 (“animation of the first object associated with the first animation that includes the avatar gesturing toward the location associated with the first object”), and the illustration of Fig. 9B: avatar 928, interacting with objects 930, 932, in a virtual environment 922. Similar to Hussen Abdelaziz and Ahuja, Roche also teaches natural language processing and speech recognition to drive avatar animation (C10, last two paragraphs). Modifying the applied references, in view of Roche, so as to include avatars (all applied references teach avatar animation) animated to interact with another object (i.e. a virtual object, per Roche) in a simulated environment (VR/AR, see Roche, C2), is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 15: see claim 7.
These claims are similar; the same rationale for rejection applies. Regarding claim 23: see claim 7. These claims are similar; the same rationale for rejection applies. Regarding claim 30: see claim 7. These claims are similar; the same rationale for rejection applies.

Claims 8, 16, 24 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Hussen Abdelaziz in view of Ahuja, and further in view of Manolakos (U.S. Patent App. Pub. No. 2023/0308920 A1).

Regarding claim 8: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the one or more neural networks comprise a classifier to select a neural network from a plurality of neural networks based at least in part on a task category of the one or more animated tasks, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). In analogous art, Manolakos, relevant to techniques and apparatuses for supporting machine learning components (see para. 2), teaches that it is known to have a classifier configured to select an autoencoder (an autoencoder is a type of neural network) (see e.g. paras. 29-30). One example that Manolakos gives is in Figure 4, in which the one or more classifiers 408 can select one or more autoencoders 410 (neural network) based on (in this example) communication parameters. Modifying the applied references so as to include the teachings of Manolakos with respect to classifiers to select a neural network, in the context of animating avatars and NL inputs (see mapping to claim 1), such that the input is based on a task category (e.g. walking, see mapping to claim 1), is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art.
Stated differently, modification of the prior art such that the teachings of Manolakos, as they relate to machine learning components such as autoencoders that include one or more neural networks, and the use of classifiers to select an autoencoder from a set of autoencoders to use for a specific task (in Manolakos, the tasks are related to wireless communications), are applied to Hussen Abdelaziz, which uses neural networks for the task of animating avatars, and to categorize the tasks of Hussen Abdelaziz into, e.g., categories related to applications (see e.g. Hussen Abdelaziz, para. 83) that the avatar is capable of executing, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art. The prior art included each element recited in claim 8, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention. Regarding claim 16: see claim 8. These claims are similar; the same rationale for rejection applies. Regarding claim 24: see claim 8. These claims are similar; the same rationale for rejection applies.
Regarding claim 31: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 25, further comprising: training a classifier to select the one or more a neural network from a plurality of a neural networks based at least in part on a task category of the one or more animated tasks, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). See e.g. Manolakos, paras. 34, 36 (training classifiers) and/or paras. 83-84, which also describe training classifiers/classifier networks. Modifying the applied references, in view of same, so as to include training the classifier, per Manolakos, to select the neural network, also per Manolakos, based on task categories (see mapping to claim 8), is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, relevant to machine learning.

* * * * *

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn, whose telephone number is (571) 270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sarah Lhymn/
Primary Examiner, Art Unit 2613

Prosecution Timeline

May 19, 2022
Application Filed
Oct 05, 2023
Non-Final Rejection — §101, §103
Nov 03, 2023
Interview Requested
Nov 09, 2023
Applicant Interview (Telephonic)
Nov 09, 2023
Examiner Interview Summary
Jan 16, 2024
Response Filed
Apr 10, 2024
Final Rejection — §101, §103
Jun 26, 2024
Applicant Interview (Telephonic)
Jun 26, 2024
Examiner Interview Summary
Jul 22, 2024
Response after Non-Final Action
Jul 25, 2024
Response after Non-Final Action
Aug 09, 2024
Request for Continued Examination
Aug 12, 2024
Response after Non-Final Action
Sep 23, 2024
Non-Final Rejection — §101, §103
Mar 24, 2025
Response Filed
Apr 17, 2025
Final Rejection — §101, §103
Sep 22, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Response after Non-Final Action
Jan 26, 2026
Request for Continued Examination
Jan 30, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602882
AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602764
METHODS OF ARTIFICIAL INTELLIGENCE-ASSISTED INFRASTRUCTURE ASSESSMENT USING MIXED REALITY SYSTEMS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602746
SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12585888
AUTOMATICALLY GENERATING DESCRIPTIONS OF AUGMENTED REALITY EFFECTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12586163
INTERACTIVELY REFINING A DIGITAL IMAGE DEPTH MAP FOR NON DESTRUCTIVE SYNTHETIC LENS BLUR
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 65%
With Interview: 81% (+15.2%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
