Prosecution Insights
Last updated: April 19, 2026
Application No. 18/769,012

POSE-AWARE NEURAL INVERSE KINEMATICS

Status: Non-Final OA (§102)
Filed: Jul 10, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: ETH ZÜRICH
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (above average; +12.5 pts vs TC avg)
309 granted / 415 resolved
Interview Lift: +18.5 pts allowance on resolved cases with an interview (strong)
Typical Timeline: 2y 4m average prosecution; 22 applications currently pending
Career History: 437 total applications across all art units

Statute-Specific Performance

§101: 10.3% (-29.7 pts vs TC avg)
§103: 57.8% (+17.8 pts vs TC avg)
§102: 16.7% (-23.3 pts vs TC avg)
§112: 13.5% (-26.5 pts vs TC avg)
Tech Center averages are estimates. Based on career data from 415 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Oreshkin et al. (US 20240054671 A1).

Regarding claim 1, Oreshkin discloses a computer-implemented method for generating a pose for a virtual character ([0022] systems and methods described herein provide a flexible, learned IK solver (the SMPL-IK system described below) applicable to a wide variety of human morphologies), comprising: determining a set of joint representations corresponding to a set of joints in the virtual character based on (i) a base pose for the virtual character and (ii) a set of constraints associated with one or more joints included in the set of joints ([0028] a skeleton may include a hierarchical set of joints and may also include constraints on the joints (e.g., length of bones between joints, angular constraints, and more), which may provide a basic structure for the skeleton along with body shape values); generating, via execution of a first neural network, a set of updated joint states for the set of joints based on the set of joint representations ([0032] look-at effector is generic in that it allows a model of a neural network architecture within the ML pose prediction system); and generating, based on the set of updated joint states, an output pose that includes (i) a first set of joint positions for the set of joints and (ii) a first set of joint orientations for the set of joints ([0028] effectors do not define a pose of a character, they provide constraints for a variable number of joints that are used to satisfy a final pose (e.g., at the output of the SMPL-IK pose prediction system 100)).

Regarding claim 2, Oreshkin discloses further comprising training the first neural network using (i) a first loss that is computed between the first set of joint positions and a second set of joint positions included in the base pose and (ii) a second loss that is computed between the first set of joint orientations and a second set of joint orientations included in the base pose ([0062] Individual loss terms may be combined additively (e.g., with loss weight factors for each) into a total loss term).
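The loss construction cited against claims 1-2 (an L2 position term combined additively with a geodesic orientation term, each carrying a loss weight factor, per the quoted [0062] and [0065]) is a standard one. A minimal PyTorch sketch, assuming 3x3 rotation matrices; the function names and weight values are illustrative assumptions, not code from either document:

```python
import torch
import torch.nn.functional as F

def geodesic_loss(R_pred, R_base):
    """Mean geodesic distance between batches of 3x3 rotation matrices.
    The relative rotation's angle is recovered from its trace:
    angle = arccos((trace - 1) / 2)."""
    R_rel = R_pred @ R_base.transpose(-1, -2)
    trace = R_rel.diagonal(dim1=-2, dim2=-1).sum(-1)
    # Clamp for numerical safety before arccos.
    cos_angle = ((trace - 1.0) / 2.0).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    return torch.acos(cos_angle).mean()

def total_pose_loss(pos_pred, pos_base, rot_pred, rot_base,
                    w_pos=1.0, w_rot=0.1):
    """Additive combination of a position loss and an orientation loss,
    with per-term weight factors (weight values here are illustrative)."""
    loss_pos = F.mse_loss(pos_pred, pos_base)     # L2 over joint positions
    loss_rot = geodesic_loss(rot_pred, rot_base)  # geodesic over joint rotations
    return w_pos * loss_pos + w_rot * loss_rot
```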
Regarding claim 3, Oreshkin discloses further comprising training the first neural network based on one or more additional losses associated with the set of constraints ([0063] the L2 loss may be used to supervise output of the GPD module 162 (e.g., predicted joint positions 164) by directly driving a learning process of GPD. In accordance with another embodiment, the L2 loss may be used to supervise the position output of the forward kinematics pass 170 by indirectly driving a training of the IKD module 168, wherein the IKD module 168 learns to produce local rotation angles that result in joint position predictions with small L2 loss after IKD outputs are subjected to the forward kinematics pass).

Regarding claim 4, Oreshkin discloses wherein the first loss is further computed based on a first set of control parameters associated with preservation of the second set of joint positions in the output pose and the second loss is computed based on a second set of control parameters associated with preservation of the second set of joint orientations in the output pose ([0065] a combination of L2 loss and geodesic loss used when training the neural network architecture 102 may provide a benefit of allowing the neural network architecture 102 to learn a high-quality pose representation (e.g., as an output 172)).

Regarding claim 5, Oreshkin discloses wherein determining the set of joint representations comprises: generating, via execution of a second neural network, a first set of embeddings associated with a set of identities for the set of joints ([0142] the training phases 1602 may involve machine learning, in which the training data 1608 is structured (e.g., labeled during preprocessing operations), and the trained machine-learning program 1614 implements a relatively simple neural network 1628 (or one of other machine learning models, as described herein) capable of performing, for example, classification and clustering operations); determining, based on the base pose and the set of constraints, (i) a second set of joint positions for the set of joints and (ii) a second set of joint orientations for the set of joints ([0142] the training phase 1602 may involve deep learning, in which the training data 1608 is unstructured, and the trained machine-learning program 1614 implements a deep neural network 1628 that is able to perform both feature extraction and classification/clustering operations); and converting, via execution of a third neural network, the second set of joint positions and the second set of joint orientations into a second set of embeddings for the set of joints ([0143] if an activation function generates a result that transgresses a particular threshold, an output may be communicated from that neuron (e.g., transmitting neuron) to a connected neuron (e.g., receiving neuron) in successive layers).

Regarding claim 6, Oreshkin discloses wherein converting the set of joint representations into the set of updated joint states comprises generating the set of updated joint states based on the set of joint representations and a set of message-passing iterations ([0098] At the beginning of an iteration loop, an additional effector is added to a set of effectors output from a previous iteration loop).
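Claim 6's "set of message-passing iterations" over joint representations is, in generic terms, a graph-neural-network update over the skeleton's adjacency. A self-contained sketch of one such iteration; the module design is an assumption for illustration, not a mapping to either document:

```python
import torch
import torch.nn as nn

class JointMessagePassing(nn.Module):
    """One message-passing iteration over a skeleton graph.
    states: (batch, num_joints, dim) joint representations;
    adj: (num_joints, num_joints) 0/1 adjacency of the joint hierarchy."""
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Linear(dim, dim)  # transform neighbor states into messages
        self.update = nn.GRUCell(dim, dim)  # fold aggregated messages into each state

    def forward(self, states, adj):
        batch, num_joints, dim = states.shape
        # Mean-aggregate messages from adjacent joints.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        msgs = (adj / deg) @ self.message(states)
        # GRUCell expects 2-D input, so flatten joints into the batch.
        new = self.update(msgs.reshape(-1, dim), states.reshape(-1, dim))
        return new.reshape(batch, num_joints, dim)
```

Stacking several such modules yields the iterative refinement the claim language suggests.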
Regarding claim 7, Oreshkin discloses wherein generating the pose comprises: converting, via execution of one or more additional neural networks, the set of updated joint states into the first set of joint positions and the first set of joint orientations ([0149] one or more artificial intelligence agents, such as one or more machine-learned algorithms or models and/or a neural network of one or more machine-learned algorithms or models may be trained iteratively (e.g., in a plurality of stages) using a plurality of sets of input data); and updating the first set of joint positions and the first set of joint orientations based on a rest pose for the virtual character ([0149] continuously updated and retrained artificial intelligence agents may then be applied to subsequent novel input data to generate one or more of the outputs).

Regarding claim 8, Oreshkin discloses wherein the set of constraints comprises at least one of a positional constraint, an orientation constraint, or a look-at constraint ([0028] a skeleton may include a hierarchical set of joints and may also include constraints on the joints (e.g., length of bones between joints, angular constraints, and more), which may provide a basic structure for the skeleton along with body shape values).

Regarding claim 9, Oreshkin discloses wherein the first neural network comprises a set of cross-layer attention blocks associated with a plurality of resolutions for a skeletal structure of the virtual character ([0136] different machine-learning tools may be used. For example, Logistic Regression (LR), Naive-Bayes, Random Forest (RF), Gradient Boosted Decision Trees (GBDT), neural networks (NN), matrix factorization, and Support Vector Machines (SVM) tools may be used. In some examples, one or more ML paradigms may be used: binary or n-ary classification, semi-supervised learning, etc. In some examples, time-to-event (TTE) data will be used during model training. In some examples, a hierarchy or combination of models (e.g. stacking, bagging) may be used.).

Regarding claim 10, Oreshkin discloses wherein the first neural network comprises a graph neural network ([0146] A trained neural network model (e.g., a trained machine learning program 1614 using a neural network 1628) may be stored in a computational graph format, according to some examples. An example computational graph format is the Open Neural Network Exchange (ONNX) file format, an open, flexible standard for storing models which allows reusing models across deep learning platforms/tools, and deploying models in the cloud (e.g., via ONNX runtime).).
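The "forward kinematics pass" cited from [0063], and claim 7's rest-pose update, correspond to the textbook operation of composing local rotations down the joint hierarchy. A generic sketch; the parents-list hierarchy encoding is an assumption for illustration:

```python
import torch

def forward_kinematics(local_rot, rest_offsets, parents):
    """Compose local joint rotations down the hierarchy into global
    joint positions and orientations.
    local_rot:    (batch, J, 3, 3) local rotation matrices
    rest_offsets: (J, 3) rest-pose bone offsets
    parents:      parents[j] is joint j's parent; parents[0] == -1.
    Assumes joints are topologically ordered (parent before child)."""
    batch, J = local_rot.shape[:2]
    glob_rot = [local_rot[:, 0]]                # root orientation
    glob_pos = [local_rot.new_zeros(batch, 3)]  # root at the origin
    for j in range(1, J):
        p = parents[j]
        glob_rot.append(glob_rot[p] @ local_rot[:, j])
        # Rotate the rest-pose offset by the parent's global rotation.
        glob_pos.append(glob_pos[p] + glob_rot[p] @ rest_offsets[j])
    return torch.stack(glob_pos, dim=1), torch.stack(glob_rot, dim=1)
```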
Regarding claim 11, Oreshkin discloses one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations ([0117] read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies) comprising: determining a set of joint representations corresponding to a set of joints in a virtual character based on (i) a base pose for the virtual character and (ii) a set of constraints associated with one or more joints included in the set of joints ([0028] a skeleton may include a hierarchical set of joints and may also include constraints on the joints (e.g., length of bones between joints, angular constraints, and more), which may provide a basic structure for the skeleton along with body shape values); generating, via execution of a first neural network, a set of updated joint states for the set of joints based on the set of joint representations ([0032] look-at effector is generic in that it allows a model of a neural network architecture within the ML pose prediction system); and generating, based on the set of updated joint states, an output pose that includes (i) a first set of joint positions for the set of joints and (ii) a first set of joint orientations for the set of joints ([0028] effectors do not define a pose of a character, they provide constraints for a variable number of joints that are used to satisfy a final pose (e.g., at the output of the SMPL-IK pose prediction system 100)).

Regarding claim 12, Oreshkin discloses wherein the operations further comprise training the first neural network using a first loss that is computed based on the first set of joint positions, a second set of joint positions included in the base pose, and a first set of control parameters associated with preservation of the second set of joint positions in the output pose ([0062] Individual loss terms may be combined additively (e.g., with loss weight factors for each) into a total loss term).

Regarding claim 13, Oreshkin discloses wherein the operations further comprise further training the first neural network using a second loss that is computed based on the first set of joint orientations, a second set of joint orientations included in the base pose, and a second set of control parameters associated with preservation of the second set of joint orientations in the output pose ([0065] a combination of L2 loss and geodesic loss used when training the neural network architecture 102 may provide a benefit of allowing the neural network architecture 102 to learn a high-quality pose representation (e.g., as an output 172)).
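Claims 12-13 add per-joint "control parameters" that govern how strongly the base pose is preserved; read generically, this is a weighted loss. A short sketch under that assumption (the weighting scheme is illustrative, not disclosed by either document):

```python
import torch

def weighted_position_loss(pos_pred, pos_base, control):
    """L2 position loss weighted per joint by control parameters in
    [0, 1] that set how strongly each base-pose joint position is
    preserved in the output pose.
    pos_pred, pos_base: (batch, J, 3); control: (J,)"""
    per_joint = ((pos_pred - pos_base) ** 2).sum(-1)  # (batch, J) squared errors
    return (control * per_joint).mean()
```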
Regarding claim 14, Oreshkin discloses wherein determining the set of joint representations comprises: generating a set of joint embeddings included in the set of joint representations based on (i) a set of identities for the set of joints, (ii) a set of control parameters associated with preservation of the base pose in the output pose, and (iii) the set of constraints ([0142] the training phases 1602 may involve machine learning, in which the training data 1608 is structured (e.g., labeled during preprocessing operations), and the trained machine-learning program 1614 implements a relatively simple neural network 1628 (or one of other machine learning models, as described herein) capable of performing, for example, classification and clustering operations); and determining, based on the base pose and the set of constraints, a set of initial joint states corresponding to (i) a second set of joint positions for the set of joints and (ii) a second set of joint orientations for the set of joints ([0142] the training phase 1602 may involve deep learning, in which the training data 1608 is unstructured, and the trained machine-learning program 1614 implements a deep neural network 1628 that is able to perform both feature extraction and classification/clustering operations).

Regarding claim 15, Oreshkin discloses wherein converting the set of joint representations into the set of updated joint states comprises: computing a set of attention scores based on the set of joint representations ([0095] Retargeting refers to the task of transferring a pose of a first character to a target character, wherein the first character and target character have a different morphology (e.g., bone lengths) and possibly a different topology (e.g., number of joints, connectivity, etc.)); and generating the set of updated joint states based on the set of attention scores ([0095] Retargeting may be applied between skeletons of different morphologies and even topologies. For example, retargeting may be used to transfer a pose of a human captured using Motion Capture (MoCap) technology onto a custom humanoid character.).

Regarding claim 16, Oreshkin discloses wherein the set of attention scores is further computed based on a plurality of graphs corresponding to different resolutions associated with the set of joints ([0095] retargeting may be used to transfer a pose of a human captured using Motion Capture (MoCap) technology onto a custom humanoid character.).

Regarding claim 17, Oreshkin discloses wherein the set of attention scores is further computed based on a set of masks associated with the one or more joints ([0097] Learned IK tools (e.g., including SMPL-IK) allow for pose authoring using very sparse constraints (e.g. using 5-6 effectors).).
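Claims 15-17 recite attention scores computed over the joint representations, optionally restricted by masks. A bare scaled-dot-product sketch with a joint mask, illustrating the general technique rather than either document's implementation:

```python
import math
import torch

def masked_joint_attention(reprs, mask, w_q, w_k, w_v):
    """Scaled dot-product attention over joint representations.
    reprs: (batch, J, dim) joint representations
    mask:  (J, J) bool, True where joint i may attend to joint j
           (each joint should at least attend to itself)
    w_q, w_k, w_v: (dim, dim) projection matrices."""
    q, k, v = reprs @ w_q, reprs @ w_k, reprs @ w_v
    scores = q @ k.transpose(-1, -2) / math.sqrt(q.shape[-1])
    # Masked pairs get -inf so softmax assigns them zero weight.
    scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # the claimed "attention scores"
    return weights @ v  # updated joint states generated from the scores
```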
Regarding claim 18, Oreshkin discloses wherein generating the output pose comprises: converting, via execution of one or more additional neural networks, the set of updated joint states into the first set of joint positions and the first set of joint orientations ([0149] one or more artificial intelligence agents, such as one or more machine-learned algorithms or models and/or a neural network of one or more machine-learned algorithms or models may be trained iteratively (e.g., in a plurality of stages) using a plurality of sets of input data); and updating the first set of joint positions and the first set of joint orientations via a forward kinematics technique ([0149] continuously updated and retrained artificial intelligence agents may then be applied to subsequent novel input data to generate one or more of the outputs).

Regarding claim 19, Oreshkin discloses wherein the first neural network comprises a graph transformer neural network ([0146] A trained neural network model (e.g., a trained machine learning program 1614 using a neural network 1628) may be stored in a computational graph format, according to some examples. An example computational graph format is the Open Neural Network Exchange (ONNX) file format, an open, flexible standard for storing models which allows reusing models across deep learning platforms/tools, and deploying models in the cloud (e.g., via ONNX runtime).).

Regarding claim 20, Oreshkin discloses a system ([0022] systems and methods described herein provide a flexible, learned IK solver (the SMPL-IK system described below) applicable to a wide variety of human morphologies), comprising: one or more memories that store instructions ([0117] read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies), and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to perform operations ([0117] a diagrammatic representation of the machine 800 in the example form of a computer system, within which instructions 816 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 800 to perform any one or more of the methodologies discussed herein may be executed) comprising: determining a set of joint representations corresponding to a set of joints in a virtual character based on (i) a base pose for the virtual character and (ii) a set of constraints associated with one or more joints included in the set of joints ([0028] a skeleton may include a hierarchical set of joints and may also include constraints on the joints (e.g., length of bones between joints, angular constraints, and more), which may provide a basic structure for the skeleton along with body shape values); generating, via execution of a first neural network, a set of updated joint states for the set of joints based on the set of joint representations ([0032] look-at effector is generic in that it allows a model of a neural network architecture within the ML pose prediction system); and generating, based on the set of updated joint states, an output pose that includes (i) a first set of joint positions for the set of joints and (ii) a first set of joint orientations for the set of joints ([0028] effectors do not define a pose of a character, they provide constraints for a variable number of joints that are used to satisfy a final pose (e.g., at the output of the SMPL-IK pose prediction system 100)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Sun et al. (US 20210232924 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL whose telephone number is (571) 272-8964. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Jul 10, 2024: Application Filed
Dec 24, 2025: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847: SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12599838: APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12592004: IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12591947: DISTORTION-BASED IMAGE RENDERING. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12584296: Work Machine Display Control System, Work Machine Display System, Work Machine, Work Machine Display Control Method, And Work Machine Display Control Program. Granted Mar 24, 2026 (2y 5m to grant).
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 93% (+18.5 pts)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
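As a rough consistency check, the headline figures line up if the with-interview probability is modeled as the career allow rate plus the interview lift (an assumed additive combination):

$$\frac{309}{415} \approx 74.5\% \qquad 74.5\% + 18.5\ \text{pts} \approx 93\%$$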
