Prosecution Insights
Last updated: April 19, 2026
Application No. 18/172,239

REDUCING DOMAIN SHIFT IN NEURAL MOTION CONTROLLERS

Status: Non-Final Office Action (§103)
Filed: Feb 21, 2023
Examiner: HOANG, PHI
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: ETH ZÜRICH
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82%, above average (756 granted / 928 resolved; +19.5% vs TC avg)
Interview Lift: +17.0%, a strong lift (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 25 applications currently pending
Career History: 953 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 928 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01 December 2025 was filed after the mailing date of the Final Rejection on 22 September 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments filed 19 November 2025 with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8, 10-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gafni et al. (US 11,017,560 B1) in view of Holden et al. ("Phase-functioned Neural Networks for Character Control").

Regarding claim 1, Gafni discloses a computer-implemented method for training a neural motion controller, the method comprising: determining a first set of features associated with a first control signal for a virtual character (column 5, lines 4-17 and column 7, lines 26-37: simulated input control for motion of a character in an input video); matching the first set of features to a first sequence of motions included in a plurality of sequences of motions (column 5, lines 4-17: the input video having characters or objects having a sequence of motion that is associated with the input control; column 6, lines 28-39); and training the neural motion controller based on one or more motions included in the first sequence of motions and the first control signal (Figure 2, column 6, lines 28-39, and column 7, lines 38-54: the sequence of motion in video is used with the simulated input control to train a pose prediction network). Gafni does not clearly disclose the first control signal generated via an input device. Holden discloses a capture studio for obtaining motion capture data having control parameters used for training a system for character control (Section 4, paragraphs 1-3).
Gafni discloses steps for training a neural motion controller by training a pose prediction network using input video and associated simulated input control, which differs from the claimed process by the substitution of an input device for generating a first control signal. Holden discloses the substituted input device for generating a first control signal, using a capture studio for obtaining motion capture data for training a system for character control. As a result, both functions were known in the art to enable a person of ordinary skill in the art to train a system for controlling the motion of a character. Gafni's steps for using input video for training a pose prediction network could have been substituted with the motion capture data of Holden and the results would have been predictable, resulting in training a pose prediction network using motion capture data and associated simulated input control. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 2, Gafni discloses executing the trained neural motion controller to generate one or more additional motions based on a second set of features associated with a second control signal for the virtual character (column 7, line 55 - column 8, line 3: more than one input control signal for a desired direction of a character).

Regarding claim 3, Gafni in view of Holden discloses matching a second set of features associated with the first control signal to a second sequence of motions included in the plurality of sequences of motions (Gafni, column 7, line 55 - column 8, line 3: multiple input directions can be provided for any sequence of poses from the input motion capture data; Holden, Section 4, paragraph 3); and training the neural motion controller to generate one or more additional motions included in the second sequence of motions based on the first control signal (Gafni, column 6, lines 28-39: any sequence of motion in a sequence of images can be used with any directional control input to train the pose prediction network).

Regarding claim 4, Gafni discloses training the neural motion controller to generate a transition between the one or more motions and the one or more additional motions (column 6, lines 42-50: transition poses).

Regarding claim 6, Gafni discloses all limitations as discussed in claim 1. Gafni does not clearly disclose wherein determining the first set of features comprises: determining a velocity associated with the first control signal; and generating the first set of features for one or more future points in time based on the velocity and a current pose associated with the virtual character. However, Gafni discloses changes in the position of a character over time based on an input video (column 5, lines 4-17). It would have been obvious to one of ordinary skill in the art that a change in position and direction with respect to time is a velocity provided by the direction control.
Regarding claim 8, Gafni discloses wherein training the neural motion controller comprises: inputting the first control signal into the neural motion controller (column 7, lines 38-54: simulated motion control signal is applied to the pose prediction network); and training the neural motion controller based on a loss computed between one or more outputs generated by the neural motion controller from the first control signal and the one or more motions (column 11, lines 17-26: application of loss during training).

Regarding claim 10, Gafni discloses wherein the first set of features comprises at least one of a root position, a position of a body part, a trajectory position, a trajectory direction (column 5, lines 4-17: trajectory of a center of mass for the character), or a root velocity.

Regarding claims 11 and 20, similar reasoning as discussed in claim 1 is applied. Furthermore, with regard to claims 11 and 20, Gafni discloses one or more non-transitory computer-readable media or memories storing instructions executed by a processor (column 17, lines 7-20).

Regarding claim 12, similar reasoning as discussed in claim 2 is applied.

Regarding claim 13, Gafni in view of Holden discloses matching a second set of features associated with the first control signal to a second sequence of motions included in the plurality of sequences of motions (Gafni, column 7, line 55 - column 8, line 3: multiple input directions can be provided for any sequence of poses from the input motion capture data; Holden, Section 4, paragraph 3); generating a transition between the one or more motions included in the first sequence of motions and one or more additional motions included in the second sequence of motions (Gafni, column 6, lines 42-50: transition poses); and training the neural motion controller to generate the transition and the one or more additional motions based on the first control signal (Gafni, column 6, lines 28-54: training the neural network to generate images including transition poses).

Regarding claim 14, Gafni discloses generating the first set of features for a first set of frames associated with a first point in time within the first control signal; determining a second point in time within the first control signal that is a predetermined interval after the first point in time; and generating the second set of features for a second set of frames associated with the second point in time (Figure 4 and column 8, line 54 - column 9, line 3: direction control over a time progression).

Regarding claim 15, Gafni discloses determining a motion-based attribute associated with the first control signal (column 5, lines 4-17: trajectory of a center of mass of a character); and generating the first set of features for one or more future points in time based on the motion-based attribute and a current pose associated with the virtual character (column 8, line 54 - column 9, line 3: direction control over a time progression).

Regarding claim 17, Holden discloses wherein the first control signal is generated via a motion tracking device (Section 4, paragraph 3: capture studio having sensor devices for acquiring motion capture data).

Regarding claim 18, Gafni in view of Holden discloses wherein the first sequence of motions comprises one or more motion capture frames (Gafni, column 5, lines 4-17: input video of motion capture data at 60 fps; Holden, paragraph 3).
Regarding claim 19, Gafni discloses wherein the one or more motions comprise one or more rotations associated with a root pose for the virtual character (Figure 4: rotations of limbs of the character with changes in the direction).

Allowable Subject Matter

Claims 5, 7, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 5, the prior art does not clearly disclose the computer-implemented method of claim 3, further comprising generating the second set of features based on a change in the first control signal that exceeds a threshold.

Regarding claim 7, the prior art does not clearly disclose the computer-implemented method of claim 1, wherein matching the first set of features to the first sequence of motions comprises: computing the first set of features based on a weighted combination of multiple sets of features associated with the first control signal; and determining a match between the first set of features and a second set of features associated with the first sequence of motions.

Regarding claim 16, the prior art does not clearly disclose the one or more non-transitory computer-readable media of claim 11, wherein matching the first set of features to the first sequence of motions comprises determining a distance between the first set of features and a second set of features associated with a first motion included in the first sequence of motions.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zinno et al. (US 2024/0354581 A1) discloses training neural networks for outputting character poses.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHI HOANG whose telephone number is (571) 270-3417. The examiner can normally be reached Mon-Fri 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JASON CHAN, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHI HOANG/
Primary Examiner, Art Unit 2619
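For orientation, the rejected independent claims as characterized above recite a pipeline of roughly this shape: derive features from a control signal, match them against a database of motion sequences, and train a neural motion controller on the matched motions using a loss between its outputs and those motions (cf. claims 1 and 8). The following is a minimal illustrative sketch of that general technique, assuming a PyTorch-style setup; the network architecture, feature definitions, and placeholder data are hypothetical and are not taken from the application, from Gafni, or from Holden.

```python
# Hypothetical sketch of a feature-matching training loop of the kind recited
# in claims 1 and 8: (1) features from a control signal, (2) nearest-neighbour
# match into a motion database, (3) supervised training of a controller network.
# All shapes, modules, and data below are illustrative placeholders.
import torch
import torch.nn as nn

FEAT_DIM, POSE_DIM, SEQ_LEN, N_SEQS = 16, 32, 30, 100

class MotionController(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM),
        )

    def forward(self, control_features):
        return self.net(control_features)

# Placeholder motion database: one matching-feature vector per sequence,
# plus per-frame poses used as the supervision target.
db_features = torch.randn(N_SEQS, FEAT_DIM)
db_motions = torch.randn(N_SEQS, SEQ_LEN, POSE_DIM)

def match_sequence(control_features):
    # Nearest-neighbour match of control-signal features to a stored sequence.
    dists = torch.cdist(control_features.unsqueeze(0), db_features).squeeze(0)
    return dists.argmin().item()

controller = MotionController()
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

# One training step: features derived from a (simulated) control signal.
control_features = torch.randn(FEAT_DIM)
seq_idx = match_sequence(control_features)
target_poses = db_motions[seq_idx]                      # (SEQ_LEN, POSE_DIM)

pred = controller(control_features.expand(SEQ_LEN, -1))
loss = nn.functional.mse_loss(pred, target_poses)       # loss between outputs and matched motions
opt.zero_grad()
loss.backward()
opt.step()
```

Note that nothing in this sketch turns on how the control signal is generated (input device versus simulated control derived from video), which is exactly the distinction the §103 substitution rationale addresses.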

Prosecution Timeline

Feb 21, 2023: Application Filed
Mar 22, 2025: Non-Final Rejection (§103)
Jun 25, 2025: Response Filed
Sep 17, 2025: Final Rejection (§103)
Nov 18, 2025: Examiner Interview Summary
Nov 18, 2025: Applicant Interview (Telephonic)
Nov 19, 2025: Response after Non-Final Action
Dec 21, 2025: Non-Final Rejection (§103), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602889
METHOD AND SYSTEM OF RENDERING A 3D IMAGE FOR AUTOMATED FACIAL MORPHING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592010
NEURAL NETWORK-BASED IMAGE LIGHTING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579624
DISPLAY DEVICE AND OPERATING DRIVING THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561885
METHOD, SYSTEM, AND MEDIUM FOR ARTIFICIAL INTELLIGENCE-BASED COMPLETION OF A 3D IMAGE DURING ELECTRONIC COMMUNICATION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561866
CONTENT-SPECIFIC-PRESET EDITS FOR DIGITAL IMAGES
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview (+17.0%): 98%
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 928 resolved cases by this examiner. Grant probability is derived from the career allow rate.
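As a rough check on how the headline probability relates to the stated career data (the interview adjustment and any rounding or weighting are the tool's own model and are not reproduced here):

```python
# Career allow rate implied by the stated counts; the dashboard reports 82%.
granted, resolved = 756, 928
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 81.5%
```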
