Prosecution Insights
Last updated: April 19, 2026
Application No. 18/771,451

RUNTIME MOTION ADAPTATION FOR PRECISE CHARACTER LOCOMOTION

Non-Final OA §103
Filed: Jul 12, 2024
Examiner: ZHAI, KYLE
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: ETH ZÜRICH
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% — above average (353 granted / 473 resolved; +12.6% vs TC avg)
Interview Lift: +18.6% — resolved cases with an interview are allowed at a strongly higher rate than cases without one
Typical Timeline: 3y 0m average prosecution; 31 applications currently pending
Career History: 504 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 473 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8, 11-12 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607, further in view of Song et al. (Path planning directed motion control of virtual humans in complex environments, Journal of Visual Languages and Computing, 2014).

Regarding claim 1, Teng et al. (hereinafter Teng) discloses a computer-implemented method for performing automated motion adaptation (Teng, [0001], “a method for generating motion synthesis data and a device for generating motion synthesis data”; [0032], “A flow-chart of a method for generating motion synthesis data”; [0033], “the system can automatically calculate a synthesized path SP2 that avoids this obstacle”), the method comprising: generating a set of one or more motion sequences based on motion capture data (Teng, [0005], “Data-driven motion synthesis methods are used to generate novel motions based on existing motion capture data. Motion graphs based methods are a set of methods that can synthesize novel motions from motion capture database.” Motion capture data is processed into motion sequences, and the motion graphs can then generate additional motion sequences by combining or transitioning between original sequences); and adapting the selected motion sequence such that the adapted motion sequence, when applied to an animation character model (Teng, [0005], “FIG. 1 illustrates the principle idea of motion graphs. In FIG. 1, nodes 1,..., 8 represent short motion clips, and directed edges between the nodes indicate the transition information between motion clips. Novel motions are synthesized from motion graphs using the ‘depth first search’ algorithm according to optimization rules”; [0033], “FIG. 6 shows exemplary applications of the invention. In Path fitting (FIG. 6 a), a system according to the invention searches for a given path GP the motion graphs and generates synthetic motion by concatenating motion segment nodes to fit the path SP1.” The depth-first search with optimization rules modifies and combines existing motion sequences to produce new motion sequences).

Teng does not expressly disclose “selecting one of the set of motion sequences based on a score function value.” JP7265607 discloses selecting a motion sequence based on a score function value (JP7265607, “A scored similarity score is calculated. In one example, the motion sequence determination unit 25 calculates similarity scores of all indexes (all motion sequences) in the motion DB” and “the motion sequence determination unit 25 determines the motion sequence associated with the index with the highest average value”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the motion sequence generation system of Teng by incorporating the scoring and selection technique taught by JP7265607, in which a score corresponding to each motion sequence is calculated and the motion sequence associated with the highest value is selected. The motivation for doing so would have been providing smoother and more realistic character animation.
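For illustration only, here is a minimal Python sketch of the score-and-select step JP7265607 describes: compute a score for every candidate motion sequence and keep the one with the highest value. The names (MotionSequence, similarity_score, select_best_sequence) and the particular score formula are hypothetical assumptions, not taken from any cited reference.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class MotionSequence:
    """A candidate motion clip: per-frame root positions, shape (num_frames, 3)."""
    name: str
    root_positions: np.ndarray

def similarity_score(seq: MotionSequence, goal: np.ndarray) -> float:
    """Hypothetical score function: higher when the clip's endpoint lies nearer the goal."""
    end_error = float(np.linalg.norm(seq.root_positions[-1] - goal))
    return 1.0 / (1.0 + end_error)

def select_best_sequence(candidates: list[MotionSequence], goal: np.ndarray) -> MotionSequence:
    # Score every candidate and keep the one with the highest score value,
    # analogous to picking "the index with the highest average value".
    return max(candidates, key=lambda seq: similarity_score(seq, goal))
```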
In addition, Teng as modified by JP7265607 does not expressly disclose “the animation character model to move from a specified starting position associated with the animation character to a specified goal position associated with the animation character.” Song et al. discloses an animation character model that moves from a specified starting position associated with the animation character to a specified goal position associated with the animation character (Song, Fig. 1 illustrates the coordinates of start and goal points, and Fig. 9 illustrates an example complex environment with wooden boxes, a small creek and three dynamic solid spheres; the bottom-left point in green is the initial point, the right point in blue is the first target point, and the other point in green is the target point). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual character motion of Teng using the motion synthesis techniques of Song, which apply optimal path planning to direct motion synthesis. The motivation for doing so would have been generating realistic character motions in response to complex dynamic environments.
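As an illustrative aside (a common technique assumed for this sketch, not taken from Teng, JP7265607, or Song): precise start-to-goal locomotion is often achieved by warping a clip's root motion so that its final frame lands exactly on the goal position. All names below are hypothetical.

```python
import numpy as np

def warp_root_motion(root_positions: np.ndarray,
                     start: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Translate a clip's root trajectory to `start`, then distribute the
    residual endpoint error linearly over the frames so the last frame
    lands exactly on `goal`. `root_positions` has shape (num_frames, 3)."""
    shifted = root_positions - root_positions[0] + start  # begin at the start position
    error = goal - shifted[-1]                            # leftover endpoint error
    blend = np.linspace(0.0, 1.0, len(shifted))[:, None]  # 0 at first frame, 1 at last
    return shifted + blend * error                        # absorb the error smoothly
```

Spreading the correction across all frames is one simple design choice; it avoids a visible pop at the final frame.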
Regarding claim 2, Teng discloses one or more recordings based on movements of a human actor (Teng, [0003], “motion capture based approaches are more popular: this kind of methods records the motion of a live character, and then plays the animation faithfully and accurate”).

Regarding claim 8, Teng teaches the adapted motion sequence, and Teng as modified by JP7265607 and Song, with the same motivation as in claim 1, discloses avoiding foot collisions (Song, Fig. 8: “When virtual human is about to colliding with the ball, it walks into the first gap and moves on. (h)–(i) The ball on the right moves to the left. When virtual human is about to colliding with the ball, it moves into the second gap until the ball passed and moves to the target point (j)”).

Regarding claim 11, Teng discloses one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps (Teng, [0069], “the device further comprises memory means 131 and one or more memory control means 132 for generating a motion database and storing the transition point data in a motion database”; [0072], “Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features may, where appropriate be implemented in hardware, software, or a combination of the two”). The steps recited in claim 11 are similar in scope to the method recited in claim 1 and are therefore rejected under the same rationale.

Regarding claim 12, claim 12 recites instructions similar in scope to the method step recited in claim 2 and is therefore rejected under the same rationale.

Regarding claim 17, Teng discloses a system (Teng, [0031], “This invention proposes a novel motion synthesis system”; [0069], cited above) comprising: one or more memories storing instructions (Teng, [0069] and [0072], cited above); and one or more processors for executing the instructions (Teng, [0069], cited above; the instructions can be executed by a device including a processor). The instructions recited in claim 17 are similar in scope to the method recited in claim 1 and are therefore rejected under the same rationale.

Regarding claim 18, claim 18 recites instructions similar in scope to the method step recited in claim 2 and is therefore rejected under the same rationale.

Claims 3, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claims 2, 12 and 18, and further in view of Arenas-Mena et al. (A Motion Capture based Planner for Virtual Characters Navigating in 3D Environments, Computación y Sistemas, 2012).

Regarding claim 3, Teng teaches a motion capture of the human actor (Teng, [0003], cited above). Teng as modified by JP7265607 and Song does not expressly disclose “plan prescribes a starting position, ending position, and ending orientation.” Arenas-Mena et al. (hereinafter Arenas-Mena) discloses a plan that prescribes a starting position, ending position, and ending orientation (Arenas-Mena, Section 1, Introduction: “The planner takes as inputs the initial and final positions and orientations of the character as well”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to determine the motion capture sequences of Teng using the concept of Arenas-Mena, which prescribes a starting position, ending position and ending orientation. The motivation for doing so would have been ensuring that the resulting motion reaches a desired final pose.

Claims 13 and 19 recite instructions similar in scope to the method step recited in claim 3 and are therefore rejected under the same rationale.
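Purely for illustration (hypothetical names, not from Arenas-Mena), such a plan, prescribing a starting position, an ending position, and an ending orientation, can be represented as a small structure:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class LocomotionPlan:
    """Planner inputs of the kind described: where the character starts,
    where it must end, and which way it must face on arrival."""
    start_position: np.ndarray   # (x, y, z) world coordinates
    end_position: np.ndarray     # (x, y, z) world coordinates
    end_orientation: float       # goal facing angle (radians about the up axis)
```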
Claims 4, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claims 1, 11 and 17, and further in view of Girard (US 2011/0012903).

Regarding claim 4, Teng teaches the set of motion sequences. Teng as modified by JP7265607 and Song does not expressly disclose “begins at a same origin location.” Girard discloses motion sequences that begin at a same origin location (Girard, [0042], “motion clips in a motion space are processed to include the same number of locomotion cycles and to have matching first and last frames.” The first frame of each motion clip is aligned to a common reference position, which reads on the origin location). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the concept of Girard, which defines a common reference position, to the motion sequences of Teng. The motivation for doing so would have been enabling accurate blending and planning of motion sequences.

Claims 14 and 20 recite instructions similar in scope to the method step recited in claim 4 and are therefore rejected under the same rationale.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claims 1 and 11, and further in view of Zinno (US 9,827,496).

Regarding claim 5, Teng teaches a motion sequence and the animation character, and Teng as modified by JP7265607 and Song teaches an ending position. Teng as modified by JP7265607 and Song does not expressly disclose “a distance between the ending position associated with the motion sequence and the specified goal position associated with the animation character.” Zinno discloses a distance between a reference goal feature of the target pose and the corresponding feature in the motion capture frame (Zinno, col. 11, lines 50-53, “calculating a distance between each reference feature of the target pose and of the corresponding reference feature in a motion capture frame”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the scoring function of Teng as modified by JP7265607 using the concept of Zinno, which calculates the distance between the reference feature and the corresponding feature in the motion capture frame. The motivation for doing so would have been ensuring accurate spatial alignment.

Regarding claim 15, claim 15 recites instructions similar in scope to the method step recited in claim 5 and is therefore rejected under the same rationale.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claims 5 and 15, and further in view of Kamyshanska et al. (US 2012/0209770).

Regarding claim 6, Teng teaches the motion sequence and the animation character, and Teng as modified by JP7265607 and Song teaches an ending orientation (Song, Figs. 9-12; a final position includes an orientation). Teng as modified by JP7265607 and Song does not expressly disclose “an angle between the ending orientation associated with the motion sequence and a specified goal orientation associated with the animation character.” Kamyshanska et al. (hereinafter Kamyshanska) discloses an angle between an orientation associated with a body and a specified desired orientation (Kamyshanska, [0226], “Angle α, which defines the difference between the orientation of the body part of the user, and the body part of the reference user,” with α representing the difference between current and desired spatial orientations). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the scoring function of Teng as modified by JP7265607 using the concept of Kamyshanska, which measures an angle between an orientation associated with a body and a specified desired orientation. The motivation for doing so would have been facilitating accurate motion selection based on orientation.

Regarding claim 16, claim 16 recites instructions similar in scope to the method step recited in claim 6 and is therefore rejected under the same rationale.
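To make the two score terms above concrete, here is a hedged sketch of a score function that penalizes both the endpoint distance (the claim 5 style term) and the orientation error (the claim 6 style term). The weights and names (goal_score, w_pos, w_rot) are assumptions for illustration, not from Zinno or Kamyshanska:

```python
import numpy as np

def angle_difference(a: float, b: float) -> float:
    """Smallest signed difference between two angles, in radians."""
    return (a - b + np.pi) % (2.0 * np.pi) - np.pi

def goal_score(end_position: np.ndarray, end_orientation: float,
               goal_position: np.ndarray, goal_orientation: float,
               w_pos: float = 1.0, w_rot: float = 0.5) -> float:
    """Higher is better: penalize the distance between the sequence's ending
    position and the goal position, and the angle between the ending
    orientation and the goal orientation."""
    position_term = float(np.linalg.norm(end_position - goal_position))
    orientation_term = abs(angle_difference(end_orientation, goal_orientation))
    return -(w_pos * position_term + w_rot * orientation_term)
```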
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claim 1, and further in view of Starke et al. (US 11,562,523).

Regarding claim 7, Teng teaches analyzing each frame included in the motion capture data (Teng, [0014], “transforming the motion frames to standard coordinates for each frame of the motion clips, separating means for separating high-frequency motion data of the motion frames from low-frequency motion data of the motion frames”). Teng as modified by JP7265607 and Song does not expressly disclose “velocities associated with one or more bones included in the motion capture data.” Starke et al. (hereinafter Starke) discloses velocities associated with one or more bones included in motion capture data (Starke, col. 6, lines 3-6, “The two dimensional phase vector encodes characteristics of the local bone phase such as position, velocity, orientation, acceleration, and other characteristics”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include the velocities, taught by Starke, associated with one or more bones in the motion capture data of Teng. The motivation for doing so would have been improving the realism of motion capture data.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claim 1, and further in view of Aksit et al. (US 2022/0301262).

Regarding claim 9, Teng teaches the animation character model. Teng as modified by JP7265607 and Song does not expressly disclose “multiplying one or more adaptation matrices by one or more bone matrices associated with the animation character model.” Aksit et al. (hereinafter Aksit) discloses multiplying one or more adaptation matrices by one or more bone matrices associated with an animation character model (Aksit, [0091], “Linear blend skinning is the concept of transforming vertices inside a single mesh by a (blend) of multiple transforms. Each transform is the concatenation of a ‘bind matrix’ that takes the vertex into the local space of a given ‘bone’ and a transformation matrix that moves from that bone's local space to a new position.” Linear blend skinning uses matrix multiplication to combine bone transforms with per-vertex transforms, which reads on multiplying one or more adaptation matrices by one or more bone matrices). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to determine the motion sequences of Teng using the concatenation of matrices taught by Aksit. The motivation for doing so would have been enabling accurate transformation of motion sequences.
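As a concrete illustration of the matrix concatenation described above, here is a linear-blend-skinning style sketch in which each bone matrix is pre-multiplied by an adaptation matrix before vertices are skinned. The shapes and names are assumptions for illustration, not from Aksit:

```python
import numpy as np

def adapt_bone_matrices(adaptation: np.ndarray, bones: np.ndarray) -> np.ndarray:
    """Left-multiply one 4x4 adaptation matrix onto each bone's 4x4 world
    matrix; `bones` has shape (num_bones, 4, 4)."""
    return np.einsum('ij,bjk->bik', adaptation, bones)

def skin_vertex(vertex: np.ndarray, weights: np.ndarray,
                inverse_bind: np.ndarray, adapted_bones: np.ndarray) -> np.ndarray:
    """Linear blend skinning of one homogeneous vertex (shape (4,)): blend the
    per-bone transforms (adapted bone matrix @ inverse bind matrix) by weight."""
    blended = sum(w * (m @ b) for w, m, b in zip(weights, adapted_bones, inverse_bind))
    return blended @ vertex
```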
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Teng et al. (US 2013/0300751) in view of JP7265607 and Song et al., as applied to claim 1, and further in view of Miller, IV (US 2021/0166459).

Regarding claim 10, Teng teaches motion sequences, and Teng as modified by JP7265607 and Song teaches beginning and ending positions (Song, Fig. 9). Teng as modified by JP7265607 and Song does not expressly disclose “an idle position.” Miller, IV (hereinafter Miller) discloses a sequence that begins and ends in an idle position (Miller, [0183], “a blend point BP1 can represent the avatar before and after Idle, because the animation clip starts and ends in the same state.” The blend point at the start corresponds to the idle pose, so the sequence begins in idle; by returning to the same state at the end, the sequence ends in idle). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the concept of Miller's idle state to the motion sequences of Teng. The motivation for doing so would have been ensuring smooth transitions between motions.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ZHAI, whose telephone number is (571) 270-3740. The examiner can normally be reached 9AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ke Xiao, can be reached at (571) 272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE ZHAI/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jul 12, 2024 — Application Filed
Mar 13, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602879 — METHOD AND DEVICE FOR PROVIDING SURGICAL GUIDE USING AUGMENTED REALITY — granted Apr 14, 2026 (2y 5m to grant)
Patent 12594123 — VIRTUAL REALITY SYSTEM WITH CUSTOMIZABLE OPERATION ROOM — granted Apr 07, 2026 (2y 5m to grant)
Patent 12590811 — METHOD, APPARATUS, AND PROGRAM FOR PROVIDING IMAGE-BASED DRIVING ASSISTANCE GUIDANCE IN WEARABLE HELMET — granted Mar 31, 2026 (2y 5m to grant)
Patent 12573162 — MODELLING METHOD FOR MAKING A VIRTUAL MODEL OF A USER'S HEAD — granted Mar 10, 2026 (2y 5m to grant)
Patent 12566580 — HOLOGRAPHIC PROJECTION SYSTEM, METHOD FOR PROCESSING HOLOGRAPHIC PROJECTION IMAGE, AND RELATED APPARATUS — granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
Grant Probability with Interview: 93% (+18.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 473 resolved cases by this examiner. Grant probability derived from career allow rate.
