Prosecution Insights
Last updated: April 19, 2026
Application No. 18/589,188

USING SPATIAL RELATIONSHIPS FOR ANIMATION RETARGETING

Non-Final Office Action: §102, §103

Filed: Feb 27, 2024
Examiner: DEMETER, HILINA K
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Prediction for this application:
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 1m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72% (above average) — 472 granted / 659 resolved; +9.6% vs Tech Center average
Interview Lift: +19.4% (strong), measured on resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 27 applications currently pending
Career History: 686 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 61.0% (+21.0% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates ("black line" in the original chart). Based on career data from 659 resolved cases.
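The headline figures above can be quickly sanity-checked. This is a sketch only: it assumes the dashboard rounds to the nearest percent, and that the reported +19.4% career lift is computed over the resolved-with-interview subset, so the simple difference below only approximates it.

```python
# Sanity-check the dashboard's figures (values copied from this report).
granted, resolved = 472, 659
allow_rate_pct = round(granted / resolved * 100)
print(allow_rate_pct)  # 72 -- matches the stated career allow rate

# Simple difference between this application's with-interview and baseline
# grant probabilities, in percentage points; the dashboard's +19.4% career
# lift is presumably measured differently (on its interview subset).
print(91 - 72)  # 19
```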

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 19-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dionne et al. (US Publication Number 2017/0032560 A1).

(1) Regarding claim 19: As shown in fig. 1, Dionne disclosed one or more processors (102, processor, fig. 1) comprising: processing circuitry to generate an animation of a target character in a pose corresponding to a source character based at least on adjusting one or more joints associated with the target character using one or more first sets of points associated with a first mesh of the source character and one or more second sets of points associated with a second mesh of the target character (para. [0104], note that a set of markers is identified on the mesh representation of the source animated character; para. [0099], note that markers are placed at higher densities in areas of the mesh representation that exhibit relatively complex motion and are associated with relatively complex portions of the joint hierarchy. Likewise, a set of markers is identified on the mesh representation of the target character. The source mesh representation is deformed to match the target mesh representation. The point-to-point geometric correspondence between the deformed source mesh representation and the target mesh representation is determined, linking together markers with similar semantic meanings. Also see para. [0106], note that adjustments can be made to the skinning weights of the source animated character to correct or modify the animated character. The adjusted skinning weights can be transferred to the target characters, thereby propagating the adjusted skinning weights to the relevant target).

(2) Regarding claim 20: Dionne further disclosed the one or more processors of claim 19, wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing one or more simulation operations; a system for performing one or more digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing one or more deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing one or more generative AI operations; a system for performing operations using a large language model; a system for performing one or more conversational AI operations; a system for generating synthetic data; a system associated with a gaming application; a system associated with a three-dimensional content application; a system for presenting at least one of virtual reality content, augmented reality content, or mixed reality content; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (para. [0034], note that FIG. 2 is a block diagram of a character attribute transfer system 200 that may be implemented via the computing system 100 of FIG. 1, according to various embodiments of the present invention. In some embodiments, at least a portion of the character attribute transfer system 200 may be implemented via the computing system 100 of FIG. 1).
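Dionne's point-to-point geometric correspondence can be pictured as mapping each source marker to its nearest target marker once the meshes are aligned. A minimal sketch (not Dionne's actual algorithm; the marker coordinates are made up for illustration):

```python
import numpy as np

def correspond(source_markers: np.ndarray, target_markers: np.ndarray) -> np.ndarray:
    """For each source marker, return the index of the nearest target marker."""
    # Pairwise Euclidean distances, shape (n_source, n_target).
    d = np.linalg.norm(source_markers[:, None, :] - target_markers[None, :, :], axis=-1)
    return d.argmin(axis=1)

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tgt = np.array([[1.1, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(correspond(src, tgt))  # [1 0]
```

A production correspondence would also compare marker semantics and surface normals, as the cited paragraphs suggest, rather than raw distance alone.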
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 4-6, 8, 10-12, 14-16 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dionne et al. (US Publication Number 2017/0032560 A1) in view of Starke et al. (US Publication Number 2023/0267668 A1, hereinafter "Starke").

(1) Regarding claim 1: As shown in fig. 6, Dionne disclosed a method (para. [0019], note that FIG. 6 is a flow diagram of method steps for transferring attributes from a source animated character to a target character) comprising: determining first points associated with a source character that correspond to second points associated with a target character (para. [0038], note that the geometric correspondence engine 232 establishes this geometric correspondence by generating a mapping between a set of markers on the source animated character and a set of markers on the target character); determining one or more first sets of points from the first points (para. [0099], note that at step 602, a geometric correspondence engine 232 included in a character attribute transfer system 200 identifies marker points on the mesh representation associated with a source animated character); determining one or more first vectors associated with the one or more first sets of points (para. [0094], note that the rig control includes a set of controllers and constraints that define how an animated character can pose and move. The (u, v) coordinates define positions of one or more textures that are projected onto the mesh representation, where such textures enhance the outer appearance of the animated character); and causing, based at least on the pose, a presentation of the target character (para. [0106], note that adjustments can be made to the skinning weights of the source animated character to correct or modify the animated character. The adjusted skinning weights can be transferred to the target characters, thereby propagating the adjusted skinning weights to the relevant target. Also see para. [0103], note that the rig control includes a set of controllers and constraints that define how an animated character can pose and move).

Dionne disclosed most of the subject matter as described above except for specifically teaching determining, based at least on adjusting one or more second vectors associated with one or more second sets of points from the second points to correspond with the one or more first vectors, a pose associated with the target character. However, Starke disclosed determining, based at least on adjusting one or more second vectors associated with one or more second sets of points from the second points to correspond with the one or more first vectors, a pose associated with the target character (para. [0076], note that the feature vectors may be input to the in-training neural network 800A, which outputs predicted twist vectors. Together with the axis vectors and twist vectors (e.g., to determine rotation vectors, and thus rotation data), the loss functions may determine adjustments to nodes of the in-training neural network 800A based on differences in the expected rotation data and the rotation data of the in-training neural network 800A).
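The claim-1 notion of adjusting a target vector to correspond with a source vector can be illustrated with Rodrigues' rotation formula, which rotates one vector into the direction of another. This is an editorial sketch only; neither reference discloses this exact computation, and all names are illustrative.

```python
import numpy as np

def align(v_target: np.ndarray, v_source: np.ndarray) -> np.ndarray:
    """Rotate v_target so it points along v_source, preserving its length."""
    a = v_target / np.linalg.norm(v_target)
    b = v_source / np.linalg.norm(v_source)
    axis = np.cross(a, b)                       # rotation axis (unnormalized)
    s, c = np.linalg.norm(axis), np.dot(a, b)   # sin and cos of the angle
    if s < 1e-9:                                # already parallel or opposite
        return v_target if c > 0 else -v_target
    k = axis / s
    # Rodrigues: v*cos + (k x v)*sin + k*(k.v)*(1 - cos)
    return (v_target * c
            + np.cross(k, v_target) * s
            + k * np.dot(k, v_target) * (1 - c))

print(align(np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])))  # [0. 1. 0.]
```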
At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach determining, based at least on adjusting one or more second vectors associated with one or more second sets of points from the second points to correspond with the one or more first vectors, a pose associated with the target character. The suggestion/motivation for doing so would have been in order to create animation data that are more life-like and realistic. The system can obtain a set of axis vectors for a rig of a virtual character model; obtain a twist model for the rig; input the set of axis vectors to the twist model to obtain a set of twist vectors; and determine animation data based on the set of axis vectors and the set of twist vectors (abs.). Therefore, it would have been obvious to combine Dionne with Starke to obtain the invention as specified in claim 1.

(2) Regarding claim 2: Dionne further disclosed the method of claim 1, wherein the determining a set of points from the one or more first sets of points comprises: determining a distance between a first point and a second point from the set of points (para. [0048], note that the first term (Equation 2a) minimizes the distance between the linear combination of the vertex positions and the joint position); and determining that the distance is less than a threshold distance (para. [0070], note that each vertex included in the mesh representation is associated with a set of skinning weights for a subset of the elements of the skeleton that are within a threshold distance from the vertex).

(3) Regarding claim 4: Dionne further disclosed the method of claim 1, wherein the determining that the first points correspond to the second points comprises: determining one or more first points from the first points that are associated with one or more joint tags (para. [0095], note that mesh representation 300 is populated with markers, such as markers 310, 312, and 314. The mesh representation includes an arm 320, wrist 322, and hand 324 of an animated character); determining one or more second points from the second points that are associated with the one or more joint tags (para. [0095], note that the markers on the mesh representation 300 are placed at different densities depending on the complexity of movement and corresponding bone structure of the various components of the mesh representation 300); and determining, based at least on the one or more joint tags, that the one or more first points correspond to the one or more second points (para. [0095], note that a mesh representation 300 is populated with markers, such as markers 310, 312, and 314. The mesh representation includes an arm 320, wrist 322, and hand 324, i.e. the elbow joints correspond to the wrist joints).

(4) Regarding claim 5: Dionne disclosed most of the subject matter as described above except for specifically teaching determining a first vector between the first point associated with a first mesh of the source character and a third point from the first points, the third point associated with a first joint of the source character; determining a second vector between the second point associated with a second mesh of the target character and a fourth point from the second points, the fourth point associated with a second joint of the target character that corresponds to the first joint of the source character; and determining that the first point corresponds to the second point based at least on the first vector and the second vector. However, Starke disclosed determining a first vector between the first point associated with a first mesh of the source character and a third point from the first points, the third point associated with a first joint of the source character (para. [0026], note that the neural network may output twist data for a set of joints (e.g., of a source rig) based on axis data for the set of joints. Generally, the axis data indicates (based on position data for the set of joints) a set of axis vectors); determining a second vector between the second point associated with a second mesh of the target character and a fourth point from the second points, the fourth point associated with a second joint of the target character that corresponds to the first joint of the source character (para. [0026], note that each axis vector of the set of axis vectors points from a first joint to a second joint. For instance, each axis vector of the set of axis vectors is a direction that aligns a child joint and its parent. As an example, a parent joint may have one or multiple child joints, while each child may have exactly one parent joint. In some embodiments, each axis vector may point from a parent joint (associated with the axis vector) to a child joint of the parent joint); and determining that the first point corresponds to the second point based at least on the first vector and the second vector (para. [0026], note that each twist vector of the set of twist vectors is orthogonal to a respective axis vector of a same joint. For each joint, using a twist vector and an axis vector associated with the joint, a rotation vector may be determined by calculating a cross-product thereof).
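Starke's cited axis/twist construction is concrete enough to sketch: an axis vector points from a parent joint to its child, the twist vector is orthogonal to it, and a rotation vector follows from their cross product. The joint positions below are invented for illustration.

```python
import numpy as np

parent = np.array([0.0, 0.0, 0.0])   # parent joint position (made up)
child = np.array([0.0, 1.0, 0.0])    # child joint position (made up)

axis = child - parent
axis /= np.linalg.norm(axis)         # direction aligning parent and child

twist = np.array([1.0, 0.0, 0.0])    # chosen orthogonal to the axis
assert abs(np.dot(axis, twist)) < 1e-9

rotation = np.cross(axis, twist)     # third, mutually orthogonal direction
print(rotation)                      # [ 0.  0. -1.]
```

Together the axis, twist, and rotation vectors form an orthogonal frame for the joint, which is how the reference arrives at full rotation data from two vectors per joint.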
At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach determining a first vector between the first point associated with a first mesh of the source character and a third point from the first points, the third point associated with a first joint of the source character; determining a second vector between the second point associated with a second mesh of the target character and a fourth point from the second points, the fourth point associated with a second joint of the target character that corresponds to the first joint of the source character; and determining that the first point corresponds to the second point based at least on the first vector and the second vector. The suggestion/motivation for doing so would have been in order to create animation data that are more life-like and realistic. The system can obtain a set of axis vectors for a rig of a virtual character model; obtain a twist model for the rig; input the set of axis vectors to the twist model to obtain a set of twist vectors; and determine animation data based on the set of axis vectors and the set of twist vectors (abs.). Therefore, it would have been obvious to combine Dionne with Starke to obtain the invention as specified in claim 5.

(5) Regarding claim 6: Dionne further disclosed the method of claim 1, wherein the determining the pose associated with the target character comprises: performing an optimization operation with respect to one or more degrees of freedom associated with one or more joints of the target character such that the one or more second vectors correspond to the one or more first vectors (para. [0101], note that at step 608, a skeleton transfer engine 234 included in the character attribute transfer system 200 transfers the skeleton from the source animated character to the target character based on the geometric correspondence. The skeleton transfer engine 234 calculates the target joint hierarchy via an optimization technique, while preserving the relationships between vertices in the source mesh representation and joints in the source joint hierarchy. The skeleton transfer engine 234 also performs a pose normalization procedure to ensure that joints in the target joint hierarchy and the source joint hierarchy are correctly aligned); and determining the pose associated with the target character based at least on the one or more degrees of freedom as determined during the optimization operation (para. [0094], note that the rig control includes a set of controllers and constraints that define how an animated character can pose and move. The (u, v) coordinates define positions of one or more textures that are projected onto the mesh representation, where such textures enhance the outer appearance of the animated character. This flexibility provides the freedom to design skeletons with greater or fewer joints and with a hierarchy suited to a specific animation).

(6) Regarding claim 8: Dionne further disclosed the method of claim 1, further comprising: determining one or more third sets of points between the source character and a first object (para. [0095], note that a mesh representation 300 is populated with markers, such as markers 310, 312, and 314. The mesh representation includes an arm 320, wrist 322, and hand 324 of an animated character, i.e. different sets of points are disclosed); determining one or more third vectors associated with the one or more third sets of points (para. [0099], note that the geometric correspondence engine 232 identifies marker points on the mesh representation associated with a target character).
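Claim 6's idea of optimizing over a joint's degrees of freedom until target vectors correspond to source vectors can be shown with a toy one-degree-of-freedom example: a 2D joint angle adjusted by gradient descent until its bone vector matches a desired source direction. This is purely illustrative; real retargeting optimizes many joints under many constraints.

```python
import math

src = (0.0, 1.0)                  # desired bone direction (unit vector)

def bone(theta):
    """Bone direction produced by a single rotational degree of freedom."""
    return (math.cos(theta), math.sin(theta))

def loss(theta):
    bx, by = bone(theta)
    return (bx - src[0]) ** 2 + (by - src[1]) ** 2

theta, lr, eps = 0.1, 0.5, 1e-6
for _ in range(200):
    # Central-difference numeric gradient of the mismatch loss.
    g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * g

print(round(theta, 3))  # 1.571, i.e. pi/2: the bone now points along src
```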
Dionne disclosed most of the subject matter as described above except for specifically teaching determining, based at least on the one or more third sets of points, one or more fourth sets of points between the target character and a second object that corresponds to the first object, wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with the one or more fourth sets of points to correspond with the one or more third vectors. However, Starke disclosed determining, based at least on the one or more third sets of points, one or more fourth sets of points between the target character and a second object that corresponds to the first object (para. [0067], note that the axis data indicates (based on position data for the set of joints) a set of axis vectors. Each axis vector of the set of axis vectors points from a first joint to a second joint. For instance, each axis vector of the set of axis vectors is a direction that aligns a child joint and its parent), wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with the one or more fourth sets of points to correspond with the one or more third vectors (para. [0067], note that each twist vector of the set of twist vectors is orthogonal to a respective axis vector of a same joint. For each joint, using a twist vector and an axis vector associated with the joint, a rotation vector may be determined (e.g., by taking a cross-product thereof). Therefore, for each joint, a 6 degree of freedom set of data may be determined (3 cartesian positions, 3 orientations) and a skinned mesh of a character may be generated that is realistic and moves smoothly (e.g., does not change rapidly based on small changes in position). In this manner, systems and methods of the disclosure may determine smooth movement of a character by avoiding oscillations around joints and unnatural poses).

At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach determining, based at least on the one or more third sets of points, one or more fourth sets of points between the target character and a second object that corresponds to the first object, wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with the one or more fourth sets of points to correspond with the one or more third vectors. The suggestion/motivation for doing so would have been in order to determine smooth movement of a character by avoiding oscillations around joints and unnatural poses (para. [0067]). Therefore, it would have been obvious to combine Dionne with Starke to obtain the invention as specified in claim 8.

(7) Regarding claim 10: Dionne further disclosed the method of claim 1, further comprising: determining one or more third sets of points from the first points that are associated with joints of the source character (para. [0038], note that the geometric correspondence engine 232 establishes this geometric correspondence by generating a mapping between a set of markers on the source animated character and a set of markers on the target character); determining one or more third vectors associated with the one or more third sets of points (para. [0043], note that when computing the closest location for a specific vertex, the geometric correspondence engine 232 inspects the normal vectors of each candidate polygon. The geometric correspondence engine 232 rejects the candidate polygon if the angle with the vertex normal is greater than 90°).

Dionne disclosed most of the subject matter as described above except for specifically teaching determining one or more fourth sets of points from the second points that correspond to the one or more third sets of points, wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with one or more fourth sets of points to correspond with the one or more third vectors. However, Starke disclosed determining one or more fourth sets of points from the second points that correspond to the one or more third sets of points, wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with one or more fourth sets of points to correspond with the one or more third vectors (para. [0076], note that the position data of the rigs may be used to determine axis vectors of the rigs, which may be used to generate feature vectors. The feature vectors may be input to the in-training neural network 800A, which outputs predicted twist vectors. Together with the axis vectors and twist vectors (e.g., to determine rotation vectors, and thus rotation data), the loss functions may determine adjustments to nodes of the in-training neural network 800A based on differences in the expected rotation data and the rotation data of the in-training neural network 800A). At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach determining one or more fourth sets of points from the second points that correspond to the one or more third sets of points, wherein the determining the pose associated with the target character is further based at least on adjusting one or more fourth vectors associated with one or more fourth sets of points to correspond with the one or more third vectors.
The suggestion/motivation for doing so would have been in order to determine smooth movement of a character by avoiding oscillations around joints and unnatural poses (para. [0067]). Therefore, it would have been obvious to combine Dionne with Starke to obtain the invention as specified in claim 10.

(8) Regarding claim 18: Dionne further disclosed the system of claim 11, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing one or more simulation operations; a system for performing one or more digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing one or more deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing one or more generative AI operations; a system for performing operations using a large language model; a system for performing one or more conversational AI operations; a system for generating synthetic data; a system associated with a gaming application; a system associated with a three-dimensional content application; a system for presenting at least one of virtual reality content, augmented reality content, or mixed reality content; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (para. [0034], note that FIG. 2 is a block diagram of a character attribute transfer system 200 that may be implemented via the computing system 100 of FIG. 1, according to various embodiments of the present invention. In some embodiments, at least a portion of the character attribute transfer system 200 may be implemented via the computing system 100 of FIG. 1).
The proposed rejection of claims 1-2, 4, 6 and 8 renders obvious the steps of system claims 11-12, 14-15 and 16 (system of fig. 2, 202 processors) because these steps occur in the operation of the proposed rejection as discussed above. Thus, the arguments similar to those presented above for claims 1-2, 4, 6 and 8 are equally applicable to claims 11-12, 14-15 and 16.

Claim(s) 3, 7, 9, 13 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dionne and Starke, further in view of Villegas et al. (US Publication Number 2023/0037339 A1, hereinafter "Villegas").

(1) Regarding claim 3: Dionne disclosed most of the subject matter as described above except for specifically teaching wherein the determining a set of points from the one or more first sets of points comprises: determining a first distance between a first point and a second point from the set of points when the source character is at a current pose; determining a second distance between the first point and the second point when the source character is at a set pose; determining a third distance based at least on the first distance and the second distance; and determining that the third distance is less than a threshold distance. However, Villegas disclosed determining a first distance between a first point and a second point from the set of points when the source character is at a current pose (para. [0049], note that the source object 126 is a computer-generated character model that is positioned in a reference pose (e.g., a T-pose). In additional or alternative aspects, the source object 126 includes data describing a real-world entity that is captured in motion (e.g., video footage of a human), or other visual or graphical objects that are associated with source motion); determining a second distance between the first point and the second point when the source character is at a set pose (para. [0077], note that the skeleton 302 is animated over a period of time that includes multiple time frames. Animation of the skeleton 302 includes different poses 308, including pose 308a, pose 308b, through pose 308n. Each particular pose of the poses 308 corresponds to a particular frame t from the multiple time frames, each frame representing a particular time during the period of time); determining a third distance based at least on the first distance and the second distance (para. [0093], note that the motion retargeting system 110 further defines a local distance field (e.g., a local 3D distance field) for each triangle in the form of a cone); and determining that the third distance is less than a threshold distance (para. [0095], note that the input set of foot contact constraints are calculated by identifying toe or heel joints (e.g., from the source motion 120) that are within a threshold distance from the ground surface).

At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach wherein the determining a set of points from the one or more first sets of points comprises: determining a first distance between a first point and a second point from the set of points when the source character is at a current pose; determining a second distance between the first point and the second point when the source character is at a set pose; determining a third distance based at least on the first distance and the second distance; and determining that the third distance is less than a threshold distance. The suggestion/motivation for doing so would have been in order to provide the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object (abs.). Therefore, it would have been obvious to combine Dionne and Starke with Villegas to obtain the invention as specified in claim 3.
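Claim 3's three-distance test is simple enough to sketch: compare the distance between two points at the character's current pose against the same distance at a set (e.g. rest) pose, and check the change against a threshold. The function name, 2D points, and threshold below are hypothetical.

```python
import math

def points_move_together(p_cur, q_cur, p_set, q_set, threshold=0.05):
    """True if the distance between two points barely changes between poses."""
    d_current = math.dist(p_cur, q_cur)   # first distance (current pose)
    d_set = math.dist(p_set, q_set)       # second distance (set pose)
    d_change = abs(d_current - d_set)     # third distance
    return d_change < threshold

print(points_move_together((0, 0), (1, 0), (0, 0.01), (1, 0)))  # True
```

Points that pass such a test behave as a rigid group, which is one plausible way to pick out a coherent "set of points" as the claim describes.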
(2) Regarding claim 7: Dionne disclosed most of the subject matter as described above except for specifically teaching wherein the optimization operation includes gradient descent. However, Villegas disclosed wherein the optimization operation includes gradient descent (para. [0107], note that the optimizer engine 636 performs a quantity (e.g., N=30) of iterative gradient descent updates to hidden encoding units of the encoded states 622). At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach wherein the optimization operation includes gradient descent. The suggestion/motivation for doing so would have been in order to provide the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object (abs.). Therefore, it would have been obvious to combine Dionne and Starke with Villegas to obtain the invention as specified in claim 7.

(3) Regarding claim 9: Dionne disclosed most of the subject matter as described above except for specifically teaching determining that a first point from the first points is static between a first frame and a second frame, wherein the first point is associated with a foot of the source character; determining a first location of a second point from the second points at the first frame, the second point corresponding to the first point; and determining a second location of the second point at a second frame, wherein the determining the pose associated with the target character is further based at least on the first location and the second location. However, Villegas disclosed determining that a first point from the first points is static between a first frame and a second frame, wherein the first point is associated with a foot of the source character (para. [0025], note that the contact-aware motion retargeting system identifies self-contacts, such as placement of hands on the body, and foot contacts, such as placement of feet on the ground); determining a first location of a second point from the second points at the first frame, the second point corresponding to the first point (para. [0025], note that the contact-aware motion retargeting system employs an energy function that preserves these self-contacts and foot contacts in the output motion, while reducing any self-penetrations in the output. Further, the contact-aware motion retargeting system uses self-contacts from the character's geometry and foot contacts from the character skeleton, in order to ensure that the self-contacts and foot contacts will be accurately transferred in the rendered); and determining a second location of the second point at a second frame, wherein the determining the pose associated with the target character is further based at least on the first location and the second location (para. [0045], note that the self-contact detection engine 117 identifies foot-ground contact constraints, such as a source vertex of a foot (e.g., a heel joint in a source skeleton, a foot vertex in a source geometry) that is within a threshold distance of a ground surface (e.g., in a virtual environment)). At the time of filing for the invention, it would have been obvious to a person of ordinary skill in the art to teach determining that a first point from the first points is static between a first frame and a second frame, wherein the first point is associated with a foot of the source character; determining a first location of a second point from the second points at the first frame, the second point corresponding to the first point; and determining a second location of the second point at a second frame, wherein the determining the pose associated with the target character is further based at least on the first location and the second location.
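The foot-contact test at issue in claim 9 combines two checks: a foot point is "planted" when it stays static between two frames and sits within a threshold distance of the ground. A hedged sketch (the ground plane at y = 0, the tolerances, and the function name are all assumptions for illustration):

```python
import math

def is_planted(p_frame1, p_frame2, ground_y=0.0,
               motion_tol=1e-3, ground_tol=0.02):
    """True if a foot point is static across frames and near the ground."""
    static = math.dist(p_frame1, p_frame2) < motion_tol
    on_ground = abs(p_frame1[1] - ground_y) < ground_tol
    return static and on_ground

print(is_planted((0.3, 0.01, 0.0), (0.3, 0.01, 0.0)))  # True: still, on ground
print(is_planted((0.3, 0.50, 0.0), (0.3, 0.45, 0.0)))  # False: moving, in air
```

Frames where this holds become constraints that the retargeted pose must preserve, which is why the first and second locations of the corresponding target point matter to the pose determination.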
The suggestion/motivation for doing so would have been in order to provide the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object (abs.). Therefore, it would have been obvious to combine Dionne and Starke with Villelgas to obtain the invention as specified in claim 9. The proposed rejection of claims 3 and 9, renders obvious the steps of the system claims 13 and 17 (system of fig. 2, 202 processors) because these steps occur in the operation of the proposed rejection as discussed above. Thus, the arguments similar to that presented above for claims 3 and 9 are equally applicable to claims 13 and 17. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Taylor (US Publication Number 2020/0293881 A1) disclosed a method for training an animation character, including mapping first animation data defining a first motion sequence to a first subset of bones of a trained character, and mapping second animation data defining a second motion sequence to a second subset of bones. Jin et al. (NPL, “Aura Mesh: Motion Retargeting to Preserve the Spatial Relationship between Skinned Characters”, 2018) disclosed retarget an interaction motion to various skinned characters while preserving its interaction semantics. Liu et al. (NPL, “Surface based Motion Retargeting by Preserving Spatial Relationship”, 2018) disclosed context-aware motion retargeting framework, based on deforming a target character to mimic a source character pose using harmonic mapping. Any inquiry concerning this communication or earlier communication from the examiner should be directed to Hilina K Demeter whose telephone number is (571) 270-1676. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y. Poon could be reached at (571) 270- 0728. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HILINA K DEMETER/
Primary Examiner, Art Unit 2617
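For orientation, the two techniques the examiner maps to claims 7 and 9 can be sketched together: detecting a foot contact as a point that is near the ground and static between two frames, then using iterative gradient descent (cf. the N=30 updates cited from Villegas para. [0107]) to pull the target character's corresponding point onto the contact. This is a minimal toy illustration only; all function names, thresholds, and values are hypothetical and do not come from the application or the cited references.

```python
# Hypothetical sketch of foot-contact detection plus gradient-descent
# adjustment; not code from the application or the cited prior art.

GROUND_THRESHOLD = 0.05   # max height (y-up) to count as touching the ground
STATIC_THRESHOLD = 0.01   # max frame-to-frame motion to count as static

def is_foot_contact(pos_frame1, pos_frame2):
    """A source foot point is treated as a contact if it is near the
    ground and approximately static between two consecutive frames."""
    moved = sum((a - b) ** 2 for a, b in zip(pos_frame1, pos_frame2)) ** 0.5
    near_ground = pos_frame1[1] <= GROUND_THRESHOLD
    return near_ground and moved <= STATIC_THRESHOLD

def retarget_foot_height(target_y, contact_y=0.0, lr=0.5, n_iters=30):
    """Minimize the squared gap between the target foot height and the
    detected contact height with a fixed number of gradient-descent steps."""
    y = target_y
    for _ in range(n_iters):
        grad = 2.0 * (y - contact_y)   # d/dy of (y - contact_y)**2
        y -= lr * grad
    return y

# The source foot barely moves and sits on the ground, so a contact is
# detected and the target foot is optimized down onto the ground plane.
if is_foot_contact((0.3, 0.02, 0.1), (0.3, 0.02, 0.105)):
    adjusted = retarget_foot_height(target_y=0.25)
```

In a real retargeting pipeline the optimized variable would be the joint parameters of the target skeleton rather than a single height value, but the constraint structure (detect a static contact, then minimize an energy that preserves it) is the same.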

Prosecution Timeline

Feb 27, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602864
EVENT ROUTING IN 3D GRAPHICAL ENVIRONMENTS
2y 5m to grant · Granted Apr 14, 2026
Patent 12592042
SYSTEMS AND METHODS FOR MAINTAINING SECURITY OF VIRTUAL OBJECTS IN A DISTRIBUTED NETWORK
2y 5m to grant · Granted Mar 31, 2026
Patent 12586297
INTERACTIVE IMAGE GENERATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12579724
EXPRESSION GENERATION METHOD AND APPARATUS, DEVICE, AND MEDIUM
2y 5m to grant · Granted Mar 17, 2026
Patent 12561906
METHOD FOR GENERATING AT LEAST ONE GROUND TRUTH FROM A BIRD'S EYE VIEW
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
91%
With Interview (+19.4%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 659 resolved cases by this examiner. Grant probability derived from career allow rate.
