Prosecution Insights
Last updated: April 19, 2026
Application No. 17/994,675

HUMAN MODEL RECOVERY BASED ON VIDEO SEQUENCES

Final Rejection §103
Filed: Nov 28, 2022
Examiner: TERRELL, EMILY C
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Intelligence Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 94%

Examiner Intelligence

Grants 59% of resolved cases.
Career Allow Rate: 59% (316 granted / 537 resolved; -3.2% vs TC avg)
Interview Lift: +35.4% (strong lift, measured over resolved cases with an interview)
Typical Timeline: 2y 8m avg prosecution; 18 currently pending
Career History: 555 total applications across all art units

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)
Based on career data from 537 resolved cases; deltas are against the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment received on 08/21/2025 has been entered and made of record. Claims 1, 4-12 and 15-20 are now pending.

Response to Arguments/Remarks

Applicant's arguments filed on 08/21/2025 with respect to claim 1 have been fully considered but they are not persuasive. Applicant argues that "HUELSDUNK merely discusses the determination of a present pose, and once that pose is determined, HUELSDUNK never contemplates going back and adjusting it based on another subset of images, much less to add a body part that was missing to the pose".

Examiner's response: HUELSDUNK discloses (e.g., Fig. 1 and pg. [0113]) tracking all joints (i.e., pose) of a person in a video stream. The very act of tracking the joints of a person involves adjusting the locations of the joints over time based on incoming new video frames. It is noted that claim 1 does not require "going back and adjusting" the initial pose/shape. In fact, the Examiner believes the "going back and adjusting" feature is not supported by the original disclosure. Fig. 3 of the instant application shows that two separate sets of video sequence images (302 and 304) are input to neural network(s) to obtain two separate sets of pose/shape parameters (308/309 and 312/314) and to generate their corresponding human models (316 and 318). According to Fig. 3, the second set of images is never used to adjust the first pose/shape parameters or the first human model. In addition, HUELSDUNK discloses determining a skeletal representation of a person (Fig. 4) that represents both the pose (i.e., "the 3D absolute location of each (or some) of the person's joints", see pg. [0032]) and the shape (i.e., figure or build) of the person. The model of a human is updated over time and used to determine certain events/states (pg. [0228]-[0229]). In view of this reasonable interpretation of the claims and the prior art, the Examiner respectfully submits that the rejections set forth below are proper.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 10-12, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over HUELSDUNK et al. (US 2021/0192783) in view of Morris et al. (US 2021/0059565), hereafter referred to as "HUELSDUNK" and "Morris", respectively.

Regarding claim 1, HUELSDUNK discloses an apparatus, comprising: one or more processors (pg. [0030]) configured to: obtain a video sequence depicting movements of a person (Fig. 1, operation 5 and pg. [0030], e.g., video of a person ("articulated object") catching a ball (non-articulated object)); determine an initial two-dimensional (2D) or three-dimensional (3D) representation of the person based on at least a first subset of images from the video sequence, wherein the initial 2D or 3D representation of the person represents a first pose and a first body shape of the person (Fig. 1, operations 7&10 and pg. [0109]-[0111]: estimate the 2D pose and derive an estimation of the relative 3D joint locations of the person; the skeletal model as shown in Fig. 4 represents both a pose and a shape of a human body; also see pg. [0228]-[0229]: the model of a human is updated over time and used to determine certain events/states); and adjust the initial 2D or 3D representation of the person based on at least a second subset of images from the video sequence (please refer to the "Response to Arguments/Remarks" section; Fig. 1, operations 20&40 and pg. [0113]-[0120], [0134]-[0138], "use object tracking or re-identification to track or recognize objects over time"; tracking the person's handling of the ball results in adjustment to the positions of the joints of the person).

HUELSDUNK does not expressly disclose using the method in a medical environment, or that the adjusted 2D or 3D representation of the person includes a representation of a body part of the person that is missing from the initial 2D or 3D representation of the person. However, HUELSDUNK's method is clearly generic in nature and can be used in any application that needs to determine the absolute 3D locations of a person's joints (HUELSDUNK, Figs. 1&4), including medical applications, as for example shown in Morris (Figs. 1&4). Morris's system performs video-based tracking of a patient moving in a medical environment to obtain gait metrics for the patient. In addition, given that HUELSDUNK's system is used to track a person playing with a ball, there certainly will be times when some part of the body (e.g., a hand) is blocked from view (e.g., by the ball, by the torso when the person's back is turned toward the camera, and/or by other obstacles/people). The same could apply to Morris's system when the patient is moving around: part of the patient could be blocked by other people or obstacles. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Morris with those of HUELSDUNK to yield the invention as described in claim 1.
This combination (modification) could be made using known methods with no changes to the operating principles of either reference to produce the predictable result of more accurately determining joint positions/angles of a patient so as to improve assessment of neurodegeneration of the patient (Morris, Fig. 1).

Regarding claim 4, HUELSDUNK in view of Morris discloses the apparatus of claim 1, wherein the one or more processors are configured to reconstruct the body part based on a depiction of the body part in the second subset of images (see the analysis of claim 1; both HUELSDUNK and Morris generate a 3D skeletal representation of the whole body in a video stream, and different body parts could be blocked/missing or reappearing at certain times).

Regarding claim 5, HUELSDUNK in view of Morris discloses the apparatus of claim 1, wherein at least one of the initial 2D or 3D representation of the person or the adjusted 2D or 3D representation of the person is determined based on a machine-learning (ML) model (HUELSDUNK, Fig. 1, operation 7 and pg. [0109], "the 2d pose is estimated, using … OpenPose"; OpenPose is an ML model developed by CMU that uses a specialized, multi-stage CNN pipeline).

Regarding claim 6, HUELSDUNK in view of Morris discloses the apparatus of claim 5, wherein the one or more processors are configured to implement the ML model via a convolutional neural network or a recurrent neural network (OpenPose uses a specialized, multi-stage CNN pipeline).

Regarding claim 10, HUELSDUNK in view of Morris discloses the apparatus of claim 1, wherein the video sequence is captured by a single image capturing device (HUELSDUNK, Abstract and pg. [0155], "use the sensors of a mobile phone"). Most mobile phone cameras have a red-green-blue (RGB) image sensor. Many smartphones also have a depth sensor, infrared sensor, radar sensor, etc.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 10 from the teachings of HUELSDUNK in view of Morris.

Regarding claim 11, HUELSDUNK in view of Morris discloses the apparatus of claim 1. Displaying on a computer/smartphone screen any useful information generated during image processing is well known and common practice in the art, as for example shown in HUELSDUNK (pg. [0076]) and Morris (Fig. 12, pg. [0050]). In a medical environment, using information obtained from a patient to adjust a medical procedure/device would have been quite obvious and expected. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 11 from the teachings of HUELSDUNK in view of Morris.

Claims 12, 15-17 and 20 have been analyzed and are rejected for the same reasons as outlined above regarding claims 1, 4-6 and 10, respectively.

Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over HUELSDUNK (US 2021/0192783) in view of Morris (US 2021/0059565), and further in view of Ramesh et al. (hereafter referred to as "Ramesh", US 2024/0185453).

Regarding claim 7, HUELSDUNK in view of Morris discloses the apparatus of claim 1. The HUELSDUNK and Morris combination as applied to claim 1 fails to further teach the remaining limitations of claim 7. In the same field of tracking patient movement, Ramesh discloses using multiple machine-learning models trained to process different body parts of the patient (Figs. 2-3, pg. [0050], "to use a single machine learning model and/or multiple machine learning models that may be trained using a single and/or different training data sets for assessing presence of weakness in one or more body parts of the subject 102… (e.g., a left arm) versus another part of the body (e.g., a right arm, a torso, a leg, etc.)"). Using multiple specialized ML models leads to more precise processing of individual body parts, and combining the results from these models is clearly needed to determine the posture/gait of the patient. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Ramesh with those of HUELSDUNK in view of Morris to yield the invention as described in claim 7.

Claim 18 has been analyzed and is rejected for the same reasons as outlined above regarding claim 7.

Claims 8-9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over HUELSDUNK (US 2021/0192783) in view of Morris (US 2021/0059565), and further in view of Karanam et al. (hereafter referred to as "Karanam", US 2021/0158107).

Regarding claims 8-9 and 19, HUELSDUNK in view of Morris fails to expressly disclose that the 2D/3D representation of the person includes a 3D mesh model, and that the 3D mesh model includes a first plurality of parameters associated with a pose of the person and a second plurality of parameters associated with a body shape of the person. However, using a 3D mesh model that includes pose parameters and shape parameters to represent a human body is well known and common practice in the art, as for example disclosed in Karanam (Figs. 2-5, pg. [0022], "a human mesh model may be recovered based on a 2D image of a person … one or more pose parameters θ, and one or more shape parameters, β, that may respectively indicate the pose and shape of the person's body").
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Karanam with those of HUELSDUNK in view of Morris to yield the invention as described in claims 8-9 and 19.

Examiner's Note

Applicant is encouraged to schedule a telephone interview with the Examiner to discuss any issues related to the claimed invention and the references cited in the current/previous Office Action(s). The Examiner can be reached at (571) 270-5363 (email: Li.Liu2@USPTO.GOV).

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LI LIU, whose telephone number is (571) 270-5363. The examiner can normally be reached Monday-Friday, 8:00 AM-4:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LI LIU/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Nov 28, 2022: Application Filed
May 18, 2025: Non-Final Rejection (§103)
Aug 21, 2025: Response Filed
Sep 08, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586167: MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573072: SYSTEM AND METHOD FOR OBJECT DETECTION IN DISCONTINUOUS SPACE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561956: AFFORDANCE-BASED REPOSING OF AN OBJECT IN A SCENE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12518397: AUTOMATED DETERMINATION OF A BASE ASSESSMENT FOR A POSE OR MOVEMENT (granted Jan 06, 2026; 2y 5m to grant)
Patent 12493960: USER INTERFACE FOR VISUALIZING DIFFERENCES BETWEEN MEDICAL IMAGE CONTOURINGS (granted Dec 09, 2025; 2y 5m to grant)
Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview: 94% (+35.4%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
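The headline numbers above are internally consistent: the 59% grant probability is the career allow rate (316 granted of 537 resolved), and the with-interview figure is that rate plus the +35.4 percentage-point interview lift. The page does not state its exact formulas, so the sketch below only reproduces the simple additive arithmetic implied by the labels:

```python
# Hypothetical reconstruction of the dashboard's arithmetic; the tool's
# real model may be more sophisticated than a flat percentage-point add.

granted, resolved = 316, 537              # examiner's career totals (from the page)
interview_lift_pp = 35.4                  # interview lift, in percentage points

allow_rate_pct = granted / resolved * 100 # career allow rate: ~58.8%
grant_probability = round(allow_rate_pct)             # displayed as 59%
with_interview = round(allow_rate_pct + interview_lift_pp)  # displayed as 94%

print(grant_probability, with_interview)  # 59 94
```

This matches the displayed values exactly, which suggests the with-interview figure is derived by simple addition rather than from a separate per-interview dataset.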
