Prosecution Insights
Last updated: April 19, 2026
Application No. 18/007,429

Automated Phenotyping of Behavior

Non-Final OA: §102, §103
Filed: Jan 30, 2023
Examiner: MOHAMMED, SHAHDEEP
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: The Jackson Laboratory
OA Round: 3 (Non-Final)
Grant Probability: 51% (Moderate); 99% with interview
Expected OA Rounds: 3-4
Median Time to Grant: 4y 10m

Examiner Intelligence

Career allow rate: 51% (234 granted / 462 resolved; -19.4% vs TC avg)
Interview lift: strong, +56.7% for resolved cases with an interview
Typical timeline: 4y 10m average prosecution; 59 applications currently pending
Career history: 521 total applications across all art units
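
The lift figure invites a quick sanity check. If cases with an interview allow at 99% and the lift is 56.7 points, the implied allow rate without an interview is about 42%. This assumes the additive, percentage-point reading of "lift", which the page does not state explicitly; a minimal sketch under that assumption:

```python
# Hedged reading of the interview stats: assumes "lift" is the
# percentage-point gap between allow rates with and without an interview.
with_interview = 0.99   # allow rate for resolved cases with an interview
lift = 0.567            # reported interview lift
without_interview = with_interview - lift
print(f"implied allow rate without interview: {without_interview:.1%}")  # 42.3%
```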

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)

TC averages are estimates • Based on career data from 462 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-16, 18, 21-22 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Brunner et al. (US 2005/0163349; hereinafter Brunner) in view of Crouthamel et al. (US 2021/0112781; hereinafter Crouthamel).

Regarding claim 1, Brunner discloses a system and method for assessing motor and locomotor deficits and recovery. Brunner shows a computer-implemented method (see abstract and fig. 15) comprising: causing a plurality of image capture devices to capture a plurality of video feeds (see par. [0008], [0014], [0035], [0083], claim 1), wherein the plurality of video feeds captures movements of a subject from different views (see par. [0008], [0014], [0035], [0083], claim 1); receiving video data from the plurality of image capture devices (see par. [0008], [0014], [0035], [0083], claim 1), wherein the video data comprises the plurality of video feeds (see fig. 15; par. [0015]); determining, using the video data, first point data identifying a location of a first body part of the subject for a first frame during a first time period (see fig. 15; par. [0013], [0015], [0035], [0038]); determining, using the video data, second point data identifying a location of a second body part of the subject for the first frame (see fig. 12, 15; par. [0013], [0015], [0035], [0038]); determining, using the first point data and the second point data, first distance data representing a distance between the first body part and the second body part (see par. [0023], [0030], [0035], [0042], [0061]; fig. 12, 15), wherein the distance between the first body part and the second body part is a first distance frame feature in the first frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15); determining a first feature vector corresponding to at least the first frame and a second frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12, 15), the first feature vector including at least the first distance data and second distance data (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12, 15); processing, using a trained model, at least the first feature vector, the trained model configured to identify a likelihood of the subject exhibiting a behavior during the first time period (see par. [0011], [0012], [0013], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15); and determining, based on the processing of at least the first feature vector, a first label corresponding to the first time period (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15), the first label identifying a first behavior of the subject during the first time period (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15).

Brunner, however, fails to explicitly state that the model is a machine learning model, and that the trained model is trained using a plurality of training frames, wherein the training frames were annotated to indicate behavior. Crouthamel discloses an adaptive, sensor-performance-based risk assessment. Crouthamel teaches that the model is a machine learning model (see par. [0069]), and that the trained model is trained using a plurality of training frames (see abstract; par. [0022], [0041], [0069]; claim 1), wherein the training frames were annotated to indicate behavior (see abstract; par. [0022], [0041], [0069]; claim 1). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have utilized the teaching of a machine learning model trained using a plurality of training frames annotated to indicate behavior in the invention of Brunner, as taught by Crouthamel, to be able to accurately monitor and manage animal subjects.
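Read as a pipeline, the claim 1 limitations are: locate body-part keypoints per frame, compute an inter-keypoint distance as a frame feature, assemble a feature vector spanning at least two frames, and hand it to a trained classifier. A minimal sketch of that pipeline follows; it is illustrative only, not code from the application or either cited reference, and every name in it is hypothetical.

```python
# Illustrative sketch of the claim-1 feature pipeline: per-frame keypoint
# locations -> an inter-keypoint distance frame feature -> a feature vector
# spanning two frames -> a trained classifier. All names are hypothetical.
import numpy as np

def distance_feature(frame_points: dict, part_a: str, part_b: str) -> float:
    """Euclidean distance between two body parts in one frame."""
    return float(np.linalg.norm(np.asarray(frame_points[part_a]) -
                                np.asarray(frame_points[part_b])))

# First/second point data for two frames (pixel coordinates, made up).
frame_1 = {"nose": (120.0, 88.0), "base_of_tail": (210.0, 140.0)}
frame_2 = {"nose": (124.0, 90.0), "base_of_tail": (208.0, 141.0)}

d1 = distance_feature(frame_1, "nose", "base_of_tail")  # first distance data
d2 = distance_feature(frame_2, "nose", "base_of_tail")  # second distance data

# Feature vector covering at least the first and second frames.
feature_vector = np.array([d1, d2])

# A trained model maps the feature vector to a behavior likelihood; any
# binary classifier with a predict_proba-style interface would fit here:
# p_behavior = model.predict_proba(feature_vector[None, :])[0, 1]
# label = "grooming" if p_behavior > 0.5 else "other"
```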
Regarding claim 2, Brunner shows determining, using the video data, third point data identifying a location of a third body part of the subject for the first frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15).

Regarding claim 3, Brunner shows determining, using the first point data and the third point data, second distance data representing a distance between the first body part and the third body part (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15), wherein the distance between the first body part and the third body part is a second distance frame feature in the first frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15); determining a second feature vector corresponding to the first frame to include at least the second distance data (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15); and wherein processing using the trained model comprises processing the first feature vector and the second feature vector using the trained model (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15).

Regarding claim 4, Brunner shows determining, using the first point data, the second point data and the third point data, first angle data representing an angle corresponding to the first body part, the second body part and the third body part, wherein the angle data is a first angle frame feature in the first frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15); determining a second feature vector corresponding to at least the first frame, the second feature vector including at least the first angle data; and wherein processing using the trained model further comprises processing the first feature vector and the second feature vector using the trained model (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15).
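The angle frame feature of claim 4 is the angle formed at one body part by the segments to two others. A hedged sketch of that computation, with invented coordinates:

```python
# Hypothetical sketch of the claim-4 angle frame feature: the angle at the
# middle keypoint formed by three body parts, from 2-D pixel coordinates.
import numpy as np

def angle_feature(p1, p2, p3) -> float:
    """Angle (degrees) at p2 formed by segments p2->p1 and p2->p3."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    v2 = np.asarray(p3, dtype=float) - np.asarray(p2, dtype=float)
    cos_theta = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# e.g. the body axis angle at mid-spine between nose and tail base
print(angle_feature((120, 88), (165, 110), (210, 140)))
```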
Regarding claim 5, Brunner shows determining, using the video data, fourth point data identifying a location of the first body part for a second frame during the first time period (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); determining, using the video data, fifth point data identifying a location of the second body part for the second frame (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); determining, using the video data, sixth point data identifying a location of the third body part for the second frame (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); determining, using the fourth point data and the fifth point data, third distance data representing a distance between the first body part and the second body part for the second frame, wherein the distance between the first body part and the second body part is a first distance frame feature in the second frame (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); determining, using the fourth point data and the sixth point data, fourth distance data representing a distance between the first body part and the third body part for the second frame, wherein the distance between the first body part and the third body part is a fourth distance frame feature in the second frame (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); determining, using the fourth point data, the fifth point data and the sixth point data, second angle data representing an angle corresponding to the first body part, the second body part and the third body part for the second frame, wherein the angle data is a second angle frame feature in the second frame (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15); and determining the second feature vector to include at least the third distance data, the fourth distance data, and the second angle data (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15).

Regarding claim 6, Brunner shows wherein the second distance data represents a distance between the first body part and the second body part for the second frame during the first time period (see fig. 12); calculating metric data corresponding to the first frame using at least the first distance data and the second distance data (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15), wherein the first feature vector includes the metric data, and wherein the metric data represents statistical analysis corresponding to at least the first distance data and the second distance data (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15), the statistical analysis being at least a mean (see par. [0011], [0012], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15).

Regarding claim 9, Brunner further shows processing the video data using an additional trained model to determine the first point data, wherein the first point data includes pixel data representing the location of the first body part (see par. [0014], [0022], [0037], [0044]-[0047]).

Regarding claim 10, Brunner shows processing the video data using an additional trained model to determine a likelihood that a pixel coordinate corresponds to the first body part (see par. [0043], [0051], [0052]; fig. 12, 13, 15) and determining the first point data based at least in part on the likelihood that a pixel coordinate corresponds to the first body part satisfying a threshold, the first point data including the pixel coordinate (see par. [0043], [0051], [0052]; fig. 12, 13, 15).
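Claim 10 describes a standard keypoint-confidence gate: an additional trained model scores candidate pixel coordinates for a body part, and a coordinate becomes point data only if its likelihood clears a threshold. A sketch of that gating logic, assuming a per-part confidence heatmap; the heatmap and threshold value here are invented:

```python
# Sketch of the claim-10 idea: keep the best-scoring pixel coordinate for a
# body part only when its confidence satisfies a threshold.
import numpy as np

def locate_part(heatmap: np.ndarray, threshold: float = 0.5):
    """Return (row, col) of the best-scoring pixel, or None below threshold."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return idx if heatmap[idx] >= threshold else None

heat = np.zeros((480, 640))   # toy confidence map for one body part
heat[88, 120] = 0.93
print(locate_part(heat))      # (88, 120) -> first point data for this part
```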
Regarding claim 11, Brunner shows determining, using the video data, additional point data identifying locations of at least 12 portions of the subject for the first frame, wherein the 12 portions include at least the first body part and the second body part (see fig. 13).

Regarding claim 12, Brunner shows determining additional distance data representing distances between a plurality of body portion-pairs for the first frame (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12, 15), the plurality of body portion-pairs formed using pairs of the 12 portions of the subject (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12, 13, 15), wherein each distance between the plurality of body portion-pairs is a distance frame feature in the first frame, and wherein the first feature vector includes the additional distance data (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12, 15).

Regarding claim 13, Brunner shows determining additional angle data representing angles corresponding to a plurality of body-portion trios for the first frame (see fig. 14; par. [0032]), the plurality of body-portion trios formed by selecting three of the 12 portions of the subject (see fig. 12-14; par. [0032]), wherein each angle corresponding to the plurality of body-portion trios is an angle frame feature in the first frame (see fig. 14; par. [0032]), and wherein the first feature vector includes the additional angle data (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0061], [0086]; fig. 12-15).
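Claims 11-13 turn 12 tracked body portions into a dense geometric description: every unordered pair yields one distance frame feature, so 12 portions give C(12, 2) = 66 distances per frame. A hedged sketch of that expansion; the 12-part skeleton below is invented for illustration and need not match the application's actual part list:

```python
# Sketch of the claims 11-12 idea: with 12 tracked body portions per frame,
# every unordered pair contributes one distance frame feature (66 total).
from itertools import combinations
import numpy as np

PARTS = ["nose", "left_ear", "right_ear", "neck", "spine_mid", "tail_base",
         "left_fore", "right_fore", "left_hind", "right_hind",
         "tail_mid", "tail_tip"]  # hypothetical 12-part skeleton

def pairwise_distances(points: dict) -> np.ndarray:
    """All 66 pairwise distances, in a fixed order, for one frame."""
    return np.array([np.linalg.norm(np.asarray(points[a]) - np.asarray(points[b]))
                     for a, b in combinations(PARTS, 2)])

rng = np.random.default_rng(0)
frame = {p: rng.uniform(0, 480, size=2) for p in PARTS}  # fake pixel coords
assert pairwise_distances(frame).shape == (66,)
```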
Regarding claim 14, Brunner shows determining additional feature vectors corresponding to six frames during the first time period (see par. [0012], [0015], [0023], [0030], [0035], [0038], [0042], [0061], [0086]; fig. 12-15), the six frames including at least the first frame and the second frame (see par. [0038]), wherein the six frames are a window of frames surrounding at least the first frame (see fig. 12; par. [0014], [0038], [0091]); calculating metric data using the additional feature vectors, the metric data representing at least one of a mean, a standard deviation, a median, and a median absolute deviation; and processing the metric data using the trained model to determine the first label data (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0043], [0045], [0061]).

Regarding claim 15, Brunner shows determining location data representing pixel coordinates of 12 portions of the subject for the first frame (see par. [0014], [0022], [0037], [0044]-[0047]; fig. 12-15), the location data including at least the first point data (see fig. 15; par. [0013], [0015], [0035], [0038]), the second point data and the third point data (see fig. 15; par. [0013], [0015], [0035], [0038]), and wherein processing the metric data using the trained model further includes processing the location data using the trained model (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0043], [0045], [0061]).

Regarding claim 16, Brunner shows determining additional feature vectors corresponding to 11 frames during the first time period (see par. [0012], [0015], [0023], [0030], [0035], [0038], [0042], [0061], [0086]; fig. 12-15), the 11 frames including at least the first frame and the second frame (see par. [0038]), wherein the 11 frames are a window of frames surrounding at least the first frame (see fig. 12; par. [0014], [0038], [0091]); calculating metric data using the additional feature vectors, the metric data representing at least one of a mean, a standard deviation, a median, and a median absolute deviation; and processing the metric data using the trained model to determine the first label (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0043], [0045], [0052], [0061]).

Regarding claim 18, Brunner shows determining additional feature vectors corresponding to 21 frames during the first time period (see par. [0012], [0015], [0023], [0030], [0035], [0038], [0042], [0061], [0086]; fig. 12-15), the 21 frames including at least the first frame and the second frame (see par. [0038]), wherein the 21 frames are a window of frames surrounding at least the first frame (see fig. 12; par. [0014], [0038], [0091]); calculating metric data using the additional feature vectors, the metric data representing at least one of a mean, a standard deviation, a median, and a median absolute deviation; and processing the metric data using the trained model to determine the first label (see par. [0012], [0015], [0023], [0030], [0035], [0042], [0043], [0045], [0052], [0061]).

Regarding claim 21, Brunner shows wherein the trained model is a classifier configured to process feature data corresponding to video frames to determine a behavior exhibited by the subject represented in the video frames (see par. [0011], [0012], [0013], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15), the feature data corresponding to portions of the subject (see par. [0011], [0012], [0013], [0015], [0023], [0030], [0035], [0042], [0061]; fig. 12, 15).

Regarding claim 22, Brunner shows the first body part is a mouth of the subject (see fig. 13); the second body part is a right hind foot of the subject (see fig. 13); the trained model is configured to identify a likelihood of the subject exhibiting contact between the first body part and the second body part (see par. [0014], [0019], [0035], [0065]; the examiner notes that the claim does not require direct contact); and the first label indicates the first frame represents contact between the first body part and the second body part (see par. [0011], [0012], [0014], [0015], [0023], [0030], [0035], [0042], [0061]; claim 1, fig. 12, 15).

Regarding claim 24, Brunner shows wherein the video data corresponds to a first video capturing a top view of the subject and a second video capturing a side view of the subject (see fig. 1, 3, 6 and 10).

Regarding claim 25, Brunner shows wherein the subject is a mammal, and wherein the mammal is a rodent (see fig. 1).

Regarding claim 101, Brunner shows wherein the first behavior is one of shaking and flicking (see par. [0069]-[0079]), and Crouthamel teaches biting behavior (see par. [0127]).
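Across claims 14, 16, and 18, the only moving part is the window size (6, 11, or 21 frames); the recipe is otherwise the same: pool per-frame feature vectors over the window and summarize each feature with a mean, standard deviation, median, and median absolute deviation. A minimal sketch of that pooling step, with invented array shapes:

```python
# Sketch of the windowed metric data of claims 14, 16, and 18: summary
# statistics over a window of frames surrounding a frame of interest.
import numpy as np

def window_metrics(per_frame_features: np.ndarray) -> np.ndarray:
    """per_frame_features: (window_len, n_features) -> (4 * n_features,)."""
    med = np.median(per_frame_features, axis=0)
    mad = np.median(np.abs(per_frame_features - med), axis=0)
    return np.concatenate([per_frame_features.mean(axis=0),
                           per_frame_features.std(axis=0),
                           med, mad])

window = np.random.default_rng(1).normal(size=(11, 66))  # 11-frame window
metrics = window_metrics(window)   # metric data fed to the trained model
assert metrics.shape == (4 * 66,)
```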
Response to Arguments

The previous objection to claim 10 has been withdrawn in view of Applicant's amendment to claim 10. Applicant's arguments with respect to the prior art rejection of claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner has provided new prior art, Crouthamel, to address the newly added claim limitations in claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHDEEP MOHAMMED, whose telephone number is (571) 270-3134. The examiner can normally be reached Monday to Friday, 9am to 5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne M Kozak, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHDEEP MOHAMMED/
Primary Examiner, Art Unit 3797

Prosecution Timeline

Jan 30, 2023: Application Filed
Apr 16, 2025: Non-Final Rejection — §102, §103
Aug 19, 2025: Applicant Interview (Telephonic)
Aug 21, 2025: Response Filed
Aug 23, 2025: Examiner Interview Summary
Sep 04, 2025: Final Rejection — §102, §103
Jan 05, 2026: Response after Non-Final Action
Feb 09, 2026: Request for Continued Examination
Mar 04, 2026: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594060: ULTRASOUND DIAGNOSTIC APPARATUS, CONTROL METHOD OF ULTRASOUND DIAGNOSTIC APPARATUS, AND PROCESSOR FOR ULTRASOUND DIAGNOSTIC APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582380: ENDOSCOPE AND DISTAL END BODY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12564372: Tactile ultrasound method and probe for predicting preterm birth (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555232: SUPERVISED CLASSIFIER FOR OPTIMIZING TARGET FOR NEUROMODULATION, IMPLANT LOCALIZATION, AND ABLATION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12543960: SYSTEMS AND METHODS FOR MONITORING THE FUNCTIONALITY OF A BLOOD VESSEL (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 51%; 99% with interview (+56.7%)
Median Time to Grant: 4y 10m
PTA Risk: High
Based on 462 resolved cases by this examiner. Grant probability derived from career allow rate.
