Prosecution Insights
Last updated: April 19, 2026
Application No. 18/560,081

Determining Visual Frailty Index Using Machine Learning Models

Non-Final OA: §101, §102, §103
Filed
Nov 09, 2023
Examiner
PEDAPATI, CHANDHANA
Art Unit
2669
Tech Center
2600 — Communications
Assignee
The Jackson Laboratory
OA Round
1 (Non-Final)
64%
Grant Probability
Moderate
1-2
OA Rounds
2y 10m
To Grant
96%
With Interview

Examiner Intelligence

Grants 64% of resolved cases
64%
Career Allow Rate
14 granted / 22 resolved
+1.6% vs TC avg
Strong interview lift: +32.5%
[Chart: allow rate without vs. with interview, among resolved cases with interview]
Typical timeline
2y 10m
Avg Prosecution
26 currently pending
Career history
48
Total Applications
across all art units
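As a sanity check, the headline allow rate appears to be the simple ratio of granted to resolved cases; the page does not state its formula, so this is an assumption:

# Assumed definition: career allow rate = granted / resolved cases.
granted, resolved = 14, 22
print(f"{granted / resolved:.1%}")   # 63.6%, displayed above as 64%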

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 22 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

This communication is in response to the application filed on 11/09/2023. Amended claims filed on 08/01/2025 have been entered and have been considered in this office action. Claims 14-27, 31-37, 39, and 43-99 are canceled. No new matter has been introduced. Claims 1-13, 28-30, 38, and 40-42 are currently pending in the application. Limitations appearing inside of {} are intended to indicate the limitations not taught by said prior art(s)/combinations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13, 28-30, 38, and 40-42 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (Step 2A Prong 1) without additional limitations that integrate the abstract idea into a practical application (Step 2A Prong 2) and without amounting to significantly more than the abstract idea (Step 2B).

Claim 1 recites a method; thus the claim is directed to a process, which is a statutory category of invention (MPEP §2106.03). (Step 1: YES)

Step 2A Prong 1 evaluates whether the claim recites any judicial exception (MPEP §2106.04(a)). Claim 1 recites: [A] receiving video data representing a video capturing movements of a subject; [B] determining, using the video data, spinal mobility features of the subject for a duration of the video; and [C] processing, using at least one machine learning model, at least the spinal mobility features to determine a visual frailty score for the subject.

The USPTO has enumerated groupings of abstract ideas (see MPEP §2106.04(a)(2)), defined as: I) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations; II) Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and III) Mental processes – concepts performed in the human mind with or without physical aid (i.e., pen and paper, or by using a computer), including an observation, evaluation, judgment, or opinion.

Limitations [B] and [C] recite a mental process, which is recognized by the courts as an abstract idea (see MPEP §2106.04(a)(2)). The limitations of determining spinal mobility features and a visual frailty score can be performed in the human mind using a visual aid. The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim (MPEP §2106.04(a)(2)).
(Step 2A Prong 1: YES)

Step 2A Prong 2 evaluates whether the claim recites additional elements that integrate the exception into a practical application of that exception according to MPEP §2106.04(d) by: 1) identifying additional elements recited in the claim beyond the judicial exception; and 2) if additional elements are identified, evaluating the additional elements both individually and in combination to determine whether the claim as a whole integrates the exception into a practical application.

Additional elements include receiving video [A], which is mere data gathering (MPEP §2106.05(g)). The limitation of "computer implementation" is, per the Specification (page 25, lines 2-4), a mere recitation of execution by a generic computing device (MPEP §2106.05(f)). (Step 2A Prong 2: NO)

[Image: Specification excerpt, page 25, lines 2-4]

Step 2B evaluates whether the claim as a whole amounts to significantly more than the judicial exception. The invention provides elements for improving the field by enabling automated supervision of an abstract idea. The claim as a whole does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claim as a whole recites abstract ideas (MPEP §2106.05(d)). (Step 2B: NO)

Claims 2, 3, 4, 7, 8, 9, 10, 11, 13, 40, and 41 merely add details on determining features to perform an abstract idea. Claims 5, 6, 28-30, and 38 merely recite using a tool to perform an abstract idea.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-13, 28, 38, 40, and 41 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by "Hession" (L. E. Hession, et al., "A machine vision based frailty index for mice," bioRxiv preprint doi: https://doi.org/10.1101/2021.09.27.462066, April 20, 2021), as cited in the IDS (01/30/2024).
Regarding claim 1, Hession teaches a computer-implemented method comprising: receiving video data representing a video capturing movements of a subject (Hession, Figure 2 exhibits sample features used in the vFI: (A) single frame of the top-down open-field video; (D) spatial, temporal, and whole-body coordination characteristics); determining, using the video data, spinal mobility features of the subject for a duration of the video (Hession, Figure 2 (F), spinal mobility measurements taken at each frame, see Supplementary Video 3; [p. 6, §3.2, ¶5]; per video for all frames); and processing, using at least one machine learning model, at least the spinal mobility features to determine a visual frailty score for the subject (Hession, [p 13, §4, ¶1]; We then train machine learning classifiers that can accurately predict frailty from video features. Through modeling we also gain insight into feature importance across age and frailty status (i.e., visual frailty index)).

Regarding claim 2, Hession teaches the computer-implemented method of claim 1. Hession further teaches wherein determining the spinal mobility features of the subject for the duration of the video comprises: determining a plurality of spinal measurements, each spinal measurement of the plurality of spinal measurements corresponding to one video frame of the video data; and determining the spinal mobility features using the plurality of spinal measurements (Hession, [p 6, §3.2, ¶5]; This change in flexibility can be captured by the pose estimation coordinates of three points on the mouse at each video frame... For each of the three per-frame measures (dAC, dB, and aABC) a mean, median, standard deviation, minimum, and maximum are calculated per video for all frames).

Regarding claim 3, Hession teaches the computer-implemented method of claim 1. Hession further teaches wherein determining the spinal mobility features of the subject for the duration of the video comprises: for each video frame of the video data: determining a first distance between a head of the subject and a tail of the subject; determining a second distance between a mid-back of the subject and a midpoint between the head and the tail; determining an angle formed between the head, the tail and the mid-back of the subject; and determining the spinal mobility features for a video frame to include the first distance, the second distance and the angle (Hession, [p 16, §5.5, ¶1]; The spinal mobility metrics used 3 points from the pose: the base of the head (A), the middle of the back (B) and the base of the tail (C). For each frame, the distance between A and C (dAC), the distance between point B and the midpoint of line AC (dB), and the angle formed by the points A, B, and C (aABC) were measured.).

Regarding claim 4, Hession teaches the computer-implemented method of claim 1. Hession further teaches wherein determining the spinal mobility features of the subject for the duration of the video comprises: determining, for each video frame of the video data, a distance between a mid-back of the subject and a midpoint between a head of the subject and a tail of the subject (Hession, [p 16, §5.5, ¶1]; The spinal mobility metrics used 3 points from the pose: the base of the head (A), the middle of the back (B) and the base of the tail (C). For each frame, the distance between A and C (dAC), the distance between point B and the midpoint of line AC (dB), and the angle formed by the points A, B, and C (aABC) were measured.).
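For orientation, the per-frame spinal mobility metrics quoted in the claims 2-4 mappings (dAC, dB, aABC) reduce to simple 2D geometry over three pose keypoints. Below is a minimal illustrative sketch, not the Hession et al. code; the function name and array shapes are assumptions:

import numpy as np

def spinal_mobility_features(head, mid_back, tail_base):
    """head, mid_back, tail_base: (n_frames, 2) arrays of (x, y) pixel coords."""
    A, B, C = head, mid_back, tail_base
    d_ac = np.linalg.norm(C - A, axis=1)          # head-to-tail distance per frame
    d_b = np.linalg.norm(B - (A + C) / 2.0, axis=1)  # mid-back to midpoint of AC
    v1, v2 = A - B, C - B                         # angle at B formed by A, B, C
    cos = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    a_abc = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Per-video summary statistics, as described in the quoted passage.
    feats = {}
    for name, series in [("dAC", d_ac), ("dB", d_b), ("aABC", a_abc)]:
        feats.update({f"{name}_mean": series.mean(), f"{name}_median": np.median(series),
                      f"{name}_std": series.std(), f"{name}_min": series.min(),
                      f"{name}_max": series.max()})
    return feats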
Regarding claim 5, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing, using at least an additional machine learning model (Hession, [p 15, §5.3, ¶1]; We use a neural network trained to produce a segmentation mask of the mouse to produce an ellipse fit of the mouse at each frame as well as a mouse track; and see FIG 1A, which exhibits a pose estimation network (i.e., at least an additional network)), [Images: Hession FIG 1A, pose estimation network] the video data to determine pose estimation data tracking, during the duration of the video, a location of at least a head of the subject, a tail of the subject, and a mid-back of the subject (Hession, [p 15, §5.4, ¶1]; The 12-point 2D pose estimation produced using a deep convolutional neural network trained as detailed in (Gait paper). The points captured are nose, left ear, right ear, base of neck, left forepaw, right forepaw, mid spine, left rear paw, right rear paw, base of tail, mid tail and tip of tail.); and using the pose estimation data to determine the spinal mobility features (Hession, [p 6, §3.2, ¶5]; This change in [spinal] flexibility can be captured by the pose estimation coordinates of three points on the mouse at each video frame: the back of the head (A), the middle of the back (B), and the base of the tail (C)).

Regarding claim 6, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing the video data to determine pose estimation data tracking, during the duration of the video, a location of at least twelve body parts of the subject; determining, using the pose estimation data, features for the subject (Hession, [p 3, §3.1, ¶1]; The open field video was processed by a tracking network and a pose estimation network, to produce a track, an ellipse-fit, and a 12-point pose of the mouse for each frame [23, 25]. These frame-by-frame measurements were used to calculate a variety of per-video features.); and processing, using the at least one machine learning model, the features to determine the visual frailty score (Hession, [p 8, §3.4, ¶1]; given new video-generated features as input to the random forest model … predict the FI score. We conclude that frailty and age information is encoded in video data features that we have designed and can be successfully used to construct a vFI).

Regarding claim 7, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: determining body features for the subject, the body features corresponding to at least one of a length of the subject, a width of the subject, and a distance between rear paws of the subject (Hession, [p 6, §3.2, ¶3]; We took the major and minor axes of the ellipse fitted to the mouse at each frame as an estimated length and width of the mouse respectively (Figure 2B). The distance between the rear paw coordinates for each frame were taken as another width measurement closer to the hips.); and processing, using the at least one machine learning model, the body features to determine the visual frailty score (Hession, [p 8, §3.4, ¶1]; given new video-generated features as input to the random forest model … predict the FI score. We conclude that frailty and age information is encoded in video data features that we have designed and can be successfully used to construct a vFI).
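The body features mapped for claim 7 (ellipse length/width and rear-paw spacing) are likewise per-frame measurements reduced to summary statistics. A hedged sketch; the function name, shapes, and units are assumptions, not the authors' implementation:

import numpy as np

def body_features(major_axis, minor_axis, rear_paw_left, rear_paw_right):
    """major_axis, minor_axis: (n_frames,) ellipse-fit length/width in pixels.
    rear_paw_left, rear_paw_right: (n_frames, 2) rear paw coordinates."""
    paw_width = np.linalg.norm(rear_paw_left - rear_paw_right, axis=1)
    feats = {}
    # Means and medians over all frames, per the quoted passage.
    for name, series in [("length", major_axis), ("width", minor_axis),
                         ("rear_paw_width", paw_width)]:
        feats[f"{name}_mean"] = series.mean()
        feats[f"{name}_median"] = np.median(series)
    return feats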
Regarding claim 8, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: determining a number of times a rearing event occurs during the duration of the video; determining a rearing length for each rearing event; and processing, using the at least one machine learning model, the number of times the rearing event occurs and the rearing length for each rearing event to determine the visual frailty score (Hession, [p 16, §5.5, ¶1]; For rearing, we took the coordinates of the boundary between the floor and wall of the arena (using OpenCV contour) and added a buffer of 4 pixels. Whenever the mouse's nose point crossed the buffer, this frame was counted as a rearing frame. Each uninterrupted series of frames where the mouse was rearing (nose crossing the buffer) was counted as a rearing bout. The total number of bouts, the average length of the bouts, the number of bouts in the first 5 minutes, and the number of bouts within minutes 5 to 10 were calculated.).
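The rearing logic quoted for claim 8 amounts to a boundary-crossing test plus a run-length count. A minimal sketch under the same assumptions (the buffered floor-wall boundary is precomputed; all names are placeholders):

import numpy as np

def rearing_bouts(nose_xy, wall_mask, fps=30.0):
    """nose_xy: (n_frames, 2) integer pixel coords of the nose. wall_mask: 2D bool
    array, True outside the buffered floor boundary (floor-wall contour + 4 px)."""
    rearing = wall_mask[nose_xy[:, 1], nose_xy[:, 0]]   # True = rearing frame
    # Each uninterrupted run of rearing frames is one bout.
    edges = np.diff(rearing.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if rearing[0]:
        starts = np.r_[0, starts]
    if rearing[-1]:
        ends = np.r_[ends, rearing.size]
    lengths = (ends - starts) / fps                     # bout lengths in seconds
    return {"n_bouts": len(starts),
            "mean_bout_len_s": float(lengths.mean()) if len(starts) else 0.0}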
Regarding claim 9, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing, using the at least one machine learning model, the video data to determine ellipse-fit data for the subject for the duration of the video; determining, using the ellipse-fit data, features for the subject (Hession, FIG 2B and [p 6, §3.2, ¶3] describe frame-by-frame measures of ellipse-fit feature extraction; "We took the major and minor axes of the ellipse fitted to the mouse at each frame as an estimated length and width of the mouse respectively (Figure 2B). The distance between the rear paw coordinates for each frame were taken as another width measurement closer to the hips. The means and medians of the ellipse width, ellipse length, and rear paw width over all frames were used as per-video metrics.") [Images: Hession FIG 2B, ellipse-fit length and width measures]; and processing, using the at least one machine learning model, the features to determine the visual frailty score (Hession, [p 3, §3.1, ¶1]; The open field video was processed by a tracking network and a pose estimation network, to produce a track, an ellipse-fit, and a 12-point pose of the mouse for each frame [23, 25]. These frame-by-frame measurements were used to calculate a variety of per-video features).

Regarding claim 10, Hession teaches the computer-implemented method of claim 1. Hession further teaches wherein determining spinal mobility features of the subject for a duration of the video comprises: determining a first set of video frames representing gait movements by the subject; determining a first set of spinal mobility features for the first set of video frames (Hession, [p 6, §3.2, ¶4]; We carried out similar analysis to explore age-related gait changes in the current cohort of mice (Figure 2D, E). Each stride is analyzed for its spatial, temporal, and whole-body coordination measures (Figure 2D), resulting in an array of measures of which the medians over all strides for each mouse are taken. We also looked into intra-mouse heterogeneity of gait features using standard deviations and inter-quartile range over all strides for each mouse.); determining a second set of video frames representing non-gait movements by the subject; and determining a second set of spinal mobility features for the second set of video frames (Hession, [p 6, §3.2, ¶5]; For each of the three per-frame measures (dAC, dB, and aABC) a mean, median, standard deviation, minimum, and maximum are calculated per video for all frames and for non-gait frames (frames where the mouse is not in stride)); wherein the spinal mobility features include the first set of spinal mobility features and the second set of spinal mobility features (Hession, [p, §, ¶]; Many of these calculated metrics (i.e., from gait features) show a high correlation with FI score and age; We find some moderately high correlations showing relationships between spinal bend (i.e., from non-gait features) and FI score which contradict our hypothesis; and (Supplementary Tables S2 and S3)).

Regarding claim 11, Hession teaches the computer-implemented method of claim 1. Hession further teaches wherein the first set of spinal mobility features correspond to a distance between a mid-back of the subject and a midpoint between a head and a tail of the subject, and wherein the second set of spinal mobility features correspond to an angle formed between the head, the tail and the mid-back of the subject (Hession, [p 16, §5.5, ¶1]; The spinal mobility metrics used 3 points from the pose: the base of the head (A), the middle of the back (B) and the base of the tail (C). For each frame, the distance between A and C (dAC), the distance between point B and the midpoint of line AC (dB), and the angle formed by the points A, B, and C (aABC) were measured.).

Regarding claim 12, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: determining, using the video data, gait measurements of the subject for the duration of the video (Hession, [p 16, §5.4, ¶2]; The gait metrics were produced as detailed in Shephard et al. (2020) [25]. Briefly, the stride cycles were defined by starting and ending with the left hind paw strike, tracked by the pose estimation. These strides were then analyzed for several temporal, spatial, and whole-body coordination characteristics, producing the gait metrics over the entire video.); and processing, using the at least one machine learning model, the gait features to determine the visual frailty score for the subject (Hession, [p 3, §3.1, ¶1]; The per-video features for each mouse were used as features in an array of machine learning models, including penalized linear regression (LR) [33], random forest (RF) [34], support vector machine (SVM) [35], and extreme gradient boosting (XGB) [36]; [p 8, §3.4, ¶1]; given new video-generated features as input to the random forest model … we can also predict the FI score to be within 1.03 ± 0.08 (3.9% ± 0.3%) of the actual frailty index, thereby demonstrating the robustness of the model. We conclude that frailty and age information is encoded in video data features that we have designed and can be successfully used to construct a vFI.).
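The modeling step quoted for claim 12 feeds per-video features to standard regressors (LR, RF, SVM, XGB). For concreteness, a toy sketch of the random-forest step with synthetic placeholder data; this is not the paper's features, data, or code:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))    # per-video features (e.g., dAC_mean, stride stats)
y = rng.uniform(0, 27, size=200)  # placeholder manual frailty index scores

model = RandomForestRegressor(n_estimators=300, random_state=0)
# Cross-validated mean absolute error of predicted vs. actual FI score.
print(-cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5).mean())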
Regarding claim 13, Hession teaches the computer-implemented method of claim 1. Hession further teaches processing the video data to determine point data tracking movement, for the duration of the video, of a set of body parts of the subject (Hession, [p 16, §5.5, ¶1]; The tracking was used to produce locomotor activity… the spinal mobility metrics used 3 points from the pose: the base of the head (A), the middle of the back (B) and the base of the tail (C). For each frame, the distance between A and C (dAC), the distance between point B and the midpoint of line AC (dB), and the angle formed by the points A, B, and C (aABC) were measured); determining, using the point data, a plurality of stance phases and a plurality of swing phases represented in the video data; determining, based on the plurality of stance phases and the plurality of swing phases, a plurality of stride intervals represented in the video data (see FIG 2D, which exhibits "Whole Body Coordination Characteristics", using point data to determine stance and swing phases and stride intervals); and determining, using the point data, the gait measurements based on each stride interval of the plurality of stride intervals (see FIG 2D, "Spatial Characteristics"). [Image: Hession FIG 2D, stride characteristics]
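Claim 13's stance/swing/stride decomposition can be illustrated with a simple speed-threshold heuristic on the hind-paw track. This is an assumed simplification for orientation only, not the pose/gait pipeline of Shephard et al. cited above:

import numpy as np

def stride_intervals(left_hind_paw, speed_thresh=2.0):
    """left_hind_paw: (n_frames, 2) coords. Treat the paw as in stance when nearly
    stationary (frame-to-frame speed below threshold) and in swing otherwise; a
    stride runs from one stance onset (paw strike) to the next."""
    speed = np.linalg.norm(np.diff(left_hind_paw, axis=0), axis=1)
    stance = speed < speed_thresh                    # per-frame-transition flags
    strikes = np.flatnonzero(np.diff(stance.astype(int)) == 1) + 1  # swing -> stance
    return list(zip(strikes[:-1], strikes[1:]))      # (start_frame, end_frame) pairs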
Regarding claim 28, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing the video data to determine point data tracking movement, for the duration of the video, of a set of body parts, wherein the set of body parts comprises one or more of: the nose, base of neck, mid spine, left hind paw, right hind paw, base of tail, middle of tail and tip of tail (Hession, [p 15-16, §5.4, ¶1]; The points captured are nose, left ear, right ear, base of neck, left forepaw, right forepaw, mid spine, left rear paw, right rear paw, base of tail, mid tail and tip of tail); determining, using the point data, features for the subject (Hession, [p 16, §5.4, ¶2]; the stride cycles were defined by starting and ending with the left hind paw strike, tracked by the pose estimation. These strides were then analyzed for several temporal, spatial, and whole-body coordination characteristics, producing the gait metrics over the entire video.); and processing, using the at least one machine learning model, the features to determine the visual frailty score (Hession, [p 13, §4, ¶1]; We then train machine learning classifiers that can accurately predict frailty from video features. Through modeling we also gain insight into feature importance across age and frailty status (i.e., visual frailty index)).

Regarding claim 38, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing the video data to determine gait measurements for the subject for the duration of the video (Hession, [p 6, §3.2, ¶4]; gait changes in mice (Figure 2D, E); Each stride is analyzed for its spatial, temporal, and whole-body coordination measures resulting in an array of measures of which the medians over all strides for each mouse are taken; Hession, [p 13, §4, ¶5]; Using neural networks trained to extract individual strides, we can look at the spatial, temporal, and whole-body coordination characteristics of gait for each mouse); processing the video data to determine behavior data identifying portions of the video where the subject exhibits a predetermined behavior (Hession, [p 6, §3.2, ¶2]; metrics taken in standard open field assays such as total locomotor activity, time spent in the periphery vs. center, and grooming bouts (Figure 2A)); and processing, using the at least one machine learning model, the spinal mobility features, the gait measurements and the behavior data to determine the visual frailty score (Hession, [p 13, §4, ¶1]; automated visual frailty index (vFI) using video-generated features to model FI score… then train machine learning classifiers that can accurately predict frailty from video features; features include spinal bend metrics [p 14, §4, ¶1], gait [p 13, §4, ¶5], and behavior data, e.g., open field metrics such as time spent in the periphery vs. center, total distance travelled, and count of grooming bouts, [p 13, §4, ¶3]).

Regarding claim 40, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: determining a physical condition of the subject using the visual frailty score (Hession, [p 13, §4, ¶1]; automated visual frailty index (vFI) using video-generated features to model FI score).

Regarding claim 41, Hession teaches the computer-implemented method of claim 40. Hession further teaches wherein the physical condition is frailty (Hession, [p 3, §2, ¶1]; We use these features to construct a vFI (visual frailty index) assay that has high prediction accuracy).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Hession in view of Datta et al. (US 11020025 B2), hereinafter "Datta".
Regarding claim 29, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing the video data using an additional machine learning model to identify {a likelihood of} the subject exhibiting a grooming behavior for a plurality of video frames of the video data; and determining the visual frailty score using {the likelihood} of the subject exhibiting the grooming behavior (Hession, [p 3, §3.1, ¶1]; The open field video was processed by a tracking network and a pose estimation network, to produce a track, an ellipse-fit, and a 12-point pose of the mouse for each frame [23, 25]. These frame-by-frame measurements were used to calculate a variety of per-video features. These features included … neural network-based grooming [24]; [p 15, §4, ¶1]; The same videos can be reanalyzed for extraction of new features (behavioral and physiological) enabled by new technology to improve the vFI).

Hession teaches grooming behavior, but does not explicitly disclose a likelihood of grooming. However, Datta, in a similar field of endeavor of automatically identifying and classifying behavior modules of animals by processing video recordings, teaches processing the video data (Datta, FIG 1 exhibits video recorder 100) using an additional machine learning model (Datta, [5:30-36]; vector autoregressive (AR) process capturing a stereotyped trajectory through PCA space. Additionally in that model, the switching dynamics between different modules were represented using a Hidden Markov Model (HMM). Together, this model is referred to herein as "AR-HMM.") to identify a likelihood of the subject exhibiting a grooming behavior for a plurality of video frames of the video data (an AR-HMM model that measures probability of behaviors, determining what the mouse is likely to do next ([Col 29:47-Col 30:11], see excerpt)). [Images: Datta excerpt, Col 29:47-Col 30:11]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include identifying the likelihood of behaviors as taught by Datta in the invention of Hession. The motivation to do so would be to use this method to identify behaviors in a variety of applications including, but not limited to, drug screening, drug classification, genetic classification, disease study including early detection of the onset of a disease (i.e., frailty), toxicology research, side-effect study, learning and memory process study, anxiety study, and analysis of consumer behavior.

Regarding claim 30, Hession teaches the computer-implemented method of claim 1. Hession further teaches further comprising: processing the video data using an additional machine learning model to identify {a likelihood of} the subject exhibiting a predetermined behavior for a plurality of video frames of the video data; and determining the visual frailty score using the likelihood of the subject exhibiting the predetermined behavior (Hession, [p 3, §3.1, ¶1]; The open field video was processed by a tracking network and a pose estimation network, to produce a track, an ellipse-fit, and a 12-point pose of the mouse for each frame [23, 25]. These frame-by-frame measurements were used to calculate a variety of per-video features. These features included … neural network-based grooming [24]; [p 15, §4, ¶1]; The same videos can be reanalyzed for extraction of new features (behavioral and physiological) enabled by new technology to improve the vFI).

Hession does not explicitly disclose identifying a likelihood of grooming. However, Datta, in a similar field of endeavor of automatically identifying and classifying behavior modules of animals by processing video recordings, teaches processing the video data (Datta, FIG 1 exhibits video recorder 100) using an additional machine learning model (Datta, [5:30-36]; vector autoregressive (AR) process capturing a stereotyped trajectory through PCA space. Additionally in that model, the switching dynamics between different modules were represented using a Hidden Markov Model (HMM). Together, this model is referred to herein as "AR-HMM.") to identify a likelihood of the subject exhibiting a predetermined behavior for a plurality of video frames of the video data (an AR-HMM model that measures probability of behaviors, determining what the mouse is likely to do next ([Col 29:47-Col 30:11], see excerpt)). [Images: Datta excerpt, Col 29:47-Col 30:11]
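As an aside on the cited technique: per-frame behavior likelihoods of the kind Datta's AR-HMM produces can be illustrated with a much simpler discrete HMM forward (filtering) pass. This is a toy illustration only, not Datta's model; all parameter values below are made-up placeholders:

import numpy as np

states = ["grooming", "other"]
pi = np.array([0.5, 0.5])           # initial state probabilities
A = np.array([[0.9, 0.1],           # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],      # emission probs over 3 quantized motion symbols
              [0.1, 0.3, 0.6]])

def forward_filter(obs):
    """obs: sequence of observation symbols (0..2). Returns the per-frame
    filtered probability of the 'grooming' state."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    probs = [alpha[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()        # normalize each step to avoid underflow
        probs.append(alpha[0])
    return np.array(probs)

print(forward_filter([0, 0, 1, 2, 2]))  # likelihood of grooming at each frame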
Claim 42 is rejected under 35 U.S.C. 103 as being unpatentable over Hession in view of Veld et al. (Op het Veld LP, van Rossum E, Kempen GI, de Vet HC, Hajema K, Beurskens AJ. Fried phenotype of frailty: cross-sectional comparison of three frailty stages on various health domains. BMC Geriatr. 2015 Jul 9;15:77. doi: 10.1186/s12877-015-0078-0. PMID: 26155837; PMCID: PMC4496916.), hereinafter "Veld".

Regarding claim 42, Hession teaches the computer-implemented method of claim 40. Hession does not explicitly teach wherein the physical condition is a pre-frailty condition. Veld teaches this limitation (Veld, [p 3, §Methods: Fried frailty criteria; Col 2, ¶2]; Persons with a score of 1 or 2 are at intermediate risk for adverse outcomes or are considered to be pre-frail). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include pre-frail as a physical condition as taught by Veld in the invention of Hession. The motivation to do so would be to investigate factors that vary between different stages of frailty that may be clinically relevant.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hong et al. (US 10121064 B2) teaches three-dimensional home cage tracking and machine learning and would have been relied upon for teaching behavioral classification.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI, whose telephone number is 571-272-5325. The examiner can normally be reached M-F 8:30am-6pm (ET). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANDHANA PEDAPATI/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Nov 09, 2023
Application Filed
Nov 09, 2023
Response after Non-Final Action
Aug 01, 2025
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602896
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12597095
INTELLIGENT SYSTEM AND METHOD OF ENHANCING IMAGES
2y 5m to grant Granted Apr 07, 2026
Patent 12571683
ELEVATED TEMPERATURE SCREENING SYSTEMS AND METHODS
2y 5m to grant Granted Mar 10, 2026
Patent 12548180
HOLE DIAMETER MEASURING DEVICE
2y 5m to grant Granted Feb 10, 2026
Patent 12541829
MOTION-BASED PIXEL PROPAGATION FOR VIDEO INPAINTING
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
96%
With Interview (+32.5%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
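The with-interview projection appears to be the baseline grant probability plus the interview lift in percentage points; the page does not state the formula, so the following is an assumption:

# Assumed: with-interview probability = baseline + lift (percentage points).
baseline_pct, interview_lift_pct = 64.0, 32.5
print(f"{baseline_pct + interview_lift_pct:.1f}%")   # 96.5%, displayed above as 96%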
