Prosecution Insights
Last updated: April 19, 2026
Application No. 17/737,850

SYSTEMS, METHODS AND DEVICES FOR PREDICTING PERSONALIZED BIOLOGICAL STATE, PREDICTING PERSONALIZED BEHAVIOR, AND RECOMMENDING PERSONALIZED BEHAVIOR WITH MODELS PRODUCED WITH META-LEARNING

Non-Final OA §103
Filed: May 05, 2022
Examiner: ELSHAER, ALAAELDIN M
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: January Inc.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 36% (grants only 36% of cases; 74 granted / 208 resolved; -16.4% vs TC avg)
Interview Lift: +31.3% among resolved cases with an interview
Avg Prosecution: 2y 10m typical timeline; 37 applications currently pending
Total Applications: 245 across all art units (career history)

Statute-Specific Performance

§101: 37.4% (-2.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 208 resolved cases

Office Action

§103
DETAILED ACTION

This office action is based on the elected claim set filed on 09/09/2025. Claims 2, 8-10, 15-17, and 27-28 have been amended. Claims 2, 5, 7-12, 14-17, and 27-28 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/04/2024 is in accordance with the provisions of 37 CFR 1.97 and has been considered by the Examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 5, 7-12, 14-17, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Wexler et al. (US 2020/0375549 A1 – "Wexler") in view of Kraus (US 2019/0197360 A1).

Regarding Claim 2 (Currently Amended), Wexler teaches a method for personalized blood glucose monitoring for a test subject comprising: (a) acquiring a plurality of task data sets from a plurality of subjects, each of the plurality of task data sets comprising input values and output values from a different source associated with a subject in the plurality of subjects. Wexler discloses a learning-model algorithm performing operations on data sets input from a plurality of different sources for a subject that performs actions [task data sets], where the data sets from a plurality of different patients include time-series blood glucose measurements, heart rate, meal data, exercise data, etc., used as input to a machine learning algorithm that provides a predictive output value, e.g., level of concentration (Wexler: [Fig. 4], [0020]-[0021], [0065], [0071-0072], [0079], [0081]).

Wexler further teaches wherein the acquiring comprises (i) using a continuous glucose monitor (CGM) sensor to obtain time-series blood glucose measurements of a subject, and (ii) using a heart rate monitor (HRM) sensor to obtain time-series heart rate monitor values of the subject. Wexler discloses that input data for prediction(s) can include a time series of blood glucose values and heart rate data (Wexler: [0014], [0032], [0034], [0062], [0065-0068], [0078]).

Wexler also teaches wherein the task data sets comprise time series data, with a value for one time being an input value, and a value for a subsequent time being an output value. Wexler discloses that prediction(s) can include a time series of blood glucose values (Wexler: [0022], [0037], [0055], [0064-0067], [0073], [0078], [0115]).
(b) performing a meta-learning operation comprising: (1) processing at least a portion of the input values with a biology model that generates predicted output values comprising at least one blood glucose level corresponding to the input values, wherein the biology model comprises a neural network configured to process the time series data, such that a value for one time of the time series data is processed as an input value to the neural network and another value for a subsequent time of the time series data is produced as an output value of the neural network. Wexler discloses processing the input portion of the data sets, such as glucose data, with a glucose predictive model [biology model] to generate a predicted output value, e.g., predicting a blood glucose level in time series above 180 mg/dL indicating hyperglycemia, using machine learning models that include neural networks (Wexler: [Fig. 4], [0023]-[0026], [0037], [0049], [0086]).

(d) acquiring test subject data from the test subject by (i) using the CGM sensor to obtain time-series blood glucose measurements of the test subject, and (ii) using the HRM sensor to obtain time-series heart rate monitor values of the test subject, wherein the test subject data was not used for performing the meta-learning operation in (c), and wherein the test subject is not among the plurality of subjects in (a). Wexler discloses patient-specific model training data from a single patient [test subject data], where glucose data from the patient is obtained and processed using CGM and heart rate sensor device(s) acquiring the data in time series (e.g., data taken within minutes, hours, etc.), and where the patient-specific model is a learning model trained on data of the particular patient [not among the plurality of subjects] for which the prediction is to be made [the test subject data was not used for performing the learning operation] (Wexler: [0020], [0033], [0037], [0077-0078], [0087-0089], [0115]).

(e) training a personalized subject prediction model to predict at least one blood glucose level for the test subject, at least in part by re-training the meta-learned model with the test subject data, the test subject data set comprising input values and output values and being different from each of the task data sets. Wexler discloses ML algorithms trained using training data that includes blood glucose data, physical activity, personal data, etc., implemented in a patient-specific personalized model [subject prediction function] to predict a blood glucose state, where the patient-specific training data includes a plurality of blood glucose episodes that are correlated and/or annotated with event data (e.g., insulin intake events, food intake events, physical activity events, etc.), and where the model is updated at different frequencies to train/refine [re-train] the ML models using the updated data for reanalyzing and updating the prediction model (Wexler: [0047], [0049-0050], [0065], [0075-0078], [0087-0089]).

(f) predicting a blood glucose level for the test subject with the personalized subject prediction model. Wexler discloses that the patient-specific model can generate a prediction in time series of blood glucose values (Wexler: [0089]).
However, Wexler does not expressly disclose: meta-learning; generating meta-learning error values; training the meta-learning model by adjusting the meta-learned parameters of the model; configuring the function with the meta-learned parameters to form a meta-learned function; or re-training the meta-learned model.

Kraus teaches wherein the task data sets comprise time series data, with a value for one time being an input value, and a value for a subsequent time being an output value. Kraus discloses time steps in which the current input data at one time is transformed into output data for the respective [subsequent] time step to update the module state, where the new output at a next or subsequent time becomes the current state (Kraus: [0054], [0069-0075], [0079-0081]).

(2) generating meta-learning error values at least in part by comparing each predicted output value to the output value corresponding to the respective input value. Kraus discloses a meta-learning system comprising an inner function module adapted to compute on current input data [task data set comprising input values] and transform the input data into output data [output values], where the meta-learning system comprises an error computation module adapted to compute errors indicating mismatches between the computed output data and target values (Kraus: [0050], [0069]-[0070]).

(3) training the biology model based at least in part on the meta-learning error values, wherein the training comprises adjusting parameters for the neural network of the biology model. Kraus discloses that the computed errors are supplied to the state update module (SUM), which is adapted to update model parameters of the inner model function of the inner function computation module (IFCM) according to an updated state in response to the calculated error received from the error computation module (ECM), where the module is trained to minimize the errors (Kraus: [0052-0054], [0062], [0069], [0081-0082], [0096]).

...wherein the parameters are stored as meta-learned parameters after all task data sets have been processed by the meta-learning operation. Kraus discloses that the state update module (SUM) is learned to adjust the model parameters of the model function, using the data as training data for a following model function computation processed by the function computation module (Kraus: [0054], [0064], [0069], [0086]).

(c) training a meta-data model, at least in part by configuring the neural network of the biology model with the meta-learned parameters. Kraus discloses choosing parameters by a trained optimizer to form function parameters (Kraus: [0055], [0070], [0096]).

...at least in part by re-training the meta-learned model. Kraus discloses subsequent training phase(s) of the meta-learning system (Kraus: [0054], [0069], [0089]).

...wherein the re-training comprises adjusting meta-parameters of the meta-learned model to achieve a meta-error target of the meta-learned model. Kraus discloses that in a subsequent training phase following the learning phase, the state update module (SUM) is learned to adjust the model parameters of the model function, and the module is trained to minimize the error calculated and received from the error computation module (ECM) [achieve meta-error target] (Kraus: [0052-0054], [0062], [0069], [0081-0083], [0096]).

Wexler discloses a predictive model trained to predict blood glucose, which affects the balance of biological features based on changes in blood glucose levels, using inputs into a model to minimize an error value [0078] and updating the model to predict outputs. Kraus discloses a meta-learning system with a biological system feeding inputs to the meta-learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Wexler to incorporate the meta-learning and the configuring of models using meta-learned parameters, as taught by Kraus, which helps minimize overall prediction error (Kraus: [0003]).

Regarding Claim 3 (Canceled).
Regarding Claim 4 (Canceled).

Regarding Claim 5 (Previously Presented), the combination of Wexler and Kraus teaches the method of claim 2, wherein the task data sets comprise food consumption events. Wexler discloses that input data for prediction(s) can include meal data values (Wexler: [0065]-[0066], [0068]).

Regarding Claim 6 (Canceled).

Regarding Claim 7 (Previously Presented), the combination of Wexler and Kraus teaches the method of claim 2, wherein the task data sets comprise data for different populations. Wexler discloses that input data for prediction(s) can include data from a plurality of different patients (Wexler: [0065], [0071]).

Regarding Claim 8 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 2, wherein the test subject data set comprises time series data, with a value for one time being an input value, and a value for a subsequent time being an output value corresponding to the input value. Wexler discloses that input data for prediction(s) can include values obtained for time periods preceding the prediction time period (Wexler: [0066], [0069]).

Regarding Claim 9 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 2, further comprising predicting at least one blood glucose level for the test subject with the personalized subject prediction model. Wexler discloses that input data for prediction(s) can include blood glucose level values (Wexler: [0065]-[0067], [0073]).

Regarding Claim 10 (Previously Presented), Wexler teaches a method whose claim limitations are analogous to the limitations in Claim 2. As such, claim 10 is rejected for substantially the same reasons given for claim 2, which are incorporated herein.

Regarding Claims 11 and 14, the claim limitations are analogous to the limitations in Claims 5 and 7. As such, claims 11 and 14 are rejected for substantially the same reasons given for claims 5 and 7, which are incorporated herein.
Regarding Claim 12 (Previously Presented), the combination of Wexler and Kraus teaches the method of claim 10, wherein the task data sets comprise physical activities. Wexler discloses that input data for prediction(s) can include exercise data (Wexler: [0068]).

Regarding Claim 15 (Previously Presented), the combination of Wexler and Kraus teaches the method of claim 10, wherein the predicted behavior comprises a plurality of behaviors. Wexler discloses predicting behavioral data such as sleep, medication, exercise, and food intake (Wexler: [0021], [0036], [0042], [0057]).

Regarding Claim 16 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 10, wherein the predicted behavior comprises at least one food consumption event. Wexler discloses predicting behavioral data such as sleep, medication, exercise, and food intake (Wexler: [0021], [0036], [0042], [0057]).

Regarding Claim 17 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 10, wherein the predicted behavior comprises at least one physical activity. Wexler discloses predicting behavioral data such as sleep, medication, exercise, and food intake (Wexler: [0021], [0036], [0042], [0057]).

Regarding Claim 27 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 2, wherein the neural network comprises a long short-term memory network (LSTM). Wexler discloses that the machine learning models for forecasting a patient state include a long short-term memory network (LSTM) (Wexler: [0049], [0080]).

Regarding Claim 28 (Currently Amended), the combination of Wexler and Kraus teaches the method of claim 10, wherein the neural network comprises a long short-term memory network (LSTM). Wexler discloses the same (Wexler: [0049], [0080]).
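The time-series limitation recited throughout these claims (a value at one time serving as an input value, the value at a subsequent time serving as the output value) is a standard supervised framing of sequence data. A minimal sketch follows; the function name and the `lag` parameter are illustrative assumptions, not terms from the record:

```python
import numpy as np

def make_supervised_pairs(series, lag=1):
    """Split a 1-D time series into (input, output) pairs: the value at
    time t is an input value, and the value at time t + lag is the
    corresponding output value. Illustrative helper, not from the record."""
    series = np.asarray(series, dtype=float)
    return series[:-lag], series[lag:]

# Made-up CGM readings (mg/dL), purely for illustration.
glucose = [110, 118, 131, 144, 150, 142]
X, y = make_supervised_pairs(glucose)
# X[0] is the value at one time (110.0); y[0] is the value at the
# subsequent time (118.0).
```

An LSTM-based predictor, as recited in claims 27-28, would consume windows of such pairs rather than single values, but the input/output correspondence is the same.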
Response to Amendment/Argument

Applicant's arguments filed 09/09/2025 have been fully considered by the Examiner and are addressed as follows. In the remarks, Applicant argues in substance:

Applicant's arguments with respect to the 35 U.S.C. § 101 rejection on pages 9-13: In response to the Applicant's amendment and argument, the Examiner withdraws the § 101 rejections. The claims as amended recite steps for training a model via meta-learning to generate a prediction model for a subject's behavior. The claims are directed to significantly more, where the training of the model is based on analyzing the meta-error to minimize the error and re-training the model by adjusting the meta-parameters.

Applicant's arguments with respect to the 35 U.S.C. § 103 rejection on pages 13-15: On page 14 of the remarks, Applicant argued, "Regarding claim 2, Wexler and Kraus fail to disclose at least the following combination of limitations: ... value for a subsequent time being an output value ... neural network ... a value for one time of the time series data is processed as an input value to the neural network and another value for a subsequent time of the time series data is produced as an output value of the neural network ... (d) acquiring test subject data from the test subject ... wherein the test subject data was not used for performing the meta-learning operation ... (e) training a personalized subject prediction model ... re-training the meta-learned model." The Examiner respectfully disagrees. Although the Applicant's arguments are directed to the claim limitations as amended, which include new features, the Examiner finds that the combination of Wexler and Kraus, under BRI, teaches the argued new features. For example, the value of a subsequent time as an output is described in Wexler, see at least [0064-0066], [0115], describing generating outputs of machine learning models based on inputs in a time series.
Also, Wexler discloses using machine learning models that include neural networks to perform the prediction in time series, see at least [0080]. Furthermore, the feature that the test subject data was not used for performing the learning operation is described in Wexler, see at least [0078-0080], where Wexler describes a population model for training the machine learning model and, in addition, a patient-specific model that uses data from a single patient (a test subject not part of the population) to perform a new prediction and train the machine learning model accordingly, creating a personalized prediction model. Finally, the feature of re-training the meta-learning model is described in Kraus, see at least [0054], [0062], [0069], [0081-0083], describing subsequent training phase(s) of the meta-learning system and adjusting the model parameters to minimize error.

On page 15 of the remarks, Applicant argued, "Wexler and Kraus fail to teach or disclose a two-part meta-learning procedure, which includes (1) preparing a meta-learned blood glucose prediction model based on a dataset acquired from a plurality of subjects, and then (2) applying the meta-learned blood glucose prediction model to construct a personalized blood glucose prediction model based on newly acquired data from a test subject." The Examiner respectfully disagrees. As described above, the combination of Wexler and Kraus teaches, under BRI, the claim features as amended. For example, Wexler teaches using a machine learning (ML) model trained on glucose data from a plurality of subjects to output predictions, and also describes a new prediction made using a patient-specific model, where the ML model uses new data from a single patient to prepare for the new prediction. Kraus, on the other hand, discloses a meta-learning process that takes input data, provides output, and re-trains the system to minimize error.
Therefore, in response to Applicant's arguments against the references individually, one cannot show nonobviousness by arguing references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Hence, the Examiner finds the Applicant's arguments unpersuasive.

Prior Art Cited but not Applied

The following documents were found relevant to the disclosure but not applied:

US 2021/0104173 "Pauley" discloses using health metrics to determine health recommendations.
US 2019/0379589 "Ryan" discloses detecting patterns in time-series data using a meta-learning architecture.
US 2022/0039756 "Mikhno" discloses a supervised machine learning model that adjusts parameters to generate an optimized personal model of a user, estimating blood glucose values by mapping the received data to a sequence of estimated blood glucose values for the user.

These references are relevant since they disclose evaluating biological state based on collected biological data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAAELDIN ELSHAER, whose telephone number is (571) 272-8284. The examiner can normally be reached M-Th 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MAMON OBEID, can be reached at Mamon.Obeid@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAAELDIN M. ELSHAER/
Primary Examiner, Art Unit 3687
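The two-part procedure disputed above (meta-learning across a plurality of subjects, then re-training the meta-learned model on a held-out test subject's data) can be illustrated with a minimal Reptile-style sketch. Everything below is an assumption for illustration only: the synthetic linear tasks, the helper names, and the 0.5 outer step size are stand-ins, not the applicant's or the cited references' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, X, y, lr=0.01, steps=50):
    """A few gradient-descent steps on squared error for a linear model."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# (a) Task data sets from a plurality of subjects (synthetic stand-ins).
tasks = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    w_true = rng.normal(size=3)
    tasks.append((X, X @ w_true + 0.1 * rng.normal(size=40)))

# (b)-(c) Meta-learning: a Reptile-style outer loop stores parameters
# that adapt quickly to any of the task data sets.
meta_w = np.zeros(3)
for X, y in tasks:
    adapted = sgd_fit(meta_w.copy(), X, y)
    meta_w += 0.5 * (adapted - meta_w)  # move meta-parameters toward the adapted ones

# (d)-(e) Personalize: re-train starting from the meta-learned
# parameters, using data from a test subject not among the tasks above.
X_test = rng.normal(size=(20, 3))
y_test = X_test @ rng.normal(size=3)
personal_w = sgd_fit(meta_w.copy(), X_test, y_test)

# (f) Predict for the test subject with the personalized model.
pred = X_test @ personal_w
```

The point of the sketch is the ordering: the test subject's data never enters the meta-learning loop, yet the personalized model starts from the meta-learned parameters rather than from scratch, which is the distinction the applicant argues the combination fails to teach.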

Prosecution Timeline

May 05, 2022: Application Filed
Jul 25, 2024: Non-Final Rejection — §103
Jan 30, 2025: Response Filed
Mar 05, 2025: Final Rejection — §103
Sep 09, 2025: Request for Continued Examination
Oct 02, 2025: Response after Non-Final Action
Oct 21, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592315: APPARATUS, SYSTEM, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR DISPLAYING TRANSPORT INDICATORS ON A PHYSIOLOGICAL MONITORING DEVICE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12537083: SYSTEMS AND METHODS FOR REGULATING PROVISION OF MESSAGES WITH CONTENT FROM DISPARATE SOURCES BASED ON RISK AND FEEDBACK DATA
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12525337: METHOD AND APPARATUS FOR SELECTING MEDICAL DATA FOR ANNOTATION
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12499999: SYSTEMS AND METHODS FOR TARGETED MEDICAL DOCUMENT REVIEW
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12424338: TRANSFER LEARNING TECHNIQUES FOR USING PREDICTIVE DIAGNOSIS MACHINE LEARNING MODELS TO GENERATE TELEHEALTH VISIT RECOMMENDATION SCORES
Granted Sep 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 67% (+31.3%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
