Prosecution Insights
Last updated: April 19, 2026
Application No. 18/567,875

INFORMATION PROCESSING DEVICE

Final Rejection: §101, §102, §103, §DP
Filed: Dec 07, 2023
Examiner: RUIZ, JOSHUA DAMIAN
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 7 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal +0% lift in resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline; 41 currently pending)
Total Applications: 48 (career history, across all art units)

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 7 resolved cases.

Office Action

§101 §102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

The status of the claims as of the response filed 10/03/2025 is as follows: Claims 1-16 are pending. Claims 1-16 are amended and have been considered below.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/22/2025 and 07/25/2025 are in accordance with the provisions of 37 CFR 1.97 and have been considered by the Examiner.

Response to Arguments

Double Patenting Rejections

Applicant's arguments, see page 1, filed 10/03/2025, with respect to Claims 1-16 have been fully considered and are persuasive. The Applicant asserts that the filing of a Terminal Disclaimer renders the provisional nonstatutory double patenting rejections moot without admitting to the merits of the rejection. The Examiner agrees because the Applicant filed a compliant Terminal Disclaimer in accordance with 37 CFR 1.321(c). Consequently, the provisional nonstatutory double patenting rejections are withdrawn.

Claim Rejections - 35 USC § 102

Applicant's arguments, see pages 14-15, filed 10/03/2025, with respect to amended Claim 1 have been fully considered and are persuasive. The rejection of Claim 1 under 35 U.S.C. § 102(a)(1) as anticipated by Nelson is withdrawn. Applicant argues that Nelson fails to disclose generating aggregated data pairs (X', y') where y' indicates if a subset output is identical to the full output, or training a binary determination model on such data. Examiner respectfully agrees that Nelson does not explicitly disclose the specific (X', y') data structure for identity verification; however, the rejection has been updated to 35 U.S.C. § 103 because Vairavan teaches this missing feature.
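For illustration, the (X', y') identity check discussed above can be sketched in a few lines of Python. This is a toy sketch only; the stand-in model, the feature values, and the function name `make_pair` are hypothetical and are not drawn from Nelson, Vairavan, or the application.

```python
import numpy as np

def make_pair(model, x_full, subset):
    """Build one (X', y') pair: X' identifies the feature-value subset;
    y' = 1 if the subset's output is identical to the full output."""
    x_sub = np.zeros_like(x_full)
    x_sub[subset] = x_full[subset]       # omitted feature types input as zero
    y_prime = int(model(x_sub) == model(x_full))
    return (subset, y_prime)

# Toy stand-in model: outputs 1 when the feature sum exceeds a threshold.
model = lambda x: int(x.sum() > 1.0)
x = np.array([0.9, 0.4, 0.05])
print(make_pair(model, x, [0, 1]))   # the two-feature subset reproduces the full output
```

A pair with y' = 1 records that the subset alone suffices; a pair with y' = 0 records that omitting those features changes the model's decision.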
The § 102 rejection is withdrawn, but the argument does not overcome the new § 103 rejection. As detailed in the New Rejection, Nelson teaches the core AI architecture, while Vairavan (WO2017077414A1) teaches optimizing such systems by "pruning" features using binary validation to ensure a subset yields identical performance to the full set (Vairavan, p. 7, ll. 14-17). It would be obvious to a skilled artisan to combine Vairavan's efficient data validation techniques (X', y') with Nelson's care matrix to reduce patient data collection burdens (KSR Int'l Co. v. Teleflex Inc.). Refer to the Claim 1 rejection under 35 U.S.C. § 103 for more details.

Applicant argues that Nelson does not teach determining a "required number" of priority types across multiple humans based on outputs of a binary determination model. Examiner respectfully disagrees because Nelson discloses determining a required set of features (goals/symptoms) by applying a "discriminator" (binary determination logic) that selects items exceeding a "goal weight threshold" or ranking (Nelson, Col. 6, lines 40-54; Col. 16, ll. 14-21 and 53-67). Under the Broadest Reasonable Interpretation (MPEP 2111), selecting a specific count of items (e.g., "top three") or all items meeting a binary threshold constitutes determining a "required number" based on model outputs. The limitations are thus met by Nelson's threshold-based selection logic, which functions as a binary filter to establish the necessary set of priority features.

Applicant argues that Nelson does not teach resetting earlier period priority types by inserting types from a latter period at a position corresponding to the required number. Examiner respectfully disagrees because Nelson explicitly teaches updating the "Care Matrix" (model) and training data for an initial evaluation (earlier period) based on the effectiveness of treatments determined in "later evaluations" (latter period) (Nelson, Col. 8, lines 39-58; Col. 3, lines 9-20).
This feedback loop effectively "resets" the priority by inserting effective features (hint phrases/goals) into the model for future initial evaluations. Regarding the "position corresponding to the required number," Nelson's logic of ranking and selecting the "top" entities (e.g., top 3) inserts these validated features into the qualifying position (the required number set) to ensure they are selected in future analyses (MPEP 2114).

Applicant argues that Nelson fails to disclose outputting acquisition-instruction data to a user terminal to cause acquisition of priority types for a subsequent period. Examiner respectfully disagrees because Nelson discloses an interface engine that outputs specific questions (acquisition instructions) to a user via a computing device to obtain a "description of the current state" (Nelson, Col. 11, lines 1-15; Fig. 1). These questions are dynamically selected based on the priority types (domains, symptoms) identified by the AI, thereby causing the acquisition of the specific prioritized feature values for the new period (subsequent elapsed period). The claim language "outputting acquisition-instruction data" reads on Nelson's prompting of users for specific patient information based on the updated model requirements.

35 U.S.C. § 101 Subject Matter Eligibility

Applicant's arguments, see pages 12-13, filed 10/03/2025, with respect to amended Claims 1-16, have been fully considered but are not persuasive. Applicant argues that claim 1 is eligible because the specific control flow of generating identity-encoded aggregated data (X', y'), training a binary determination model, and resetting priorities constitutes a specific technical architecture that provides a technological improvement in data-collection efficiency.
Examiner respectfully disagrees because the claimed "specific technical architecture" consists of mathematical algorithms (training models, calculating required numbers) and logical data organization executed on generic hardware (processor, memory), which fall squarely within the "Mental Process" exception (MPEP 2106.04(a)(2)). The alleged "technological improvement" in "data-collection efficiency" is an improvement to the abstract idea of selecting feature values itself, akin to a doctor optimizing a diagnosis checklist, rather than an improvement to the functioning of the computer as a tool (MPEP 2106.05(a); Electric Power Group). Furthermore, limiting the abstract idea to a specific technological environment (data collection for human conditions) or adding insignificant extra-solution activity (outputting instructions to a user terminal) does not amount to "significantly more" than the abstract idea (MPEP 2106.05(h); Alice Corp.). Refer to the 35 U.S.C. § 101 rejection below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. § 101 because the claimed subject matter is directed to a judicial exception (an abstract idea) without reciting elements that integrate the exception into a practical application or provide an inventive concept amounting to significantly more than the exception itself.

Step 1: Statutory Categories Analysis

The claims are directed to statutory subject matter, encompassing the following statutory categories: Machine (Claims 1-8): The language reciting "An information processing device comprising: at least one memory... and at least one processor" describes a concrete thing consisting of parts, aligning with the definition of a machine in MPEP § 2106.03.
Process (Claims 9-15): The language reciting "An information processing method comprising: acquiring... collecting... setting... outputting" defines a series of acts or steps, aligning with the definition of a process in MPEP § 2106.03. Manufacture (Claim 16): The language reciting "A non-transitory computer readable storage medium storing thereon a program" describes a tangible article given a new form through artificial efforts, aligning with the definition of a manufacture in MPEP § 2106.03. Having confirmed the claims are directed to statutory subject matter, the analysis proceeds to Step 2A, Prong One.

Step 2A, Prong One: Judicial Exception Analysis

Step 2A, Prong One determines whether the claims recite a judicial exception, such as an abstract idea. The invention as a whole relates to optimizing the selection of patient data features for a treatment model by comparing outputs from full data sets versus partial data sets and re-ordering feature priorities based on statistical agreement (see Spec., para. [0007]-[0008]). More specifically, Claims 1-16 are directed to a judicial exception because they recite the mental process of analyzing data outputs to determine feature importance and reorganizing a priority list based on those determinations. Under MPEP § 2111, the claims broadly cover a logical process of data evaluation and rule-based reordering that can be performed in the human mind.

Independent Claims Analysis

Claim 1.
An information processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute instructions to: acquire a model that is generated for each elapsed period, and has learned by machine learning to output a measure for a human by receiving input of a plurality of types of feature values representing a condition of the human; collect first output that is obtained when a predetermined number of types of feature values are input to the model of each elapsed period, and second output that is obtained when some types of feature values in the predetermined number of types of feature values are input to the model of each elapsed period; and set, on a basis of the first output and the second output, types to be associated with the model of each elapsed period by performing operations comprising: generating, for each elapsed period, aggregated data including a pair (X', y') where X' identifies a varied subset of the types of feature values and y' indicates whether a second output obtained with the subset is identical to the first output; training, for each elapsed period, a binary determination model using the aggregated data; determining, across multiple humans, a required number of priority types of feature values based on outputs of the binary determination model; resetting priority types of feature values associated with a model of an earlier elapsed period by inserting, based on the required number and places in an order of priority associated with a model of a latter elapsed period, one or more types of feature values from the latter elapsed period at a position corresponding to the required number; and outputting acquisition-instruction data to a user terminal to cause acquisition of the priority types of feature values for a subsequent elapsed period. Note: The bolded portions represent additional elements evaluated in Prong Two and Step 2B. The non-bolded portions represent the abstract idea. 
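To make the recited control flow concrete, the sequence of operations in the claim (aggregated pairs over varied subsets, a determination over those pairs, and a required number) can be sketched as follows. This is an illustrative Python sketch only, assuming a toy stand-in model; the trained binary determination model of the claim is collapsed into a direct lookup over the pairs for brevity, and all names (`full_model`, `aggregated_pairs`, `required_number`) are hypothetical.

```python
import itertools
import numpy as np

def full_model(x):
    """Hypothetical stand-in for the per-period machine-learned model."""
    return int(x.sum() > 1.0)

def aggregated_pairs(x_full, n_features):
    """Generate (X', y') over varied subsets: X' identifies the subset,
    y' records whether the subset output equals the full-feature output."""
    y_first = full_model(x_full)                        # first output: all features
    pairs = []
    for size in range(1, n_features):
        for subset in itertools.combinations(range(n_features), size):
            x_sub = np.zeros_like(x_full)
            x_sub[list(subset)] = x_full[list(subset)]  # omitted types input as zero
            pairs.append((subset, int(full_model(x_sub) == y_first)))
    return pairs

def required_number(pairs, n_features):
    """Smallest subset size that already reproduces the full output."""
    ok = [len(s) for s, y in pairs if y == 1]
    return min(ok) if ok else n_features

x = np.array([0.9, 0.4, 0.05])
print(required_number(aggregated_pairs(x, 3), 3))   # two feature types suffice here
```

In the claimed device this required number would then drive the resetting of priority types and the acquisition instructions sent to the user terminal.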
Claim Abstract Classification & Rationale

Under their Broadest Reasonable Interpretation (MPEP § 2111), independent claims 1, 9, and 16 recite an abstract idea: a process of acquiring models, comparing mathematical outputs, training secondary determination logic, and re-ordering a list of feature values based on the comparison results. This process aligns with the following abstract idea categories: Mental Process (MPEP § 2106.04(a)(2)(III)): The independent claims recite a sequence of cognitive steps including "collect[ing] first output... and second output," "generating... aggregated data," "training... a binary determination model," "determining... a required number," and "resetting priority types." These limitations describe observation, evaluation, and calculation concepts that can be performed in the human mind. Specifically, the "training" and "determining" steps involve calculating variables and statistical values (e.g., averages or modes) to reach a binary conclusion (0 or 1), mathematical operations a human can practically perform using pen and paper. Similarly, "resetting priority types" describes the logical task of reorganizing a list based on those calculated results. The specification confirms this non-technical nature, stating: "The feature value type setting unit 13 learns the relationship... determines the value (0 or 1)... checks the number... and thereby calculates a required number" (Spec., para. [0048]). This demonstrates the invention is a logical process of data evaluation and calculation rather than a technological function.

Manual Replication Scenario (Human Equivalence)

The abstract nature of the claims is reinforced because the entire process is analogous to fundamental human activities: A doctor could look at a patient's full medical chart (features) and make a diagnosis (first output), then look at a partial chart and make a second diagnosis (second output).
The doctor could mentally note (generate aggregated data) whether the diagnoses matched (y' = 1 or 0). After reviewing many patients, the doctor could determine the minimum number of features needed (determining the required number) and rewrite their checklist for future exams to prioritize those features (resetting priority types/outputting instructions).

Dependent Claims Analysis

The dependent claims 2-15 are also directed to an abstract idea. Claims 2-8 and 10-15: Under BRI, these claims recite variations in data selection ("varied set"), mathematical comparisons ("identical to the first output"), and logical rules for organization ("order of priority," "intermediate position"). This describes mathematical relationships and mental processes of evaluation. Because the claims recite a judicial exception, the analysis proceeds to Step 2A, Prong Two.

Step 2A, Prong Two: Integration into a Practical Application

Step 2A, Prong Two determines whether the claim elements integrate the judicial exception into a practical application by imposing meaningful limits, rather than merely using a computer as a tool. The claims' additional elements do not satisfy Prong Two.

Evaluation of Independent Claims 1, 9, and 16 Additional Elements

The additional elements are generic computer components ("processor," "memory") used to execute mathematical and mental steps ("training," "determining"). Per MPEP § 2106.05(f), "mere instructions to implement an abstract idea on a computer" do not integrate the exception. The "user terminal" is mere data gathering/outputting (MPEP § 2106.05(h)). When viewed as a whole, the combination of these elements does not integrate the abstract idea. The claim describes a generic arrangement of hardware performing the abstract analysis and re-ordering of data. This generically implemented workflow does not transform the abstract idea into a specific eligible application but rather automates the mental task of feature selection.
Conclusion: Because the claims are directed to an abstract idea without integrating it into a practical application, the analysis proceeds to Step 2B.

Step 2B: Inventive Concept Analysis

Step 2B determines whether the additional elements, individually or in combination, provide an inventive concept that amounts to "significantly more" than the judicial exception. The elements here represent well-understood, routine, conventional activities or generic computer functions.

Evaluation of Independent Claims 1, 9, and 16 Additional Elements

The hardware elements (processor, memory, terminal) are admitted by the applicant to be "typical" (Spec., para. [0060]), failing the inventive concept requirement under MPEP § 2106.05(d). The output to a user terminal is mere post-solution activity (data output) per MPEP § 2106.05(g)/(h), which does not add significantly more. When viewed as a whole, the combination of additional elements is not enough. The claim amounts to running a new mathematical calculation (the binary model/priority sorting) on a computer. As established in Alice Corp., simply implementing an abstract idea on a generic computer does not transform it into patent-eligible subject matter. The claims are directed to an abstract idea and lack an inventive concept. Therefore, Claims 1-16 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6, 8-11, 13, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 11,355,239 B1) in view of Vairavan (WO 2017/077414 A1).

Claim 1

Nelson teaches: An information processing device comprising: at least one memory configured to store instructions; (Nelson, Column 5, lines 20-26) and at least one processor configured to execute instructions to: acquire a model that is generated for each elapsed period, and has learned by machine learning to output a measure for a human by receiving input of a plurality of types of feature values representing a condition of the human; (Nelson, abstract, Column 3, lines 53-60, Column 5, lines 25-30, Column 8, lines 39-51, Column 10, lines 32-67) Nelson discloses "training the AI ... to obtain a trained AI capable of generating a treatment profile" using "a description of the current state of the patient". The "current state" aligns with "elapsed period," representing temporally contextual patient data, and the system is updated through evaluations at different periods (initial, later, and final evaluations; daily profiles). The AI model uses "a set of input phrases" derived from the patient's condition description, which corresponds to the "feature values." The output is a "treatment profile," a measurable result used for medical decision-making.
collect first output that is obtained when a predetermined number of types of feature values are input to the model of each elapsed period, and second output that is obtained when some types of feature values in the predetermined number of types of feature values are input to the model of each elapsed period; (Nelson, abstract, Column 2, lines 1-35, Column 6, lines 15-25, Column 18, lines 1-10, Columns 13-31) Nelson's system generates a "treatment profile" from a "first set of input phrases" (full input) and separately generates outputs from "symptom AI nodes" using subsets of those phrases (partial inputs). and set, on a basis of the first output and the second output, types to be associated with the model of each elapsed period. (Nelson, Column 6, lines 40-50, Column 4, lines 30-40, Column 17, lines 45-55, Fig. 15, Column 18, lines 1-10) Nelson describes computing an "aggregated score" from model outputs (scores from AI nodes) and selecting treatment goals if they exceed a threshold. Each goal corresponds to specific input phrases (feature values). set, on a basis of the first output and the second output, types to be associated with the model of each elapsed period by performing operations comprising (Nelson, Col. 2, ll. 1-17, Col. 8, ll. 39-58) Nelson describes a configuration process where the parameters of the AI system are defined and updated based on a comparison of results generated at different temporal stages of patient care. Nelson describes a process where the "Care Matrix data structure" (model associations) is "adjusted" (set) based on "aggregated data" derived from comparing "initial evaluations" (first output) and "later evaluations" (second output) across a population.
Since Nelson explicitly links the definition of "training phrase associations" (types) to the "outcomes" determined over "initial" and "later" timeframes (elapsed periods), the prior art functionally anticipates the limitation of configuring the model based on multi-stage output analysis. generating, for each elapsed period, aggregated data including a pair (X′, y′) where X′ identifies a varied subset of the types of feature values and y′ indicates whether a second output obtained with the subset is identical to the first output; (Nelson, Col. 2, ll. 1-15, Col. 4, ll. 10-35, Col. 8, ll. 39-58) Nelson discloses generating training data structures that link specific subsets of input phrases (X', the varied subset) with associations or scores (y'). In the context of Nelson's "discriminator" and "selection" logic, the score or association strength serves as the indicator (y') of whether the subset is sufficient to identify the target goal (identical result). If the association strength leads to a score exceeding the threshold, the result is treated as valid (identical to the desired outcome). By defining these "training phrase associations" and "scores" which dictate selection or omission, Nelson generates the required paired data structure for the model's configuration. training, for each elapsed period, a binary determination model using the aggregated data; (Nelson, Fig. 10, Fig. 12, Col. 16, ll. 10-19 and 53-64, Col. 8, ll. 39-58, Col. 10, ll. 41-51) Nelson describes the use of AI nodes combined with a discriminator function that processes the aggregated data (scores and associations). This combination functions as a binary determination model because it takes the probabilistic output derived from the training data and forces a binary decision: "select" or "omit." By training the AI nodes with the phrase associations to produce scores that are subsequently subjected to this binary threshold logic, Nelson discloses training a system to make binary determinations based on the aggregated data.
determining, across multiple humans, a required number of priority types of feature values based on outputs of the binary determination model; (Nelson, Col. 9, lines 47-50; Col. 16, lines 14-19; Claim 14) Nelson describes an automated care system that aggregates data "across multiple patients" (across multiple humans) to train an AI. The system determines a "required number" of priority items (e.g., the "top three ranked" child-entities or those meeting a criterion) by applying a "discriminator" (binary determination model) that evaluates probabilities against a threshold (e.g., "p > 60%" or "exceed a goal weight threshold") to output a binary decision to select or omit specific feature values. resetting priority types of feature values associated with a model of an earlier elapsed period by inserting, based on the required number and places in an order of priority associated with a model of a latter elapsed period, one or more types of feature values from the latter elapsed period at a position corresponding to the required number; (Nelson, Col. 8, ll. 39-58, Col. 3, ll. 9-20, Col. 10, ll. 35-52, Col. 12; Col. 16, ll. 17-19) Nelson describes an automated care system that updates patient treatment by comparing an "initial evaluation" (model of an earlier elapsed period) with "later evaluations" (latter elapsed period). The system resets priority types (probability weights/ranks) of symptoms and goals by obtaining a "new description" (inserting feature values from the latter period) and utilizing a "discriminator" to select and place the most relevant entities into the "top three ranked" slots (position corresponding to the required number), thereby generating a "new therapy profile" that reflects the patient's current state. outputting acquisition-instruction data to a user terminal to cause acquisition of the priority types of feature values for a subsequent elapsed period. (Nelson, Col. 11, ll. 1-15; Col. 10, ll. 58-67; Col. 18, ll.
58-67) Nelson describes an interface engine that outputs specific questions (acquisition-instruction data) to a user via a computing device. These questions are not random; they are selected based on the patient's known domains, symptoms, and goals (priority types) derived from the AI's analysis. By posing these targeted questions to the user to obtain a "description of the current state" for a new day (subsequent elapsed period), Nelson discloses outputting instructions to cause the acquisition of the prioritized feature values.

Obviousness Rationale: Nelson teaches generating, for each elapsed period, aggregated data where "AI nodes" process "input phrases" (feature values) to produce scores that are evaluated by a "discriminator" to select or omit treatment goals (Col. 16, ll. 53-64; Col. 2, ll. 1-15). However, Nelson fails to disclose aggregated data including a pair (X′, y′) where ... y′ indicates whether a second output obtained with the subset is identical to the first output. Vairavan teaches this missing element, describing a "pruning" process where a subset of features is evaluated to see if it yields a "binary (yes/no) determination" that matches the performance of the full set, stating that "input features having the least impact on the accuracy... may be omitted" (Page 7, ll. 5-10) and confirming that the reduced subset offers "comparable prediction performance" (Page 17, ll. 10-15). It would have been obvious to combine Nelson with Vairavan because both references explicitly seek to optimize these systems to improve efficiency and reduce the burden of data collection. Nelson aims to effectively analyze "natural language" inputs (abstract; Col. 2, ln 55), while Vairavan explicitly addresses the problem where "obtaining other information may require specific tests... [or] be infrequently available" (Page 5, ll. 10-15) and seeks to provide predictions "with substantially less patient information" (Page 6, ln 5).
A POSITA would naturally look to Vairavan's data optimization techniques to improve the efficiency of Nelson's data-heavy AI training process. The combination makes obvious the generation of the claimed aggregated data (X', y') by applying Vairavan's feature validation logic to Nelson's system: creating data pairs where a subset of input phrases (X') is tagged with a binary indicator (y') confirming that the subset yields an identical treatment decision (output) to the full set of phrases, ensuring the "Care Matrix" is optimized. Applying Vairavan's feature validation logic to Nelson's system renders the claimed data pair (X', y') obvious. While Nelson selects runtime outputs via scores (Col. 6, ll. 10-15), Vairavan optimizes models by verifying if a "subset" of inputs yields comparable results (Page 16, ll. 20-25). A POSITA would employ this pruning to "reduce the number of required input features" (Page 15, ll. 5-10), generating X' as the tested "subset" and y' as the binary confirmation that the subset's output is functionally identical to the full model's (Page 17, ll. 10-15). This binary validation does not contradict Nelson's scoring but serves as a configuration step to determine necessary inputs. Consequently, using Vairavan's method to identify "input features... [to] be omitted" (Page 7, ll. 5-10) represents a known technique for improving similar devices (KSR), resulting in a more efficient "Care Matrix." A person of ordinary skill in the art would have been motivated to integrate the identical binary comparison (validation of subsets) from Vairavan into the system of Nelson to achieve the benefit of reducing the data collection burden on the patient while maintaining accuracy, as Vairavan teaches that this method allows the system "to provide a reasonably accurate prediction... with substantially less patient information than currently required" (Page 6, ll. 5-10).
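The pruning concept attributed to Vairavan above, dropping the features with the least impact while the subset's outputs stay identical to the full set's, can be sketched as a greedy loop. This is an illustrative sketch under stated assumptions, not an implementation of Vairavan's actual method; the toy model, data, and the name `prune_features` are hypothetical.

```python
import numpy as np

def prune_features(model, X, n_features, tol=0.0):
    """Greedy pruning sketch: repeatedly drop the feature whose omission
    changes the predictions least, while the reduced subset still yields
    outputs identical (within tol) to the full feature set."""
    keep = list(range(n_features))
    full_preds = np.array([model(x) for x in X])        # baseline: full set
    while len(keep) > 1:
        best_drop, best_agree = None, -1.0
        for f in keep:
            trial = [k for k in keep if k != f]
            X_trial = np.zeros_like(X)
            X_trial[:, trial] = X[:, trial]             # omitted features zeroed
            preds = np.array([model(x) for x in X_trial])
            agree = (preds == full_preds).mean()        # fraction of identical outputs
            if agree > best_agree:
                best_drop, best_agree = f, agree
        if best_agree < 1.0 - tol:                      # subset no longer comparable
            break
        keep.remove(best_drop)
    return keep

model = lambda x: int(x.sum() > 1.0)
X = np.array([[0.9, 0.4, 0.05], [0.2, 0.1, 0.05], [1.1, 0.3, 0.02]])
print(prune_features(model, X, 3))
```

On this toy data the third feature is dropped because every prediction is unchanged without it, which is exactly the (X', y' = 1) relationship the rejection relies on.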
Applying this known technique to the analogous patient care AI system of Nelson predictably improves it in the same manner to achieve optimized data collection and efficient model training.

Nelson in combination with Vairavan teaches Claim 2: The information processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to collect the second output that is obtained when each varied set of some types of feature values in the predetermined number of types of feature values is input to the model of each elapsed period. (Nelson, Column 12, lines 20-65, Column 3, lines 40-60, abstract, Column 2, lines 30-45, Column 14, lines 30-45, Column 13, lines 32-43) Nelson's "output" of selected "input phrases" and their associations with goals and plans for each "current state of the patient" precisely and functionally matches "output information indicating the types associated with the model of each elapsed period" under the broadest reasonable interpretation, and to include, for each varied set, in aggregated data, an indication as to whether the second output is identical to the first output. (Vairavan, Page 14, lines 23-26; Page 15, lines 5-10; Fig. 7)

Nelson in combination with Vairavan teaches Claim 3: The information processing device according to claim 2, wherein the at least one processor is configured to execute the instructions to collect the second output that is obtained when a varied number and/or combination of some types of feature values in the predetermined number of types of feature values is input to the model of each elapsed period. (Nelson, abstract, Column 17, lines 35-65, Column 12, lines 1-10, Column 11, lines 1-15, Column 8, lines 52-60) Nelson discloses the second output that is the result of the AI model's analysis of varied combinations of feature values during each elapsed period.
This output includes scored symptoms, proposed goals, and recommended therapies, which are dynamically updated to create a tailored therapy profile for the patient.

Nelson in combination with Vairavan teaches Claim 4: The information processing device according to claim 2, wherein the at least one processor is configured to execute the instructions to: collect aggregated data including an indication as to whether or not the first output that is obtained from a model corresponding to an elapsed period is identical to the second output that is obtained when each varied set of some types of feature values in the predetermined number of types of feature values is input to the same model corresponding to the elapsed period, and set, on a basis of the aggregated data, types to be associated with the model of each elapsed period. (Nelson, Column 5, lines 1-15, Figure 15, Column 3, lines 1-20) The Nelson prior art teaches obtaining descriptions, processing them through an AI trained with various input associations, aggregating the probabilistic scores (indication) derived from these inputs (which involves comparing outputs), and then generating a patient therapy profile (setting the relevant "types" of care, symptoms, goals, and therapies) based on these aggregated scores. For example, the score could be a number and the threshold a limit; if two identically scored inputs are both identified as below the threshold upon comparison, they are omitted and are therefore identical in relevance. and treat, for generation of the aggregated data, feature value types not included in a varied subset as having a zero input to the model. (Nelson, Col. 3, ll. 1-30) Nelson describes that a goal (feature value type) is not used if it does not exceed the threshold.
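The threshold-based select/omit logic the Examiner reads onto Nelson can be illustrated with a short sketch. The goal names, scores, and threshold are hypothetical values chosen for illustration, not data from Nelson.

```python
def select_goals(scores, threshold):
    """Binary threshold filter: select goals whose aggregated score
    exceeds the threshold; omit the rest (illustrative sketch)."""
    return {goal: ("select" if s > threshold else "omit")
            for goal, s in scores.items()}

print(select_goals({"goal_1": 0.82, "goal_2": 0.41, "goal_3": 0.77}, 0.6))
```

The filter forces each probabilistic score into a binary decision, which is the sense in which the rejection treats Nelson's discriminator as a "binary determination model."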
Nelson in combination with Vairavan teaches Claim 6: The information processing device according to claim 1, wherein types that are given places in an order of priority in advance are set in association with the model of each elapsed period, and wherein the at least one processor is configured to execute the instructions to reset the types on a basis of the places in the order of priority and the required number of the types associated with the model of each elapsed period, including using places in the order of priority of a latter elapsed period to determine insertion into the earlier elapsed period. (Nelson, Column 12, lines 24-55; Column 16, lines 30-67; Column 3, lines 10-20; Column 11, lines 1-10; Column 18, lines 20-35; Col. 8, ll. 42-57; Col. 8, ll. 39-55; Col. 25, ll. 60-67; Col. 26, ll. 1-10.) Nelson's system "aggregates the scores" for different goals and then "select[s] the first goal and the third goal when the aggregated score and the fourth score exceed a goal weight threshold," "omitting the second goal when the second score does not exceed the threshold." This selecting and omitting process effectively "resets the types" (goals and therapies) by picking only the most relevant ones (those with high scores that satisfy the "required number" set by the threshold) from their "order of priority" in the matrix. This happens for "each elapsed period," as a patient's plan can "differ from day to day." Nelson describes a feedback process in which the system evaluates "outcomes" by comparing an "initial evaluation" (earlier elapsed period) with "later evaluations" (latter elapsed period). If a specific therapy or goal proves effective in the latter elapsed period (indicating a high place in the order of priority or effectiveness), the system uses this data to "adjust" the "Care Matrix" (model).
This adjustment involves updating the training data (e.g., adding "hint phrases") to ensure that the effective type is identified in future iterations of the earlier-elapsed-period analysis. Therefore, Nelson discloses using the validated priority/effectiveness found in the latter elapsed period to determine the modification (insertion) of data associations into the model configuration used for the earlier elapsed period.

Nelson in combination with Vairavan teaches Claim 8: The information processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to: generate, for each elapsed period, a binary determination model that receives as input an identifier of a varied subset of the types of feature values and outputs an indication as to whether the first output and the second output are identical. (Nelson, Col. 18, ll. 1-16, 55-67; Col. 2, ll. 50-67; Col. 16, ll. 10-25, 50-67; Figs. 10, 11, 12, 15.) Nelson describes processing logic that evaluates subsets of data to render a definitive decision on their validity or relevance. Since Nelson discloses generating AI node instances for specific periods (elapsed periods) that receive input phrases (identifiers of varied subsets) and process them through a discriminator to output a decision to select or omit a goal based on whether a probability score exceeds a threshold (an indication of identity/validity), the prior art describes a binary determination model processing varied subsets. Nelson also teaches determining the required number based on outputs of the binary determination model. (Nelson, Column 4, lines 20-45; Column 6, lines 35-53; Column 11, lines 1-10; Col. 16, ll. 10-37; Col. 3, ll. 1-26; Col. 18, ll. 1-16, 55-67; Col. 28, ll. 35-47; claim 14; Fig. 15.) Nelson describes a patient care system that utilizes a "discriminator" (binary determination model) to evaluate analysis results against specific logic, such as a probability threshold of "p > 60%" or a "goal weight threshold." The system uses the binary outputs of this evaluation (e.g., "selecting" those that exceed the threshold versus "omitting" those that do not) to ascertain the specific set and count of items, such as the "top three ranked identified child-entities" (determining the required number), that are included in the final therapy profile.

Note: Claims 9, 10-11, 13, and 15-16 are rejected under the same analysis as claims 1-4, 6, and 8 because they are substantially similar.

Claims 5, 7, 12, and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over Nelson (US 11,355,239 B1) in view of Vairavan (WO 2017/077414 A1), and further in view of Jain (US 11,789,837).

Nelson in combination with Vairavan teaches Claim 5: The information processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to set, on a basis of the aggregated data, a required number of types to be associated with the model of each elapsed period, and set the types on a basis of the required number. (Nelson, Column 3, lines 1-25; Column 5, lines 1-25.) Nelson shows that an AI aggregates input scores to "select" goals when they "exceed a goal weight threshold," thereby determining a "required number" of types. Subsequently, a "treatment profile is generated" containing these goals and associated therapies, effectively "setting the types on a basis of the required number" for each "elapsed period." Nelson uses a score threshold on goals to decide which inputs to use.
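As an aside, the threshold-based select/omit logic the Examiner maps onto Nelson (a score exceeding a "goal weight threshold," e.g., p > 60%) reduces to a simple filter. The goal names and scores here are invented for illustration:

```python
def discriminate(scores, threshold=0.60):
    """Select goals whose aggregated score exceeds the goal weight
    threshold and omit the rest; the count of selections yields the
    required number."""
    selected = {goal for goal, p in scores.items() if p > threshold}
    required_number = len(selected)
    return selected, required_number

# Invented goal names and probability scores.
scores = {"mobility": 0.72, "pain": 0.55, "nutrition": 0.64}
selected, required_number = discriminate(scores)
```

The binary output per goal (selected vs. omitted) is what the Examiner characterizes as the discriminator's indication, and the selection count as the "required number."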
However, Nelson fails to explicitly disclose that the required number [is] an average, a minimum, or a mode of numbers of the feature value types for which the binary determination model outputs that the second output is identical to the first output across the multiple humans. Jain teaches this missing element, describing the use of statistical power to prune and optimize data inputs (Jain, Col. 244, ll. 55-67; Col. 245, ll. 1-11; Col. 21, ll. 54-67; Col. 267, ll. 1-17). Jain teaches determining "aggregate measures" such as "averages (e.g., mean, median, etc.)" to evaluate group performance (Jain, Col. 197, ll. 44-54). Crucially, Jain describes using these population statistics to prune inputs, stating that the system can "limit... monitoring... to only those [inputs] that will produce useful results" (Col. 21, ll. 44-67) by setting "minimums or other thresholds... based on general reference levels, such as... average... characteristics for a large population" (Jain, Col. 267, ll. 1-14). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Nelson with Jain because both references describe automated systems that analyze aggregated population data to optimize the selection of inputs (Nelson, Col. 18, ll. 30-40; Jain, Col. 197, ll. 44-54). A PHOSITA would have modified Nelson's threshold-based selection logic to calculate the "goal weight threshold" (the required number) as the average (or mode or minimum) of the feature counts observed across "multiple humans," as taught by Jain's method of using "aggregate measures" and "averages" to set sufficiency thresholds (Jain, Col. 197, ll. 44-54). This combination applies Jain's statistical logic to Nelson's system to ensure the "required number" of types is derived from the actual statistical power of the population data rather than from an arbitrary value.
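For illustration, the statistic the combination reads onto Claim 5 is just an aggregate over per-person counts. The counts below are invented; one count per human, each counting the feature value types for which the binary determination model reported identical outputs:

```python
from statistics import mean, mode

def required_number(counts, stat="average"):
    """Aggregate the per-human counts of feature value types for which
    the binary determination model reported that the second output is
    identical to the first output."""
    if stat == "average":
        return round(mean(counts))
    if stat == "minimum":
        return min(counts)
    if stat == "mode":
        return mode(counts)
    raise ValueError(f"unknown statistic: {stat}")

counts = [3, 4, 4, 5, 2]  # invented counts, one per human
```

Any of the three statistics the claim recites (average, minimum, mode) can be plugged in; which one is used is a design choice the claim leaves open.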
A person of ordinary skill in the art would have been motivated to integrate the average/mode calculation from Jain into the system of Nelson to achieve the benefit of improving system efficiency by pruning unnecessary inputs, as Jain teaches that utilizing statistical averages to determine sufficiency allows the system to "limit... monitoring... to only those... that will produce useful results," thereby "reducing computational burden" and optimizing the input list for the model (Jain, Col. 93, ll. 1-13). Furthermore, the proposed combination is obvious under the flexible approach mandated by KSR because it represents applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The system of Nelson was recognized as ready for improvement to address the limitation of accurately defining its selection thresholds for input pruning. Applying Jain's known technique of using statistical averages to set sufficiency thresholds to Nelson's goal selection system yields the predictable result of a refined list of inputs optimized based on population statistics.

Nelson in combination with Vairavan, and further in view of Jain, teaches Claim 7: The information processing device according to claim 6, wherein the at least one processor is configured to execute the instructions to reset, on a basis of the places in the order of priority and the required number set for the types associated with a model of an earlier elapsed period in temporally-consecutive elapsed periods, and the places in the order of priority and the required number set for the types associated with a model of a latter elapsed period, the types to be associated with the model of the earlier elapsed period. (Nelson, Column 8, lines 39-59; Column 11, lines 1-9; Column 4, lines 44-56; Column 10, lines 46-56; Col. 3, ll. 1-25.) Nelson's "care matrix data structure" provides a pre-defined "order of priority" for patient "types" like symptoms and goals.
The system "resets" these types for "an earlier elapsed period" by "adjusting based on this aggregated data to improve future performance." This "aggregated data" includes insights from "later evaluations," ensuring the AI's understanding of patient needs evolves across "temporally-consecutive elapsed periods" to refine "treatment profiles" that "may differ from day to day." Nelson teaches re-ranking and selecting goals based on aggregated scores (priority) to form an adjusted list (matrix), thereby placing them at list positions corresponding to their rank/score. However, Nelson fails to explicitly disclose the inserting being performed at an intermediate position between a first and a second priority feature value type corresponding to the required number. Jain teaches this missing element, describing a "Prioritization Module" that generates "Rankings" (e.g., "Group 1 (Rank 1)... Group 2 (Rank 2)... Group 3 (Rank 3)") based on calculated "Priority Scores," where an item with a second-highest score is placed at an intermediate position between a first (Rank 1) and a second (Rank 3) priority item (Col. 175, ll. 45-60; Fig. 19C). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Nelson with Jain because both references describe systems for optimizing patient monitoring and care through data analysis and prioritization, as Nelson teaches "selecting" treatment goals when an "aggregated score... exceed[s] a goal weight threshold" (Col. 6, ll. 40-54) and Jain teaches analyzing data to "identify which groups... should be prioritized for particular actions... and/or allocation of limited resources" (Col. 170, ll. 15-24). A PHOSITA would have looked to Jain to improve Nelson's "Care Matrix" update process by replacing binary threshold selection with Jain's ordered ranking, enabling the system to organize treatment goals by relative importance.
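As an illustrative aside, the ordered insertion read onto Jain's Rankings, placing a new goal between Rank 1 and Rank 3 according to its priority score, can be sketched with a sorted list. The goal names and scores are hypothetical:

```python
from bisect import insort

def insert_by_priority(ranked, score, goal):
    """Insert a goal into a list kept sorted by descending priority
    score; it lands at the intermediate position its score dictates."""
    insort(ranked, (-score, goal))  # negate so higher scores sort first
    return [g for _, g in ranked]

# Invented goals: Rank 1 (score 0.9) and what becomes Rank 3 (score 0.5).
ranked = [(-0.9, "goal_A"), (-0.5, "goal_C")]
order = insert_by_priority(ranked, 0.7, "goal_B")  # inserted as Rank 2
```

The inserted goal ends up between the higher- and lower-priority items, which is the "intermediate position" behavior the combination attributes to Jain's Fig. 19C ranking.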
This combination results in the "resetting" process involving calculating a new score for a goal and inserting it into the ordered list of goals at the position dictated by that score (e.g., inserting a goal as Rank 2, an intermediate position between Rank 1 and Rank 3), thereby satisfying the limitation. A person of ordinary skill in the art would have been motivated to integrate the ranking and insertion logic from Jain into the system of Nelson to achieve the benefit of efficient resource allocation, as Jain teaches that this prioritization allows the system to "identify which groups... should be prioritized for particular actions... and/or allocation of limited resources" (Col. 170, ll. 15-24). Furthermore, the proposed combination is obvious under the flexible approach mandated by KSR because it represents applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The patient care system of Nelson was recognized as ready for improvement to address the limitation of binary goal selection (select/omit). Applying the known technique of ordered ranking and insertion from Jain (as evidenced by Fig. 19C) to this known system yields the predictable result of a prioritized list of care goals in which items are inserted at positions corresponding to their priority scores. A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation, as sorting and ranking data items based on numerical scores is a well-understood data processing technique.

Note: Claims 12 and 14 are rejected under the same analysis as claims 5 and 7 because they are substantially similar.

Conclusion: Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA DAMIAN RUIZ whose telephone number is (571)272-0409. The examiner can normally be reached 0800-1800. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOSHUA DAMIAN RUIZ/Examiner, Art Unit 3684 /Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684

Prosecution Timeline

Dec 07, 2023
Application Filed
May 29, 2025
Non-Final Rejection — §101, §102, §103
Oct 03, 2025
Response Filed
Dec 11, 2025
Final Rejection — §101, §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
