Prosecution Insights
Last updated: April 19, 2026
Application No. 18/420,890

INFORMATION PROCESSING DEVICE

Final Rejection: §101, §102, §103, Double Patenting
Filed: Jan 24, 2024
Examiner: RUIZ, JOSHUA DAMIAN
Art Unit: 3684
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 0m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 7 resolved; -52.0% vs TC avg). This examiner grants only 0% of cases.
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 41 currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 7 resolved cases

Office Action

§101 §102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

The status of the claims as of the response filed 10/2/2025 is as follows: Claims 1-12 are pending. None are canceled. The applicant has amended Claims 1-12, and the amendments have been considered below.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/25/25 are in accordance with the provisions of 37 CFR 1.97 and have been considered by the Examiner.

Response to Arguments

Provisional Double Patenting Rejections

Applicant's arguments, see page 9, filed October 2, 2025, with respect to claims 1-12 regarding the provisional obviousness-type nonstatutory double patenting rejections over Application Nos. 18/419,976 and 18/567,875 have been considered. Applicant elects to defer addressing the merits of these provisional rejections until the cited applications issue. The Examiner respectfully maintains the rejections because, while the applicant may elect to defer the filing of a terminal disclaimer, the ground of rejection remains applicable to the claims as amended.

Claim Rejections - 35 USC § 101

Applicant's arguments, see pages 9-11, filed October 2, 2025, with respect to amended Claims 1-12 have been fully considered and are not persuasive.

The applicant argues that Claim 1 recites an ordered combination that controls future data acquisition, thereby reducing collection burden while maintaining decision parity, which constitutes a "technological improvement to the computer-implemented workflow" rather than a mental process. The Examiner respectfully disagrees because the alleged improvement is to the abstract idea itself (the efficiency of the data collection process), not to the functioning of the computer as a machine. The Examiner maintains that the invention improves the process of information gathering (Spec., para. [0006]) rather than improving the technical capabilities of the computer (e.g., speed, storage, or network bandwidth). According to MPEP § 2106.05(a), an improvement in the abstract idea (such as a more efficient business practice or mathematical model) does not constitute a "technological improvement" to the computer's functionality. The computer in the claimed invention is merely used as a tool to execute the improved abstract process of statistical analysis and priority reorganization, which does not integrate the judicial exception into a practical application. Refer to Step 2A, Prong One, below, for further details.

The applicant argues that Claim 1 integrates the abstract idea into a practical application by reciting specific data structures (pairs (X', y')), a trained binary determination model, and explicit device actions (priority insertion, terminal instructions), which amount to "significantly more" than generic "compare and set" logic. The Examiner respectfully disagrees because the cited "specific" structures and actions are themselves part of the judicial exception. The recitation of data structures like pairs (X', y') and the "binary determination model" falls within the Mental Process grouping of abstract ideas (MPEP § 2106.04(a)(2)(I)), as they describe mathematical relationships and calculations. Furthermore, the step of "resetting priority" based on these calculations constitutes a Mental Process (evaluation and judgment) merely performed on a computer (MPEP § 2106.04(a)(2)(III)). The "explicit device actions" of outputting instructions to a user terminal amount to mere data gathering or post-solution activity, which does not provide an inventive concept. As noted in MPEP § 2106.05(d), the use of generic, well-understood, routine, and conventional hardware (processor, memory) to perform these abstract steps does not amount to "significantly more" than the abstract idea itself.
Claim Rejections - 35 USC § 102

Applicant's arguments, see pages 11-13, filed October 2, 2025, with respect to amended Claims 1-12 have been fully considered and are persuasive. The applicant argues that Huskey does not teach the specific ordered combination of steps in Claim 1, particularly: generating aggregated data pairs (X', y'); training a binary determination model; determining a required number of priority feature-value types; resetting priorities by inserting features based on the required number and order of priority; outputting acquisition-instruction data; and receiving user modifications.

Examiner Response: The Examiner withdraws the rejection under 35 U.S.C. § 102 as it pertains to Huskey. Upon reconsideration, the Examiner agrees that Huskey does not explicitly disclose the specific limitations related to the generation of aggregated data pairs (X', y'), the training of a binary determination model using such pairs, and the specific mechanism for resetting priority feature-value types by inserting features based on a determined required number and order of priority as now claimed. However, the claims are rejected under 35 U.S.C. § 103 over Afshar in view of Chang, Itu, and Allassonniere, as detailed below. While Huskey may not anticipate the claims, the claimed invention would have been obvious in view of the combined teachings of these references, which disclose all the limitations argued by the applicant as missing from Huskey.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applyingonline/eterminal-disclaimer.

Claims 1-12 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the independent claims of copending Application Nos. 18/567,875 and 18/419,976 (reference applications). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter claimed is not patentably distinct, as mapped below.

Claim 1.
An information processing device comprising:

at least one memory configured to store instructions; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1) and at least one processor configured to execute instructions to:

acquire a model that is generated for each elapsed period, and has learned by machine learning to output a measure for a human by receiving input of a plurality of types of feature values representing a condition of the human; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

generate, for each elapsed period, aggregated data including pairs (X', y') where X' identifies a varied subset of the types of feature values and y' indicates whether a second output obtained with the subset is identical to a first output, the first output being obtained when a predetermined number of types of feature values are input to the model of each elapsed period, and the second output being obtained when each varied set of some types of feature values in the predetermined number of types of feature values is input to the model of each elapsed period; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

train, for each elapsed period, a binary determination model using the aggregated data; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

determine, across multiple humans, a required number of priority feature-value types based on outputs of the binary determination model; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

reset priority feature-value types associated with a model of an earlier elapsed period by inserting, based on the required number and places in an order of priority associated with a model of a latter elapsed period, one or more types of feature values from the latter elapsed period at a position corresponding to the required number; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

output acquisition-instruction data to a user terminal to cause acquisition of the priority types of feature values for a subsequent elapsed period; (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

set, on a basis of the aggregated data, types of feature values to be associated with the model of each elapsed period. (Reference 18/567,875, see at least claim 1; Reference 18/419,976, see at least claim 1)

The claimed subject matter is also described in paragraph [0057] of the specifications of the reference applications.

Note: Claims 7 and 12 are rejected under the same analysis as Claim 1, as they are substantially similar.

Rejection of Dependent Claims 2-12

The dependent claims of the instant application are obvious variants of the claims in the Kosaka references. The dependent claims add further limitations to this obvious base system, and both applications share very similar specifications. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Subject Matter Eligibility Rejection - 35 U.S.C. § 101

Claims 1-12 are rejected under 35 U.S.C. § 101 because the claimed subject matter is directed to a judicial exception (an abstract idea) without reciting elements that integrate the exception into a practical application or provide an inventive concept amounting to significantly more than the exception itself.

Step 1: Statutory Categories Analysis

The claims are directed to statutory subject matter, encompassing the following statutory categories:

Machine (Claims 1-6): The language reciting "An information processing device comprising: at least one memory... and at least one processor" describes a concrete thing consisting of parts, aligning with the definition of a machine in MPEP § 2106.03.

Process (Claims 7-11): The language reciting "An information processing method comprising: acquiring... generating... setting... 
receiving" defines a series of acts or steps, aligning with the definition of a process in MPEP § 2106.03.

Manufacture (Claim 12): The language reciting "A non-transitory computer readable storage medium storing thereon a program" describes a tangible article given a new form through artificial efforts, aligning with the definition of a manufacture in MPEP § 2106.03.

Having confirmed the claims are directed to statutory subject matter, the analysis proceeds to Step 2A, Prong One.

Step 2A, Prong One: Judicial Exception Analysis

Step 2A, Prong One determines whether a claim "recites" a judicial exception, such as an abstract idea, by determining if the exception is "set forth or described" in the claim (MPEP § 2106.04). The whole invention is related to optimizing data collection efficiency by statistically analyzing which patient feature values (data types) are necessary to achieve a specific model output result and prioritizing those features for future collection (Spec., Abstract; para. [0054]). More specifically, claims 1-12 are directed to a judicial exception because they recite the abstract idea of analyzing data correlations to reorder data acquisition priorities, which covers mental processes of evaluation and mathematical calculations. Under MPEP § 2111, the claims describe gathering information, performing statistical analysis (training/determining), and organizing future tasks (resetting priority) based on that analysis.
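The recited data flow can be made concrete with a short illustrative sketch. This is not code from the application; the toy model, the feature names, and the threshold values are hypothetical, and the sketch only mirrors the claimed steps: compare the model's output on each varied feature subset X' against the first output on the full input, and record the agreement bit y' as an aggregated (X', y') pair.

```python
# Illustrative sketch only: a toy reconstruction of the claimed generation
# of aggregated (X', y') pairs. All names and thresholds are hypothetical.
from itertools import combinations

def make_aggregated_pairs(model, features, full_input):
    """For each proper subset X' of the feature types, record y' = 1 if the
    model's output on the subset equals its output on the full input."""
    first_output = model(full_input)
    pairs = []
    for k in range(1, len(features)):
        for subset in combinations(features, k):
            reduced = {f: full_input[f] for f in subset}
            y_prime = 1 if model(reduced) == first_output else 0
            pairs.append((frozenset(subset), y_prime))
    return pairs

# Toy "measure" model: flags a subject when available vitals exceed thresholds.
def toy_model(inputs):
    return int(inputs.get("hr", 0) > 100 or inputs.get("bp", 0) > 140)

pairs = make_aggregated_pairs(toy_model, ["hr", "bp", "temp"],
                              {"hr": 110, "bp": 120, "temp": 37})
# Subsets containing "hr" reproduce the first output here, so y' = 1 for them.
assert (frozenset({"hr"}), 1) in pairs
assert (frozenset({"temp"}), 0) in pairs
```

A "binary determination model" would then be fit on these (X', y') pairs to predict, for an unseen subset, whether it preserves the first output.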
Independent Claims Analysis

Claim 1

An information processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute instructions to: acquire a model that is generated for each elapsed period, and has learned by machine learning to output a measure for a human by receiving input of a plurality of types of feature values representing a condition of the human; generate, for each elapsed period, aggregated data including pairs (X', y') where X' identifies a varied subset of the types of feature values and y' indicates whether a second output obtained with the subset is identical to a first output...; train, for each elapsed period, a binary determination model using the aggregated data; determine, across multiple humans, a required number of priority feature-value types based on outputs of the binary determination model; reset priority feature-value types associated with a model of an earlier elapsed period by inserting... one or more types of feature values...; output acquisition-instruction data to a user terminal to cause acquisition of the priority types of feature values for a subsequent elapsed period; set, on a basis of the aggregated data, types of feature values...; and receive the set types of feature values as modified by a user using the user terminal device.

Note: The bolded portions represent additional elements evaluated in Prong Two and Step 2B; the non-bolded portions represent the abstract idea. The claim language, specification, and drawings are taken from the published US application.

Abstract Idea Classification Rationale

Under their broadest reasonable interpretation (MPEP § 2111), independent claims 1, 7, and 12 recite a process of acquiring data models, calculating data correlations, determining necessary data types, and reorganizing a priority list.
This process aligns with the Mental Process category because it covers concepts performed in the human mind, specifically "observation, evaluation, judgment, [and] opinion" (MPEP § 2106.04(a)(2)(III)). The claim recites "acquire a model," "generate... aggregated data," "train... a binary determination model," "determine... a required number," and "reset priority feature-value types." These limitations describe the collection of information, the mathematical evaluation of that information (training/determining), and the making of a judgment to reorganize priorities (resetting).

The claim also recites mathematical calculations, such as "aggregated data including pairs (X', y')" and the "binary determination model." However, these calculations support the broader mental process of evaluating patient data to make a decision on what to collect next. The final steps in Claim 1, "output acquisition-instruction data to a user terminal" and "receive the set types... as modified by a user," specifically address managing the workflow and interaction between the system and the human user/patient. The overall objective is to efficiently organize the process of information collection concerning a human's condition for "measure proposals" (Spec., para. [0005], [0031]), aligning with the sub-grouping of managing personal behavior or relationships.

The specification supports this characterization, stating: "Thereby, the information processing device 100 can be used for assistance of decision-making by a user, or the like" (Spec., Abstract), and describes the goal as to "enhance the efficiency of collection of information" (Spec., para. [0006]). This confirms the invention is a tool to assist human cognitive tasks (decision-making and organization).

Manual Replication Scenario (Human Equivalence)

A human counselor mentally reviews a client's full medical history (acquires a model) and compares subsets of data to gauge their consistency (generates aggregated data).
The counselor then determines the minimum set of essential health metrics needed for accurate assessment (trains the binary determination model and determines the required number). Based on this judgment, the counselor updates their priority checklist (resets priority) and sends instructions to the client detailing which specific health metrics to track for the next visit (outputs acquisition-instruction data). The counselor later receives the client's updated logs for review (receives modified feature values).

Dependent Claims Analysis

The dependent claims 2-11 are also directed to an abstract idea.

Claims 2-6 (Device) & Claims 8-11 (Method): Under BRI, these claims recite specific mathematical or logical rules for performing the abstract analysis, such as "collect[ing] the second outputs" (Claim 2), calculating an "average, a minimum, or a mode" (Claims 3/8), using "places in an order of priority" (Claims 4/9), or generating the "binary determination model" (Claims 6/11). These limitations merely provide specific mathematical formulas or mental rules for performing the evaluation and judgment steps of the independent claim, falling squarely within the Mathematical Concepts (calculations) and Mental Processes (evaluation/logic) categories.

Conclusion: Because the claims recite a judicial exception, the analysis proceeds to Step 2A, Prong Two.

Step 2A, Prong Two: Integration into a Practical Application

Step 2A, Prong Two evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This analysis determines whether the additional elements impose a meaningful limit on the abstract idea, going beyond merely stating the idea and instructing a computer to "apply it." The claims fail this step because the added hardware operates merely as a generic executor for the abstract logic.
Evaluation of Independent Claims 1, 7, and 12

Additional Elements: Generic Computer Components and Machine Learning

The additional elements of "at least one processor," "memory," "machine learning," and "user terminal" fail to integrate the abstract idea into a practical application. As explained in MPEP § 2106.05(f), mere instructions to implement an abstract idea on a computer do not render a claim eligible. Here, the processor and memory are invoked merely as tools to perform the "acquiring," "collecting," and "setting" steps of the abstract idea. Furthermore, per MPEP § 2106.05(a), utilizing "machine learning" to "output a measure" is simply using a mathematical tool to achieve the abstract result, not an improvement to the technical functioning of the computer. Finally, the "user terminal" serves only as a data gathering/outputting endpoint (MPEP § 2106.05(h)). Consequently, these elements do not impose meaningful limits on the judicial exception.

When viewed as a whole, the combination of these elements does not integrate the abstract idea. The claim describes a generic arrangement of standard computing hardware performing the abstract analysis (feature value optimization) via standard software tools (machine learning), which does not transform the abstract idea into an eligible application.

The dependent claims do not add any additional elements; they merely narrow the abstract idea addressed in Prong One. Because the claims are directed to an abstract idea without integrating it into a practical application, the analysis proceeds to Step 2B.

Step 2B: Inventive Concept Analysis

Step 2B asks whether the additional elements amount to an inventive concept that makes the claim significantly more than the abstract idea, evaluating, in particular, whether the elements are well-understood, routine, and conventional (WRC) (MPEP § 2106.05(d)).
The claims fail because the limited technical elements relied upon are WRC and provide no inventive contribution.

Generic Computer Components and Machine Learning (Claims 1, 7, and 12)

The additional elements fail to provide an inventive concept. Under MPEP § 2106.05(f), the "processor" and "memory" are recited at a high level of generality, which the specification confirms are "typical information processing device[s]" (Spec., para. [0060]). The use of "machine learning" is a mathematical tool used to "learn the relationship of the experience data" (Spec., para. [0048]), which does not improve the computer's functionality (MPEP § 2106.05(a)) but rather uses the computer's processing power for abstract calculations. The "user terminal" merely links the result to a user environment (MPEP § 2106.05(h)). These elements represent well-understood, routine, and conventional activities in the field of information processing, as evidenced by the specification's admission of using "typical" hardware and standard processing units such as GPUs and TPUs (Spec., paras. [0060], [0070]).

When viewed as a whole, the combination of hardware elements and abstract logic lacks an inventive concept. The collection of generic hardware elements performing routine computing functions in a conventional manner does not supply the "significantly more" required to transform the underlying abstract idea.

The dependent claims do not add any additional elements; they merely narrow the abstract idea addressed in Prong One. The claims are directed to an abstract idea and lack an inventive concept. Therefore, Claims 1-12 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 7, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Afshar (US 2014/0107461 A1) in view of Chang (US 2021/0374562 A1) and Itu (US 10,522,253).

Afshar teaches:

Claim 1. An information processing device comprising: at least one memory configured to store instructions; (Afshar, par. 0032, 0056, 0058, 0059, 0061) and at least one processor configured to execute instructions to: acquire a model that is generated for each elapsed period, and has learned by machine learning to output a measure for a human by receiving input of a plurality of types of feature values representing a condition of the human; (Afshar, par. 0026, 0032) Afshar describes the use of a battery of tests (input list) targeted to a particular patient's diagnostic process (output).
generate, for each elapsed period, aggregated data including pairs (X', y') where X' identifies a varied subset of the types of feature values and y' indicates whether a second output obtained with the subset is identical to a first output; (Afshar, par. 0026, 0031-0035, 0042, 0044, 0046) Afshar describes a battery of tests that includes a group of tests, which are modified before, during, or after performance of any of the tests; tests are also bound to epoch of care (EOC) data (elapsed period). The modification of the battery of tests is based on any of a variety of data, for example a partial input such as test 14a or test 14b (X'). Afshar also compares the results based on the inputs, for example test 14a versus test 14b, or a pre-surgical versus post-surgical test, and based on the output of the partial inputs determines whether the differences are significant and modifies the tests.

determine, across multiple humans, a required number of priority feature-value types based on outputs of the binary determination model; (Afshar, par. 0035, 0065) Afshar describes a population analysis ("cohort") to see if results differ, and identifies correlations or associations to recommend specific tests by determining (binary model) whether they are significant or not.

reset priority feature-value types associated with a model of an earlier elapsed period by inserting, based on the required number and places in an order of priority associated with a model of a latter elapsed period, one or more types of feature values from the latter elapsed period at a position corresponding to the required number; output acquisition-instruction data to a user terminal to cause acquisition of the priority types of feature values for a subsequent elapsed period; (Afshar, paragraphs 0016, 0020, 0022, 0032, 0034, 0036, 0046, 0060, 0065) Afshar describes a dynamic testing engine that modifies the specific tests (feature-value types) administered to a patient based on predictive models of care (EOC models) and cohort analysis.
The system is capable of resetting priority feature-value types by "adding a test to the battery" or modifying a test sequence based on data from "EOC cohorts" (which provide the logic for latter-elapsed-period relevance). This insertion of new tests into the current battery is based on the system's determination of significance (ranking/required number). Furthermore, Afshar explicitly discloses outputting acquisition-instruction data by sending "instructions" and "recommendations" for these specific tests to user terminals (touch screens, monitors), ensuring the necessary data (priority feature values) is acquired for the patient's care.

set, on a basis of the aggregated data, types of feature values to be associated with the model of each elapsed period; (Afshar, paragraphs 0008, 0031, 0050, 0060, 0064) Afshar describes a mechanism for configuring the specific medical tests (types of feature values) to be administered to a patient by analyzing correlations within a dataset. The system "adapts the battery of tests" (sets feature-value types) by selecting or modifying them based on "identified associations" between in-test and out-of-test data (aggregated data). This configuration is specifically performed to be "tailored to the patient's epoch of care," which describes associating the selected data types with the model for the relevant elapsed period.

Obviousness Rationale

Afshar teaches inputting a predetermined number of types of feature values (battery of tests) to a model of each elapsed period (epoch of care) to obtain a first output (diagnosis/assessment), describing that the system "may compare the results of the first test 14a to the results of the second test 14b" and "identify any differences... and determine whether any such differences are significant" (Afshar, para. [0034], [0035]).
However, Afshar fails to disclose generating aggregated data including pairs (X', y') where X' identifies a varied subset of the types of feature values and y' indicates whether a second output obtained with the subset is identical to the first output.

Chang teaches the missing element, describing a process where the system generates a "modified set of feature values" (varied subset X') by modifying original values, inputs this subset to produce a "modified ranking" (second output), and calculates a "rank-biased overlap" (indicator y') between the modified ranking and the "original ranking" (first output), where the overlap value indicates identity (1 is identical, 0 is disjoint) (Chang, para. [0046]-[0048]). While Chang's RBO is a continuous score, Chang uses this score to make a binary determination (keep or remove a feature); a calculated score that reaches a specific value (e.g., 1.0) signifies identity and therefore meets the limitation of "indicating whether" the outputs are identical. The data pair thus effectively serves as an indication of identity sufficient to drive the binary decision process.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Afshar with Chang because both references address the problem of optimizing predictive models and feature selection to improve system efficiency and reduce overhead (Afshar, para. [0008]; Chang, para. [0039]). Integrating Chang's specific methodology of determining feature impact via subset variation (X') and output comparison (y') into Afshar's adaptive testing system would provide a data-driven mechanism to fulfill Afshar's goal of selecting the most relevant tests for a patient's epoch of care. Although Afshar is in the medical field and Chang is in information retrieval, both deal with the computational efficiency of machine learning feature selection.
Therefore, a PHOSITA dealing with large-scale medical data (Afshar) would look to general computer science feature reduction techniques (Chang) to reduce processing overhead (MPEP 2144.03). A person of ordinary skill in the art would have been motivated to integrate the feature importance analysis using pairs (X', y') from Chang into the system of Afshar to achieve the benefit of determining exactly which features (tests) can be removed without affecting the output, as Chang teaches that this "identifies a first subset of the features to be removed... as the number of features with lowest importance scores" to "reduce... resource overhead" (Chang, abstract, para. [0017], [0019], [0039]). A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation; Chang provides a clear algorithmic definition for the feature variation and comparison process that is compatible with the standard predictive modeling structures described in Afshar. Afshar teaches the processing of data associated with timeframes, specifically disclosing that the system "identifies associations" using "aggregated data" such as sensor and epoch-of-care data to adapt medical tests (Afshar, para. [0008], [0065]). However, Afshar fails to disclose training, for each elapsed period, a binary determination model using the aggregated data. Chang teaches the missing element, describing a framework that trains a simplified version of the machine learning model to determine if feature subsets are sufficient, stating "the training apparatus trains a simplified version of the machine learning model using a second subset of the features that excludes the first subset of the features" (Chang, para. [0096]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Afshar with Chang because both references address the optimization of data-driven decision systems to improve efficiency (Afshar, para. [0008]; Chang, para. [0015]). The combination makes the full limitation obvious by applying Chang's method of training a model to determine feature necessity (binary determination) to the specific "epoch of care" (elapsed periods) and medical data defined in Afshar, thereby allowing the medical system to scientifically validate which tests are required for each specific time period. Although Afshar is in the medical field and Chang is in information retrieval, both deal with the computational efficiency of machine learning feature selection. Therefore, a PHOSITA dealing with large-scale medical data (Afshar) would look to general computer science feature reduction techniques (Chang) to reduce processing overhead (MPEP 2144.03). A person of ordinary skill in the art would have been motivated to integrate the training of a binary determination model from Chang into the system of Afshar to achieve the benefit of improved computational efficiency, as Chang teaches that this framework "reduces latency, processor usage, memory usage, garbage collection, and/or other resource overhead... as well as executing the machine learning model using the features" (para. [0015]). A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation. The integration involves applying standard machine learning feature selection techniques (Chang) to a specific dataset (Afshar), which is a well-understood application of computational analytics.
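The (X', y') pair-generation step that the rejection maps across Afshar and Chang can be sketched as follows. `generate_pairs` is a hypothetical helper (neither reference discloses code): it treats the model as a black-box callable, enumerates varied feature subsets, and labels each with whether the second output equals the first.

```python
import itertools

def generate_pairs(model, features, feature_names):
    """Build (X', y') training pairs for a binary determination model.

    X' is a 0/1 mask over feature_names marking which types are kept;
    y' is 1 when the model's output on the reduced subset equals its
    output on the full feature set.  Illustrative sketch only.
    """
    first_output = model(features)
    pairs = []
    for r in range(1, len(feature_names)):
        for kept in itertools.combinations(feature_names, r):
            subset = {name: features[name] for name in kept}
            second_output = model(subset)
            mask = tuple(int(name in kept) for name in feature_names)
            pairs.append((mask, int(second_output == first_output)))
    return pairs
```

Such pairs are what a per-elapsed-period binary determination model would be trained on under the claim as the rejection reads it.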
Afshar teaches the limitation of determining, across multiple humans [EOC cohorts], priority feature-value types [medical tests] based on outputs of the binary determination model [significance/correlation analysis], describing identifying "associations... between the first patient's EOC model and the EOC models of other patients [multiple humans]" (para. [0036]) and using this to "adapt the battery of tests... [by] adding a test... [or] removing a test" (para. [0046]). However, Afshar fails to explicitly disclose determining a specific required number of priority feature-value types to be maintained. Chang teaches the missing element, describing determining a required number of priority features, specifically "determining a number of features to be removed... to lower the resource overhead to a target resource overhead" (para. [0076]) and identifying "high-importance features [priority types]" based on "importance scores" (para. [0051]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Afshar with Chang because both references relate to optimizing the selection of input parameters (tests/features) to improve system efficiency. Applying Chang's framework for determining a specific number of features to retain allows Afshar's system to systematically balance clinical relevance with the "efficiency (quality and/or cost) of healthcare services" (Afshar, para. [0008]). Although Afshar is in the medical field and Chang is in information retrieval, both deal with the computational efficiency of machine learning feature selection. Therefore, a PHOSITA dealing with large-scale medical data (Afshar) would look to general computer science feature reduction techniques (Chang) to reduce processing overhead (MPEP 2144.03).
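Chang's cited step of "determining a number of features to be removed... to lower the resource overhead to a target resource overhead" amounts to a greedy selection by importance score. A minimal sketch, with hypothetical feature names and cost units:

```python
def features_to_remove(importance, cost, target_overhead):
    """Remove lowest-importance features until total overhead falls to
    the target; what remains is the set of priority feature-value types.
    Names and cost units are illustrative assumptions.
    """
    total = sum(cost.values())
    removed = []
    for name in sorted(importance, key=importance.get):  # least important first
        if total <= target_overhead:
            break
        removed.append(name)
        total -= cost[name]
    return removed
```

The count of retained features (`len(importance) - len(removed)`) would then play the role of the claimed "required number" of priority feature-value types.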
A person of ordinary skill in the art would have been motivated to integrate the determination of a required number of features from Chang into the system of Afshar to achieve the benefit of streamlined processing, as Chang teaches that this method "reduces latency, processor usage, memory usage... associated with retrieving and/or calculating features" (para. [0039]). A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation; Chang provides a clear algorithm for "calculating importance scores" and "identifying a first subset... to be removed" (para. [0017]-[0018]), which is readily applicable to the feature sets (medical tests) described in Afshar. Afshar teaches receiving feature values using a user terminal device, describing "technology that is used to... receive test input from patients" such as "touch screens, styluses, keyboards" and "tablet computers" (Afshar, para. [0020]). However, Afshar fails to explicitly disclose receiving the set types of feature values as modified by a user (specifically, where a user manually modifies values to configure the model inputs). Chang teaches the missing element, describing that "The missing value may be set by users responsible for developing or maintaining the machine learning model and/or individual features used by the machine learning model" (Chang, para. [0044]) and describing "sets of modified feature values... include multiple features selected by users" (Chang, para. [0045]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Afshar with Chang because both references address the shared problem of optimizing computational models for efficiency and reducing overhead in data processing systems (Afshar, para. [0008]; Chang, para. [0015]).
Integrating Chang's method of allowing users to modify feature values into Afshar's medical testing interface would allow administrators to manually tune inputs or set defaults to identify "high-importance features" (Chang, para. [0051]), thereby streamlining the "battery of tests" (Afshar, para. [0026]) administered via the user terminal. Although Afshar is in the medical field and Chang is in information retrieval, both deal with the computational efficiency of machine learning feature selection. Therefore, a PHOSITA dealing with large-scale medical data (Afshar) would look to general computer science feature reduction techniques (Chang) to reduce processing overhead (MPEP 2144.03). A person of ordinary skill in the art would have been motivated to integrate the user modification of feature values from Chang into the system of Afshar to achieve the benefit of reduced system load and latency, as Chang teaches that this process "reduces latency, processor usage, memory usage, garbage collection... and/or other types of resource overhead during retrieval and calculation of features" (Chang, para. [0015]). A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation, as both references rely on standard computer processor execution of data structures and user interfaces for data entry (Afshar, para. [0071]; Chang, para. [0095]). Afshar describes a system that acquires models generated for each elapsed period (EOC models) and compares a first patient's test outcomes to a cohort's test outcomes to determine whether differences are "significant" (para. [0026], [0035]), which reads on the claim limitation as interpreted above, because Afshar's system uses a comparative model across time and across a population to make decisions (i.e., adapt the test battery) based on aggregated data (Afshar, see at least para. [0026], [0032], [0035]).
However, Afshar does not describe generating aggregated data pairs of varied feature subsets and training a specific model to predict whether the reduced subset is sufficient. Chang describes a process of systematically generating aggregated data pairs (X', y'), where X' is a varied subset of feature values and y' indicates the similarity of the outputs (an RBO score where 1 is identical, 0 is disjoint) (para. [0046]-[0048]), which reads on the claim limitation as interpreted above, because Chang's system uses the calculated RBO score (which functions as the aggregated data pair's label) to drive a decision process of retaining or removing features (Chang, see at least para. [0017], [0046]-[0048]). Itu teaches the specific solution to this problem: training a model to predict the outcome (uncertainty/sensitivity) based on input features rather than calculating it from scratch. Itu describes training a "machine-learnt classifier" (Act 46), which is structurally the claimed "binary determination model," to predict statistical confidence (uncertainty/sensitivity) regarding a primary model's output (Act 26), which reads on the claim limitation "train... a binary determination model," because Itu explicitly teaches using a machine learning algorithm to train a model to classify and output a measure of confidence (e.g., standard deviation) based on input features (Itu, see at least Abstract, Fig. 1, Acts 24, 26, Fig. 6, Act 46). The combination of Afshar, Chang, and Itu makes the full limitation obvious because Chang's method for determining feature importance by repeatedly checking the performance of a varied subset (using the RBO calculation) is computationally expensive, especially in a real-time system like Afshar's (para. [0062], [0063]).
A PHOSITA would look to Itu's solution of replacing time-consuming statistical calculations with a fast, pre-trained machine-learnt predictor (the binary determination model) to improve computational efficiency and latency, which is the stated goal of all three references (Afshar, para. [0008]; Chang, para. [0015]; Itu, Abstract). A skilled artisan who read the Afshar application would combine Chang with Afshar because both address optimizing predictive models and feature selection for efficiency, and combining Chang's feature importance methodology into Afshar's system provides a systematic, data-driven way to select the "most relevant tests" (feature values) for a patient's EOC (Afshar, para. [0008]; Chang, para. [0039]). A skilled artisan would be motivated to combine Afshar and Chang with Itu, with a predictable, expected result, because integrating Itu's machine-learnt classifier allows the combined system to instantly predict, for any given proposed feature subset (Chang's X'), the confidence/uncertainty of the outcome, thereby avoiding the high resource overhead of performing the full ranking and RBO comparison taught by Chang, thus reducing latency and improving the real-time adaptation of the test battery (Itu, see at least Abstract: "Rather than relying on time consuming statistical analysis for each patient, a machine learnt classifier is trained to determine the uncertainty..."; see also Chang, para. [0015]). Afshar in combination with Chang and Itu teaches Claim 2: The information processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to collect the second outputs that are obtained when each set of a varied number and/or combination of some types of feature values in the predetermined number of types of feature values is input to the model of each elapsed period.
(Afshar, abstract, paragraphs [0031], [0046], [0064], [0065]) Afshar describes a system that dynamically adapts medical testing by modifying the selection, content, or method of tests—essentially varying the sets of input data—based on an "Epoch of Care" model, which represents a patient's treatment over a specific duration. Since Afshar discloses a "test engine 120" (processor) that adapts a "battery of tests" by identifying associations between "sensor data" (feature values) and "EOC model data" (model of each elapsed period), and further modifies the test content or adds/removes tests to produce "modified test data" (varied number/combination of feature values), the reference describes the limitation under BRI where the processor collects results from these varied inputs within a time-based model. Note: Claims 7 and 12 are rejected under the same analysis as claims 1-2 above, being substantially similar. Claims 3-6 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Afshar (US 2014/0107461 A1) in view of Chang (US 2021/0374562 A1), Itu (US 10,522,253), and Allassonniere (US 2021/0158962 A1). Afshar in combination with Chang and Itu teaches Claim 3: The information processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to: set, on a basis of the aggregated data, a required number of types to be associated with the model of each elapsed period; (Afshar, paragraphs [0029], [0035]) Afshar describes using data from a group of patients with similar characteristics to evaluate significant differences in outcomes. Since Afshar defines a "cohort" as a group of patients (aggregated data across multiple humans) and compares a specific patient's test outcomes to the "test outcomes 26 of patients in the EOC cohort," it describes the basis of using aggregated human data to evaluate model parameters.
and set the types on a basis of the required number, Afshar teaches the limitation of setting types on a basis of aggregated data, describing using data from a "cohort" defined as a "group of patients who have some set of characteristics that have been determined to be sufficiently similar" (para. [0029]) and comparing a specific patient's outcomes to the "test outcomes 26 of patients in the EOC cohort" (para. [0035]). However, Afshar fails to explicitly disclose determining the required number of types specifically as an average, a minimum, or a mode of the numbers of types of feature values across these multiple humans. Allassonniere teaches the missing element, describing a diagnostic system that determines a specific number of features (questions) to analyze by calculating an average, stating the system is trained "to minimize on average the remaining number of questions to lead to the diagnosis" (para. [0014]) and predicting the "average number of question to reach a terminal state" (para. [0055]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Afshar with Allassonniere because both references describe computer-based medical diagnostic systems that utilize data analysis to optimize the efficiency of the testing or diagnostic process (Afshar, para. [0008]; Allassonniere, para. [0053]). The combination makes the full limitation obvious because applying Allassonniere's mathematical determination of the average number of questions (feature values) required to reach a conclusion (terminal state/diagnosis) to Afshar's cohort (multiple humans) data provides a specific, statistical mechanism for establishing the "required number" of tests/types defined in Afshar's adaptive testing model.
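The claimed alternatives for the required number — an average, a minimum, or a mode across the cohort — are simple statistics over per-patient counts. A sketch over hypothetical cohort data:

```python
from statistics import mean, mode

def required_number_of_types(counts_per_patient, statistic="average"):
    """Required number of feature-value types across multiple humans,
    computed as an average, minimum, or mode of per-patient counts
    (the statistics the claim recites; the cohort data is hypothetical).
    """
    if statistic == "average":
        return round(mean(counts_per_patient))
    if statistic == "minimum":
        return min(counts_per_patient)
    if statistic == "mode":
        return mode(counts_per_patient)
    raise ValueError(f"unknown statistic: {statistic}")
```

For a cohort where patients needed 4, 5, 5, and 6 test types, the average and mode both yield 5 while the minimum yields 4, illustrating how the choice of statistic trades collection burden against coverage.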
A person of ordinary skill in the art would have been motivated to integrate the calculation of an average number of types from Allassonniere into the system of Afshar to achieve the benefit of reducing the number of necessary inquiries to the most efficient amount, as Allassonniere teaches that using this calculated average allows the system "to minimize on average the remaining number of questions to lead to the diagnosis" (para. [0014]), thereby reducing the burden on the user and on computing resources. A PHOSITA would have had a reasonable expectation of success in combining the references because the modification required only ordinary skill and routine experimentation, as both references utilize computational models to process patient data and diagnostic variables, making the mathematical application of an average function to Afshar's dataset a straightforward integration of algorithmic logic. Afshar in combination with Chang, Itu, and Allassonniere teaches Claim 4: The information processing device according to claim 3, wherein: types that are given places in an order of priority in advance are set in association with the model of each elapsed period, and the at least one processor is configured to execute the instructions to reset the types on a basis of the places in the order of priority and the required number of the types associated with the model of each elapsed period, including using places in the order of priority of a latter elapsed period to determine insertion into an earlier elapsed period. (Afshar, paragraphs [0027], [0031], [0046], [0064]) Afshar describes a system that defines initial testing sequences ("pathways") within a care model and dynamically modifies them by adding or removing tests based

Prosecution Timeline

Jan 24, 2024 — Application Filed
Jun 27, 2025 — Non-Final Rejection (§101, §102, §103)
Oct 02, 2025 — Response Filed
Dec 11, 2025 — Final Rejection (§101, §102, §103) (current)
