Prosecution Insights
Last updated: April 19, 2026
Application No. 18/729,134

STATISTICAL DATA ACQUISITION DEVICE, DEGREE OF CONTRIBUTION CALCULATION DEVICE, TREATMENT ACTION SEARCH DEVICE, TREATMENT OBJECT SEARCH DEVICE, STATISTICAL DATA ACQUISITION PROGRAM, DEGREE OF CONTRIBUTION CALCULATION PROGRAM, TREATMENT ACTION SEARCH PROGRAM, AND TREATMENT OBJECT SEARCH PROGRAM

Non-Final OA (§101, §103)
Filed: Jul 15, 2024
Examiner: LEE, ANDREW ELDRIDGE
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aizoth Inc.
OA Round: 1 (Non-Final)
Grant Probability: 18% (At Risk)
OA Rounds: 1-2
To Grant: 4y 7m
With Interview: 51%

Examiner Intelligence

Grants only 18% of cases
Career Allow Rate: 18% (23 granted / 130 resolved; -34.3% vs TC avg)

Strong +34% interview lift
Interview Lift: +33.5% (with vs. without interview, among resolved cases with interview)

Typical timeline
Avg Prosecution: 4y 7m (41 currently pending)

Career history
Total Applications: 171 (across all art units)
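The headline numbers above can be reproduced from the raw counts. A minimal Python sketch; note that the tech-center average is not shown directly in the panel, so the 52.0% figure below is backed out of the displayed -34.3% delta and should be treated as an inferred value:

```python
# Recompute the career allow rate from the counts shown in the panel above.
granted, resolved = 23, 130

allow_rate = round(granted / resolved * 100, 1)  # 17.7, displayed as ~18%
print(f"Career allow rate: {allow_rate}%")

# Inferred from the -34.3% delta the panel displays (not stated directly).
inferred_tc_average = 52.0
delta_vs_tc = round(allow_rate - inferred_tc_average, 1)
print(f"Delta vs TC average: {delta_vs_tc}%")
```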

Statute-Specific Performance

§101: 38.9% (-1.1% vs TC avg)
§103: 40.8% (+0.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Deltas compare against Tech Center average estimates • Based on career data from 130 resolved cases

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed on 15 July 2024.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

a statistical data acquisition unit in claims 1 and 11
a distribution adjusting unit in claim 3
a degree of contribution calculating unit in claims 8 and 12
a therapeutic action searching unit in claims 9 and 13
a therapy target searching unit in claims 10 and 14

The various units are being read in view of Applicant’s specification paragraphs [0031]-[0034] as software implemented by a processor (i.e., a generic off-the-shelf CPU). Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter. Claims 11-14 recite various “program causing a computer to function” limitations; each, as drafted, is a computer program (i.e., software) that, under its broadest reasonable interpretation, amounts to no more than an arrangement of signals (i.e., data), a signal per se. The claims do not recite any structure having a physical or tangible form to show that the computer program is anything other than a propagation of electrical signals (i.e., a product that does not have a physical or tangible form). As such, claims 11-14 do not fall within one of the four statutory categories of invention (i.e., a process, a machine, a manufacture, or a composition of matter). See MPEP 2106.03.

The Examiner suggests changing --program causing a computer to function-- to --program stored on a non-transitory computer readable medium (CRM) causing a computer to function--. The Examiner notes this will overcome the non-statutory rejection, as the broadest reasonable interpretation of such a computer program product would be a system, and further notes this will not raise written description issues. For examination purposes, claims 11-14 will be treated as systems for the further 101 analyses below.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1 and 8-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite method and system for calculation or searching for data using statistical analysis of data. The limitations of: Claim 1, which is representative of claim 11 […] input an input data set to a [… model …], the [… model …] to predict and output, using learning data including learning attribute information representing an attribute of a past therapy target learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, wherein the input data set includes a plurality of input data elements including input attribute information representing an attribute of a therapy target and input therapy information representing a content of a therapeutic action with respect to the therapy target, the input data elements of the input data set having mutually different input attribute information and the input data elements of the input data set having identical input therapy information, […] thereby acquiring statistical data of a prediction result of a therapeutic action indicated by the input therapy information. 
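Stripped of the claim phrasing, the claim 1 workflow is: hold one therapeutic action fixed, pair it with many mutually different target attributes, run every pair through the trained learner, and summarize the predictions as statistics. A minimal, hypothetical Python sketch; the learner here is a toy stand-in and all names are invented for illustration:

```python
from statistics import mean, stdev

def acquire_statistics(learner, attribute_rows, therapy_info):
    """Claim 1 sketch: attribute_rows are mutually different attribute
    vectors; therapy_info is the single therapy shared by every element."""
    predictions = [learner(attrs, therapy_info) for attrs in attribute_rows]
    return {"mean": mean(predictions), "stdev": stdev(predictions)}

# Toy stand-in for a trained learner: maps (attributes, therapy) to a score.
def toy_learner(attrs, therapy):
    return 0.5 * attrs[0] + sum(therapy)

stats = acquire_statistics(
    toy_learner,
    attribute_rows=[(2.0,), (4.0,), (6.0,)],  # differing target attributes
    therapy_info=(1.0, 3.0),                  # identical therapy for all
)
print(stats)  # {'mean': 6.0, 'stdev': 1.0}
```

The returned dictionary is the "statistical data of a prediction result" for that one therapeutic action across the varied targets.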
Claim 8, which is representative of claim 12 […] input, to a [… model …] to predict and output using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, first input data including first input attribute information representing an attribute of a therapy target and first input therapy information representing a content of a therapeutic action with respect to the therapy target to thereby acquire a first prediction result, and to input, to the [… model …], second input data including second input attribute information and second input therapy information having a plurality of data items of the first input attribute information and the first input therapy information, one of the plurality of data items having been changed, to thereby acquire a second prediction result, […]thereby calculating a degree of contribution of the data items regarding the output of the [… model …] based on a difference between the first prediction result and the second prediction result. 
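The degree-of-contribution calculation in claim 8 is, in effect, a one-at-a-time perturbation: predict once on the original input, change exactly one data item, predict again, and take the difference. A hypothetical sketch (the names and the toy learner are invented):

```python
def contribution_degrees(learner, data_items, perturbations):
    """Claim 8 sketch: degree of contribution of each data item, measured
    as the prediction change when only that item is altered."""
    first = learner(data_items)               # first prediction result
    degrees = {}
    for index, new_value in perturbations.items():
        second_input = list(data_items)
        second_input[index] = new_value       # change exactly one data item
        degrees[index] = learner(second_input) - first
    return degrees

# Toy learner: item 0 weighs twice as much as item 1.
toy_learner = lambda items: 2 * items[0] + items[1]

degrees = contribution_degrees(toy_learner, [1.0, 5.0], {0: 2.0, 1: 6.0})
print(degrees)  # {0: 2.0, 1: 1.0}
```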
Claim 9, which is representative of claim 13 […] search for a therapeutic action suitable for a predetermined therapy target, based on prediction results of a plurality of mutually different therapeutic actions with respect to the predetermined therapy target, the prediction results having been acquired by inputting a plurality of input data elements to a [… model …] to predict and output, using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, wherein each of the input data elements includes input attribute information representing an attribute of a therapy target and input therapy information representing a content of a therapeutic action with respect to the therapy target, the input data elements including the input therapy information elements that are mutually different. 
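Claim 9's search reduces to scoring each candidate therapeutic action against one fixed target and keeping the action with the best predicted result; claim 10, quoted next, is the mirror image, holding the action fixed and scanning candidate targets. A hypothetical sketch covering both (all names and the toy learner are invented):

```python
def search_therapeutic_action(learner, target_attrs, candidate_actions):
    """Claim 9 sketch: among mutually different therapeutic actions, pick
    the one whose predicted result for the fixed target is best."""
    return max(candidate_actions, key=lambda action: learner(target_attrs, action))

def search_therapy_target(learner, candidate_targets, action):
    """Claim 10 mirror image: fixed action, mutually different targets."""
    return max(candidate_targets, key=lambda attrs: learner(attrs, action))

# Toy learner: predicted benefit grows with dose until it exceeds the
# target's tolerance, then falls off.
def toy_learner(attrs, action):
    tolerance, dose = attrs[0], action[0]
    return dose if dose <= tolerance else 2 * tolerance - dose

best = search_therapeutic_action(toy_learner, target_attrs=(3.0,),
                                 candidate_actions=[(1.0,), (3.0,), (5.0,)])
print(best)  # (3.0,)
```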
Claim 10, which is representative of claim 14 […] search for a therapy target suitable for a predetermined therapeutic action, based on prediction results of the predetermined therapeutic action with respect to a plurality of mutually different therapy targets, the prediction results having been acquired by inputting a plurality of input data elements to a [… model …], to predict and output, using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of the therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, wherein each of the input data elements includes input attribute information representing an attribute of a therapy target and input therapy information representing a content of a therapeutic action with respect to the therapy target, the input data elements including the input attribute information elements that are mutually different.

Each of these claims, as drafted, is a system which, under its broadest reasonable interpretation, covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) via human interaction with generic computer components. That is, by a human user interacting with the various units, the claimed invention amounts to managing personal behavior or interaction between people. The Examiner notes, as stated in MPEP 2106.04(a)(2), “certain activity between a person and a computer… may fall within the ‘certain methods of organizing human activity’ grouping”.
For example, via human interaction with the various units, the claim encompasses collection of data, organization of the collected data into a model, use of the collected data to produce a result, and providing of the result to a human user for the user to use in making treatment determinations to organize their treatment workflow. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “method of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of the various units, which implement the abstract idea. The various units are recited at a high level of generality (i.e., general-purpose computers/computer components implementing generic computer functions; see Applicant’s Specification paragraphs [0031]-[0034]) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim also recites the additional elements of “a learner, the learner having been trained to predict and output” to implement the abstract idea. The “a learner, the learner having been trained to predict and output” steps are recited at a high level of generality (i.e., using and training, in a generic manner, a generic off-the-shelf model) and amount to generally linking the abstract idea to a particular technological environment. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the various units performing the noted steps amount to no more than mere instructions to apply the exception using generic hardware components. Mere instructions to apply an exception using a generic hardware component cannot provide an inventive concept (“significantly more”). Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “a learner, the learner having been trained to predict and output” were considered to generally link the abstract idea to a particular technological environment and/or constitute extra-solution activity. The “a learner, the learner having been trained to predict and output” limitation has been re-evaluated under the “significantly more” analysis and determined to amount to well-understood, routine, and conventional elements/functions. As described in Shrager (20200411199): see below, but at least paragraph [0013]; Hazard (20210012246): paragraph [0015]; and Mitsumori (20210118568): paragraph [0015]; training and use of a machine learning model is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.

Claims 2-7 are similarly rejected because they either further define the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims would be subject matter eligible. Claim 2 further describes use of a distribution to organize data; however, it does not recite any additional elements and therefore cannot provide a practical application and/or significantly more.
Claim 3 recites the additional element of a distribution adjusting unit; however, similar to the other various units already considered above and incorporated herein, the claim amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Claim 4 describes determination of an error, for organization of human activity; however, it does not recite any additional elements and therefore cannot provide a practical application and/or significantly more.

Claims 5-7 recite the additional elements of using virtual data to train a model and re-training a model; however, these are recited at a high level of generality (i.e., using an iterative process and data that has been re-labeled) and amount to generally linking the abstract idea to a particular technological environment. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claims are directed to an abstract idea. Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using virtual data to train a model and re-training a model were considered to generally link the abstract idea to a particular technological environment. These have been re-evaluated under the “significantly more” analysis and determined to amount to well-understood, routine, and conventional elements/functions. As described in Hazard (20210012246): paragraph [0022]; Itu (20190139641): paragraph [0023]; and Kasthurirarthne (20200312457): paragraphs [0010]-[0012]; using synthetic data and updating of models with new data is well-understood, routine, and conventional.
Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-2, 4, 7 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 20200411199 (hereafter “Shrager”), in view of U.S. Patent Pub. No. 20210118568 (hereafter “Mitsumori”). Regarding claim 1, Shrager teaches a statistical data acquisition apparatus (Shrager: Figures 1, 18-23, paragraph [0085], “One or more sets of training data may be generated and provided to a decision engine comprising one or more algorithms for making predictions… statistical methods and methods based on machine learning techniques. Statistical methods include penalized logistic regression, prediction analysis of microarrays (PAM), methods based on shrunken centroids, support vector machine analysis, and regularized linear discriminant analysis”) comprising: a statistical data acquisition unit configured to input an input data set to a learner (Shrager: Figures 1, 18-23, paragraph [0013], “The AI-based platform uses a combination of expert collective intelligence, AI, and machine learning to dynamically generate and test novel personalized treatment hypotheses”, paragraph [0044], “obtain patient case summaries and generate and/or validate treatment rationales associated with said summaries. Individual patients or their doctors are able to input case information through a clinical case capture tool presenting selectable clinical case templates. 
The clinical case template may have adaptive parameters that dynamically change according to previously entered parameters to capture the unique set of information for a particular case”, paragraph [0125], “the platforms, media, methods, and applications described herein include a digital processing device, a processor, or use of the same”), the learner having been trained to predict and output, using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target (Shrager: Figures 1, 18-23, paragraph [0046], “a treatment option can refer to a specific treatment (e.g., active agent and/or dosing regimen) or mode of treatment (e.g., chemotherapy, surgery)… Examples of targeted therapeutic agents include”, paragraphs [0081]-[0085], “generate models that predict one or more treatment options for a clinical case and/or a cohort comprising at least one clinical case. 
In some instances, machine learning methods are applied to the generation of such models… Such models can be generated by providing a machine learning algorithm with training data in which the expected output is known in advance, e.g., an output in which it is known that a clinical case having a specific data set (e.g., patient information and treatment information) achieved a particular outcome or a probability in which a particular outcome was achieved within a known group of clinical cases having specific data sets… The training data for the machine learning algorithms can be provided as follows. Clinical cases with known outcomes can be grouped into cohorts based on patient information and/or treatment information… the machine learning algorithm is provided with training data that includes the classification (e.g., treatment option, outcome, etc.), thus enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”, paragraph [0113], “treatment history and outcomes of a cohort of clinical cases for patients diagnosed with glioblastoma”), wherein the input data set includes a plurality of input data elements including input attribute information representing an attribute of a therapy target and input therapy information representing a content of a therapeutic action with respect to the therapy target (Shrager: Figures 1, 18-23, paragraphs [0044]-0046], “identify a similar patient cohort… a treatment option can refer to a specific treatment (e.g., active agent and/or dosing regimen) or mode of treatment (e.g., chemotherapy, surgery)”, paragraphs [0081]-[0085], “Clinical cases with known outcomes can be grouped into cohorts based on patient information and/or treatment information . 
For example, patient information can include patient age, gender, cancer type, cancer stage… Each feature space can comprise types of information about a case, such as biomarker expression or genetic mutations… the machine learning algorithm is provided with training data that includes the classification (e.g., treatment option, outcome, etc.), thus enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”. The Examiner notes patients with various differing attributes are grouped into cohorts based on similar therapeutic action applied for training of a model, which teaches what is required under the broadest reasonable interpretation), the input data elements of the input data set having […] input attribute information and the input data elements of the input data set having identical input therapy information (Shrager: paragraph [0009], “a pre-defined patient cohort”, paragraph [0044], “identify a similar patient cohort”, paragraph [0110], “the first cohort undergoing a specific treatment is experiencing outcomes that are statistically worse than a second cohort”. The Examiner notes cohorts using the same treatment is identical input therapy information under the broadest reasonable interpretation), the statistical data acquisition unit thereby acquiring statistical data of a prediction result of a therapeutic action indicated by the input therapy information (Shrager: Figures 1, 18-23, paragraph [0081], “predict one or more treatment options for a clinical case and/or a cohort comprising at least one clinical case”, paragraph [0085], “An algorithm may utilize a predictive model such as a neural network, a decision tree, a support vector machine, or other applicable model. 
Using the training data, an algorithm can form a classifier for classifying the case according to relevant features”, paragraph [0088], “calculate the posterior probabilities (e.g., of one or more treatment outcomes”, paragraph [0179], “the system or platform may provide its recommendations and/or calculated rankings to the treating physician”).

Shrager may not explicitly teach (underlined below for clarity): the input data elements of the input data set having mutually different input attribute information and the input data elements of the input data set having identical input therapy information.

Mitsumori teaches the input data elements of the input data set having mutually different input attribute information and the input data elements of the input data set having identical input therapy information (Mitsumori: paragraph [0049], “the first cohort undergoing a specific treatment is experiencing outcomes that are statistically worse than a second cohort having the same clinical profile but using a different treatment”).

It would have been prima facie obvious to one of ordinary skill in the art at the time the invention was made to combine the noted features of Mitsumori with the teaching of Shrager, since the combination of the two references is merely simple substitution of one known element for another producing a predictable result (KSR rationale B). Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of the mutually different features as taught by Mitsumori for the input features as taught by Shrager. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Regarding claim 2, Shrager and Mitsumori teach the limitations of claim 1, and further teach wherein the input attribute information has a value in accordance with a predetermined distribution (Shrager: paragraph [0087], “a posterior probability distribution”, paragraph [0101], “inclusion/exclusion criteria of a trial”; Mitsumori: paragraph [0030], “acquire medical data that is present in a distributed manner”. The Examiner notes attribute information can be in any distribution for inclusion/exclusion criteria of data (i.e., a predetermined distribution)). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 4, Shrager and Mitsumori teach the limitations of claim 1, and further teach the statistical data acquisition unit is configured to acquire an output error distribution that is a distribution of an output error of the trained learner, and, based on the output error distribution, correct a result of the therapeutic action indicated by the input therapy information (Shrager: paragraphs [0090]-[0094], “the errors from the initial classification of the first record are fed back into the network, and are used to modify the network's algorithm in an iterative process… an error may be calculated for the output nodes… Errors are then propagated back through the system”). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 7, Shrager and Mitsumori teach the limitations of claim 1, and further teach the learner is configured to be trained with a first learning data set, and is thereafter re-trained with a second learning data set that is different from the first learning data set (Shrager: paragraph [0044], “the updated knowledge base can be used to further train and update the one or more algorithms”, paragraph [0110], “The classifier can continuously update based on new data (e.g., administered treatment(s) and outcome or result of the treatment(s)) and re-evaluate the ongoing clinical case.
Thus, the decision engine may dynamically or continuously monitor a clinical case over time and recommend a change to the existing treatment options or a new treatment based upon the updated classifier when the ranking or prioritization of the treatment options changes”). The motivation to combine is the same as in claim 1, incorporated herein. REGARDING CLAIM(S) 11 Claim(s) 11 is/are analogous to Claim(s) 1, thus Claim(s) 11 is/are similarly analyzed and rejected in a manner consistent with the rejection of Claim(s) 1. Claim(s) 3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 20200411199 (hereafter “Shrager”) and U.S. Patent Pub. No. 20210118568 (hereafter “Mitsumori”) as applied to claim 1 above, and further in view of U.S. Patent Pub. No. 20210012246 (hereafter “Hazard”). Regarding claim 3, Shrager and Mitsumori teach the limitations of claim 1, but may not explicitly teach a distribution adjusting unit configured to adjust, based on a plurality of distributions for determining values of the input attribute information, the statistical data corresponding to output data of the learner that has been trained, the output data having been obtained in response to the input data elements including the input attribute information in accordance with the respective distributions, and target statistical data that is a target of a user, the predetermined distribution such that the statistical data output from the trained learner approaches the target statistical data. 
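The distribution-adjusting limitation of claim 3 just recited — adjust the predetermined input distribution so that the statistical data output from the trained learner approaches a user-chosen target — can be illustrated as a simple feedback loop. The learner, the Gaussian form of the distribution, the step size, and the target value below are all illustrative assumptions, not taken from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in trained learner (fixed after training): outcome as a function of one attribute.
def trained_learner(x: np.ndarray) -> np.ndarray:
    return 0.5 * x + 0.2

target_mean = 1.0  # target statistical data chosen by the user
mu = 0.0           # parameter of the "predetermined distribution" for the attribute

# Iteratively shift the sampling distribution so the statistic of the
# learner's outputs approaches the target.
for _ in range(200):
    attrs = rng.normal(loc=mu, scale=1.0, size=500)
    out_mean = trained_learner(attrs).mean()
    mu += 0.5 * (target_mean - out_mean)  # move the distribution toward the target

print(round(float(mu), 2))  # mu converges near 1.6, since 0.5 * 1.6 + 0.2 = 1.0
```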
Hazard teaches a distribution adjusting unit configured to adjust, based on a plurality of distributions for determining values of the input attribute information, the statistical data corresponding to output data of the learner that has been trained, the output data having been obtained in response to the input data elements including the input attribute information in accordance with the respective distributions, and target statistical data that is a target of a user, the predetermined distribution such that the statistical data output from the trained learner approaches the target statistical data (Hazard: Figures 1, 4, paragraph [0015], “use existing training data and, optionally, target surprisal to create synthetic data. In some embodiments, conditions may also be applied to the creation of the synthetic data in order to ensure that training data meeting specific conditions is created”, paragraph [0021], “the feature value may be sampled based on a uniform distribution, truncated normal, or any other bounded parametric or nonparametric distribution, between the feature bounds”, paragraph [0100], “the techniques may be used to create synthetic data that replicates users, devices, etc.”, paragraph [0248], “executed by processor 304”). One of ordinary skill in the art before the effective filing date would have found it obvious to include adjusting of attribute data to train a model using a predetermined distribution as taught by Hazard with the training and use of models for statistical determinations as taught by Shrager and Mitsumori with the motivation of “improving the quality of the model… improve the breadth of its observations” (Hazard: paragraph [0162]). Regarding claim 5, Shrager and Mitsumori teach the limitations of claim 1, but may not explicitly teach wherein the learner is configured to be trained with virtual learning data generated based on statistical information regarding a therapeutic action performed in the past. 
Hazard teaches wherein the learner is configured to be trained with virtual learning data generated based on statistical information regarding a therapeutic action performed in the past (Hazard: Figures 1, 4, paragraph [0015], “use existing training data and, optionally, target surprisal to create synthetic data. In some embodiments, conditions may also be applied to the creation of the synthetic data in order to ensure that training data meeting specific conditions is created”, paragraph [0047], “synthetic data may be requested to direct sampling via a reinforcement learning process”, paragraph [0100], “the techniques may be used to create synthetic data that replicates users, devices, etc.”). The motivation to combine is the same as in claim 3, incorporated herein. Regarding claim 6, Shrager, Mitsumori and Hazard teach the limitations of claim 5, and further teach the learner is configured to be trained with virtual learning data indicated by statistical information regarding a therapeutic action performed in the past, the virtual learning data having been generated to conform to a distribution regarding the therapeutic action (Hazard: Figures 1, 4, paragraph [0015], “use existing training data and, optionally, target surprisal to create synthetic data. In some embodiments, conditions may also be applied to the creation of the synthetic data in order to ensure that training data meeting specific conditions is created”, paragraph [0021], “the feature value may be sampled based on a uniform distribution, truncated normal, or any other bounded parametric or nonparametric distribution, between the feature bounds”, paragraph [0100], “the techniques may be used to create synthetic data that replicates users, devices, etc.”, paragraph [0248], “executed by processor 304”). The motivation to combine is the same as in claim 3, incorporated herein. Claim(s) 8-10 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 
20200411199 (hereafter “Shrager”) and U.S. Patent Pub. No. 20210012246 (hereafter “Hazard”). Regarding claim 8, Shrager teaches a [… statistical data …] calculating apparatus (Shrager: Figures 1, 18-23, paragraph [0085], “One or more sets of training data may be generated and provided to a decision engine comprising one or more algorithms for making predictions… statistical methods and methods based on machine learning techniques. Statistical methods include penalized logistic regression, prediction analysis of microarrays (PAM), methods based on shrunken centroids, support vector machine analysis, and regularized linear discriminant analysis”), comprising: [… a …] calculating unit configured to input, to a learner having been trained to predict and output using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, first input data including first input attribute information representing an attribute of a therapy target and first input therapy information representing a content of a therapeutic action with respect to the therapy target to thereby acquire a first prediction result (Shrager: Figures 1, 18-23, paragraph [0046], “a treatment option can refer to a specific treatment (e.g., active agent and/or dosing regimen) or mode of treatment (e.g., chemotherapy, surgery)… Examples of targeted therapeutic agents include”, paragraphs [0081]-[0085], “generate models that predict one or more treatment options for a clinical case and/or a 
cohort comprising at least one clinical case. In some instances, machine learning methods are applied to the generation of such models… Such models can be generated by providing a machine learning algorithm with training data in which the expected output is known in advance, e.g., an output in which it is known that a clinical case having a specific data set (e.g., patient information and treatment information) achieved a particular outcome or a probability in which a particular outcome was achieved within a known group of clinical cases having specific data sets… The training data for the machine learning algorithms can be provided as follows. Clinical cases with known outcomes can be grouped into cohorts based on patient information and/or treatment information… the machine learning algorithm is provided with training data that includes the classification (e.g., treatment option, outcome, etc.), thus enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”, paragraph [0113], “treatment history and outcomes of a cohort of clinical cases for patients diagnosed with glioblastoma”), and to input, to the learner, second input data including second input attribute information and second input therapy information having a plurality of data items of the first input attribute information and the first input therapy information, one of the plurality of data items having been changed, to thereby acquire a second prediction result, […] thereby calculating a [… statistic …] of the data items regarding the output of the learner based on a difference between the first prediction result and the second prediction result (Shrager: paragraph [0044], “the updated knowledge base can be used to further train and update the one or more algorithms”, paragraph [0084], “enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”, paragraphs [0090]-[0094], “the errors from the 
initial classification of the first record are fed back into the network, and are used to modify the network's algorithm in an iterative process… an error may be calculated for the output nodes… Errors are then propagated back through the system”, paragraph [0110], “The classifier can continuously update based on new data (e.g., administered treatment(s) and outcome or result of the treatment(s)) and re-evaluate the ongoing clinical case. Thus, the decision engine may dynamically or continuously monitor a clinical case over time and recommend a change to the existing treatment options or a new treatment based upon the updated classifier when the ranking or prioritization of the treatment options changes”). Shrager may not explicitly teach (underlined below for clarity): a degree of contribution calculating apparatus, comprising: a degree of contribution calculating unit configured to […], input, to the learner, second input data including second input attribute information and second input therapy information having a plurality of data items of the first input attribute information and the first input therapy information, one of the plurality of data items having been changed, to thereby acquire a second prediction result, the degree of contribution calculating unit thereby calculating a degree of contribution of the data items regarding the output of the learner based on a difference between the first prediction result and the second prediction result. 
Hazard teaches a degree of contribution calculating apparatus, comprising: a degree of contribution calculating unit configured to […], input, to the learner, second input data including second input attribute information and second input therapy information having a plurality of data items of the first input attribute information and the first input therapy information, one of the plurality of data items having been changed, to thereby acquire a second prediction result, the degree of contribution calculating unit thereby calculating a degree of contribution of the data items regarding the output of the learner based on a difference between the first prediction result and the second prediction result (Hazard: paragraph [0022], “the generated synthetic data may be compared against at least a portion of the existing training data, and a determination may be made… each synthetic data case generated using the techniques herein are compared to the existing training data”, paragraphs [0121]-[0125], “feature prediction contribution is determined as a conviction score. Various embodiments of determining feature prediction contribution are given herein. In some embodiments, feature prediction contribution can be used to flag what features are contributing most (or above a threshold amount) to a suggestion. Such information can be useful for either ensuring that certain features are not used for particular decision making and/or ensuring that certain features are used in particular decision making… define a feature information measure, such as familiarity conviction, such that a point's weighted distance contribution affects other points' distance contribution and compared”). 
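The degree-of-contribution mechanism recited in claim 8 — acquire a first prediction, change one of the plurality of data items, acquire a second prediction, and take the difference — is essentially one-at-a-time perturbation importance. A minimal sketch of that pattern, with a hypothetical linear learner standing in for the trained model (the weights and inputs are illustrative, not from the record):

```python
import numpy as np

# Stand-in trained learner over a combined input of attribute and therapy data items.
def trained_learner(items: np.ndarray) -> float:
    weights = np.array([0.6, -0.1, 0.05, 0.3])  # illustrative "learned" weights
    return float(items @ weights)

first_input = np.array([1.0, 2.0, -1.0, 1.0])   # attribute items plus a therapy item
first_prediction = trained_learner(first_input)

# Change one data item at a time and compare predictions: the difference is
# that item's degree of contribution to the learner's output.
contributions = []
for i in range(len(first_input)):
    second_input = first_input.copy()
    second_input[i] += 1.0                      # perturb a single data item
    second_prediction = trained_learner(second_input)
    contributions.append(abs(second_prediction - first_prediction))

print(contributions)  # the first item contributes most under these illustrative weights
```

For a linear stand-in the contribution of each item equals the magnitude of its weight, which makes the perturbation-difference idea easy to verify by inspection.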
One of ordinary skill in the art before the effective filing date would have found it obvious to include degree of contribution determination as taught by Hazard within the statistical determinations as taught by Shrager with the motivation of “improving the quality of the model… the system will improve the breadth of its observations” (Hazard: paragraph [0162]). Regarding claim 9, Shrager teaches a therapeutic action searching apparatus (Shrager: Figures 1, 18-23, paragraph [0007], “an artificial intelligence (AI) planning and search problem that requires the coordination of multiple agents—human and computer—to work together to efficiently search the voluminous and high dimensional space of cancer molecular subtypes and treatment combinations”, paragraph [0014], “The decision to try a specific therapy, alone or in combination, is typically… obtained by querying”, paragraph [0042], “efficiently search the high dimensional space of cancer molecular subtypes crossed with treatment combinations”) comprising: a therapeutic action searching unit configured to search for a therapeutic action suitable for a predetermined therapy target, based on prediction results of a plurality of […] therapeutic actions with respect to the predetermined therapy target, the prediction results having been acquired by inputting a plurality of input data elements to a learner (Shrager: Figures 1, 18-23, paragraph [0046], “a treatment option can refer to a specific treatment (e.g., active agent and/or dosing regimen) or mode of treatment (e.g., chemotherapy, surgery)… Examples of targeted therapeutic agents include”, paragraphs [0081]-[0085], “generate models that predict one or more treatment options for a clinical case and/or a cohort comprising at least one clinical case. 
In some instances, machine learning methods are applied to the generation of such models… Such models can be generated by providing a machine learning algorithm with training data in which the expected output is known in advance, e.g., an output in which it is known that a clinical case having a specific data set (e.g., patient information and treatment information) achieved a particular outcome or a probability in which a particular outcome was achieved within a known group of clinical cases having specific data sets… The training data for the machine learning algorithms can be provided as follows. Clinical cases with known outcomes can be grouped into cohorts based on patient information and/or treatment information… the machine learning algorithm is provided with training data that includes the classification (e.g., treatment option, outcome, etc.), thus enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”, paragraph [0113], “treatment history and outcomes of a cohort of clinical cases for patients diagnosed with glioblastoma”), the learner having been trained to predict and output, using learning data including learning attribute information representing an attribute of a past therapy target, learning therapy information representing a content of a therapeutic action with respect to the past therapy target, and a learning therapy result representing a result of the therapeutic action with respect to the past therapy target, from attribute information representing an attribute of a therapy target and therapy information representing a content of a therapeutic action with respect to the therapy target, a result of the therapeutic action with respect to the therapy target, wherein each of the input data elements includes input attribute information representing an attribute of a therapy target and input therapy information representing a content of a therapeutic action with respect to the therapy target, 
[…] (Shrager: Figures 1, 18-23, paragraphs [0044]-[0046], “identify a similar patient cohort… a treatment option can refer to a specific treatment (e.g., active agent and/or dosing regimen) or mode of treatment (e.g., chemotherapy, surgery)”, paragraphs [0081]-[0085], “Clinical cases with known outcomes can be grouped into cohorts based on patient information and/or treatment information. For example, patient information can include patient age, gender, cancer type, cancer stage… Each feature space can comprise types of information about a case, such as biomarker expression or genetic mutations… the machine learning algorithm is provided with training data that includes the classification (e.g., treatment option, outcome, etc.), thus enabling the algorithm to “learn” by comparing its output with the actual output to modify and improve the model”. The Examiner notes patients with various differing attributes are grouped into cohorts based on similar therapeutic action applied for training of a model, which teaches what is required under the broadest reasonable interpretation.
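The claim-9 therapeutic action search mapped above — build one input data element per candidate therapeutic action, query the trained learner with each, and keep the action with the best predicted result — can be sketched as follows. The learner, therapy identifiers, and effect values are illustrative assumptions, not content from Shrager or the application.

```python
import numpy as np

# Stand-in trained learner: predicted therapy result for (attributes, therapy id).
def trained_learner(attributes: np.ndarray, therapy_id: int) -> float:
    base = float(attributes.mean())
    therapy_effect = {0: 0.1, 1: 0.4, 2: 0.25}[therapy_id]  # illustrative effects
    return base + therapy_effect

patient_attributes = np.array([0.2, -0.1, 0.3])  # attribute information for one therapy target
candidate_therapies = [0, 1, 2]                  # therapy information to search over

# One input data element per candidate therapeutic action; keep the best prediction.
predictions = {t: trained_learner(patient_attributes, t) for t in candidate_therapies}
best_therapy = max(predictions, key=predictions.get)
print(best_therapy)  # therapy 1 has the highest predicted result here
```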

Prosecution Timeline

Jul 15, 2024
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542210
WEARABLE DEVICE AND COMPUTER ENABLED FEEDBACK FOR USER TASK ASSISTANCE
2y 5m to grant • Granted Feb 03, 2026
Patent 12154077
USER INTERFACE FOR DISPLAYING PATIENT HISTORICAL DATA
2y 5m to grant • Granted Nov 26, 2024
Patent 12040070
RADIOTHERAPY SYSTEM, DATA PROCESSING METHOD AND STORAGE MEDIUM
2y 5m to grant • Granted Jul 16, 2024
Patent 12027251
SYSTEMS AND METHODS FOR MANAGING LARGE MEDICAL IMAGE DATA
2y 5m to grant • Granted Jul 02, 2024
Patent 11942189
Drug Efficacy Prediction for Treatment of Genetic Disease
2y 5m to grant • Granted Mar 26, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
51%
With Interview (+33.5%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
