Prosecution Insights
Last updated: April 19, 2026
Application No. 18/485,573

MATERNAL AND INFANT HEALTH INSIGHTS & COGNITIVE INTELLIGENCE (MIHIC) SYSTEM AND SCORE TO PREDICT THE RISK OF MATERNAL, FETAL, AND INFANT MORBIDITY AND MORTALITY

Final Rejection: §101, §103, §112, §DP
Filed: Oct 12, 2023
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Cognitivecare Inc.
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
Predicted OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% for resolved cases with interview
Avg Prosecution: 3y 7m (49 currently pending)
Total Applications: 221 across all art units

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)

Based on career data from 172 resolved cases; TC averages are estimates.

Office Action

§101 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

The status of the claims as of the response filed 12/4/2025 is as follows: Claims 1 and 5-20 are currently amended. Claims 2-4 are original. Claims 1-20 are currently pending in the application and have been considered below.

Response to Amendment

Double Patenting Rejection: The claims have been sufficiently amended such that they diverge in scope from the claims of US Patent 11854706 B2, and thus the corresponding double patenting rejections are withdrawn.

Rejection Under 35 USC 101: The claims have been amended, but the 35 USC 101 rejections for claims 1-20 are upheld.

Rejection Under 35 USC 103: The amendments made to the claims introduce limitations that were not fully addressed in the previous office action, and thus the corresponding 35 USC 103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.

Response to Arguments

Rejection Under 35 USC 101

On page 14 of the response filed 12/4/2025, Applicant argues that the machine learning model of the claims “is not generic; rather, it is expressly limited to one that has been trained by particular operations, including identifying the type of heterogeneous clinical data, segregating structured and unstructured data, preprocessing such data, and defining a neural network architecture comprising a hidden layer and a feed-back layer,” which “provide concrete detail regarding the technical pipeline by which the model is trained and thus constitute a meaningful limitation that improves the processing of heterogeneous medical data and training a machine learning model.” Applicant’s arguments are fully considered, but are not persuasive.
First, Examiner notes that this sequence of training steps is no longer positively recited by any of the claims; rather, it is recited in a “wherein” clause that describes how the models executed by the positively recited steps and functions of the invention have previously been trained (i.e., outside the scope of the positively claimed invention; see the “Claim Interpretation” section below). Further, even if these training steps were positively recited as Applicant asserts, they do not reflect an improvement in the processing of data for machine learning model training. The steps of acquiring patient data records, identifying and segregating the records by type into structured and unstructured data, preprocessing the data, and using the training data to define the architecture of and train a predictive model describe the basic process of fitting a predictive model to a set of relevant data, which fits into the “certain methods of organizing human activity” grouping of abstract ideas, because a human actor such as a clinician or researcher could perform such data collection and processing steps to come up with a fitted predictive model. The fact that the predictive model being trained/fitted is a machine learning model comprising a neural network with hidden and feedback layers is addressed as an additional element; in the instant case, this feature merely serves as instructions to “apply” the abstract idea of fitting a predictive model in a computerized environment as a high-level type of computerized model (e.g., a neural network).

On page 14, Applicant argues that “inference is performed by applying learned parameters of the pre-trained model to new maternal and fetal patient data,” which “ensures that predictions are tied to concrete model parameters and cannot be performed mentally.” Applicant’s arguments are fully considered, but are not persuasive.
Examiner notes that the claims have not been characterized as reciting a mental process, so ensuring that predictions cannot be performed mentally is not a relevant consideration. Examiner submits that analyzing new data with learned parameters of a model is still a procedure that could be performed by a human actor executing a mathematical model to make risk score predictions about new patient data, which also reflects mathematical concepts. The executed model being a machine learning / neural network model merely serves to automate the otherwise-abstract risk score calculation/prediction functions such that they are achieved via a computerized model, and does not provide integration into a practical application.

On page 14, Applicant argues that “the claim requires an exploratory data analysis module configured to compute incidence and prevalence across patient cohorts and to generate synthesized results showing variations in the behavior of health risk factors across the cohorts,” which “is not routine output but instead reflects a specific concrete module for generating insights.” Applicant’s arguments are fully considered, but are not persuasive. Use of statistical analysis techniques to calculate prevalence and incidence rates across patient cohorts so that risk factor behaviors can be compared fits into the mathematical concepts grouping of abstract ideas, as well as certain methods of organizing human activity, because a human actor such as a clinician could perform statistical calculations and analysis to compute and compare such results across patient cohorts. The fact that such analysis is performed with an unspecified “exploratory data analysis module” (which is presumed to be some type of computing component executing software instructions) merely digitizes and/or automates this otherwise-abstract analysis function such that it occurs digitally, and does not provide integration into a practical application.
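The incidence and prevalence computations at issue are standard epidemiological rates: prevalence is the proportion of a cohort affected at a point in time, while incidence is the proportion of the at-risk population newly affected over an observation period. As a neutral sketch of the kind of cohort comparison discussed above (hypothetical cohort names, field names, and data; not the applicant's module), such a computation amounts to:

```python
from dataclasses import dataclass

@dataclass
class Cohort:
    name: str
    population: int        # patients in the cohort
    existing_cases: int    # cases present at the start of the period
    new_cases: int         # cases arising during the period

def prevalence(c: Cohort) -> float:
    # Prevalence: proportion of the cohort affected at the end of the period.
    return (c.existing_cases + c.new_cases) / c.population

def incidence(c: Cohort) -> float:
    # Incidence: proportion of the at-risk (initially unaffected) population
    # newly affected during the observation period.
    at_risk = c.population - c.existing_cases
    return c.new_cases / at_risk if at_risk else 0.0

# Hypothetical cohorts, for comparing risk-factor behavior across groups.
cohorts = [
    Cohort("age<30", population=1000, existing_cases=40, new_cases=12),
    Cohort("age>=30", population=800, existing_cases=64, new_cases=20),
]
for c in cohorts:
    print(f"{c.name}: prevalence={prevalence(c):.4f} incidence={incidence(c):.4f}")
```

Nothing in this sketch depends on machine learning; it simply makes concrete the statistical character of the rates being compared across cohorts.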
On pages 14-15, Applicant argues that “the results are rendered through a graphical dashboard that enables clinicians to interpret the outputs and, critically, supports the design of prevention and intervention strategies,” which “is not a generic display of numbers but instead is a particular improvement in the way clinicians’ access, interact, and interpret complex risk data, directly analogous to the graphical user interface improvement upheld in Core Wireless v. LG.” Applicant’s arguments are fully considered, but are not persuasive.

Examiner respectfully disagrees that the instant claims are analogous to those found eligible in Core Wireless. The claims in Core Wireless recited a specific improvement over prior systems; specifically, Core Wireless identified an improvement to an interface to address the problems of existing interfaces of devices with small screens, as described in the specification of the patent in Core Wireless. In contrast, the specification in the instant application does not describe deficiencies of prior art system interfaces, instead merely describing a method for displaying information related to maternal health risks via generic visualization techniques like bar charts, pie charts, scatter plots, etc. (see Figs. 6-16). The improvement in Core Wireless was not merely the display of specific data, but the overall improvement to small-screen user interfaces that the combination of elements (e.g., specifying a particular manner in which a summary window must be accessed, requiring the application summary window to list a limited set of data, and requiring that the device applications exist in a particular state) provided. Applicant has not supplied evidence of such a technical, interface-driven improvement being provided by the instant claims, and merely asserts that the display of data allows a clinician to interpret the data in an improved way.
Examiner maintains that the display of maternal risk scores at a dashboard so that a clinician may understand an analysis and make decisions about patient care does not amount to a technical improvement to an interface, and instead merely digitizes the otherwise-abstract function of visualizing clinical analytics data.

On pages 15-16, Applicant argues that the combination of a pre-trained machine learning model with learned parameters, an exploratory data analysis (EDA) module, a graphical dashboard rendering synthesized EDA results, and integration with clinical strategy amounts to significantly more than the abstract idea. Applicant specifically alleges similarities between the EDA module and a specific, rule-based processing improvement as in McRO, and between the dashboard and an improvement to human-computer interaction as in Core Wireless. Applicant also asserts that use of the EDA outputs to support clinical design of prevention and intervention strategies establishes a practical application beyond data analysis, and concludes that “the claim improves the way computers process and structure data.” Applicant’s arguments are fully considered, but are not persuasive.

The characterization of the dashboard as providing an improvement to interface technology has been addressed above. Additionally, Examiner respectfully disagrees that the EDA module is analogous to the improvement found in McRO. In McRO, the claimed invention recited a very specific set of rules that allowed a computer to perform animation in a manner that was previously only performable by human animators. The very fact that the animation could not previously be performed by computers, and that the rules applied by the claimed invention solved this problem, was the reason the claimed invention in McRO was found to be not directed to an abstract idea by improving an existing technological process.
Here, there is no evidence of record establishing that the claimed invention was previously performable only by humans in the manner of McRO, nor are there any specific steps recited for performing the exploratory analysis beyond using “statistical techniques.” The claimed invention thus does not provide an analogous technological improvement.

Examiner also notes that the underlying functions of the predictive model (i.e., generating predicted risk scores based on patient data), the EDA module (i.e., performing exploratory statistical analysis across patient cohorts), and the dashboard (i.e., visualizing clinical analytics data) are part of the abstract idea itself, because they represent mathematical concepts and/or steps that a human actor managing their personal behavior and/or interactions with others could achieve, as explained elsewhere. Because these functions are part of the abstract idea itself, they do not provide “significantly more” than the abstract idea and thus do not confer eligibility (see MPEP 2106.05(a): “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” See also 2106.05(a)(II): “it is important to keep in mind that an improvement in the abstract idea itself… is not an improvement in technology.”). Similarly, the intended use of the EDA outputs to support a clinician’s decision-making process about prevention and intervention strategies for a patient does not provide “significantly more” than the abstract idea, because it is again part of the abstract idea itself; performing clinical analytics and using them as a basis for clinical decision-making is a bedrock of clinician workflows when evaluating and treating patients. Applicant has not provided evidence that the specific combination of additional hardware and software elements (i.e.
the machine learning / neural network model, EDA module, and dashboard interface) is an unconventional combination. On the contrary, pages 9-10 of Applicant’s specification appear to describe the various modules and algorithms of the invention as being embodied via various known computing devices with built-in input and output devices (e.g., smart phones, iPads, desktop/personal computers, etc.), leaving one of ordinary skill in the art to understand that any generic computer processor elements capable of executing software code may be used. Examiner further notes that various machine learning techniques, including neural networks, are exemplarily described on pages 6, 8, 17, & 22-25 of Applicant’s specification as known alternative choices of algorithms for the analysis of data, leaving one of ordinary skill in the art to understand that many types of known machine learning models (including various known types of neural networks) may be utilized to implement the invention. Further, it is well-understood, routine, and conventional to utilize neural network models for the purpose of clinical prediction (including pregnancy-related event prediction), as evidenced by at least Col. 2 L1-48 & Col. 24 L6-23 of Lapointe et al. (US 6556977 B1); [0020] & [0070] of Hamilton et al. (US 20030187364 A1); and [0096] of Roberts et al. (US 20190133536 A1).

On pages 17-18, Applicant argues that the closed-loop and self-learning nature of the model updating process recited in amended claim 14 “goes beyond mental processes by requiring specific computer architecture to function” and provides application “in a defined technical field, namely maternal and fetal health monitoring, using structured input signals such as blood pressure and fetal heart rate, rather than generic data analysis.” Applicant’s arguments are fully considered, but are not persuasive.
Examiner respectfully disagrees that maternal and fetal health risk scoring is a technical field, and submits that the claims rather apply known machine learning / neural network computing architecture to the otherwise-abstract business practice of clinical analytics using commonly-measured patient data like blood pressure and heart rate. The closed-loop and self-learning nature of the model does not improve upon the underlying architecture of machine learning models or their training methods, and rather describes how machine learning is known to work, i.e., by continuously learning and updating its parameters based on new data (see at least [0017]-[0018] of Chowdhry et al. (US 20210098133 A1); [0021] of Pengetnze et al. (US 20190122770 A1); [0064] of Amarasingham et al. (US 20150213225 A1); [0038], [0046], & [0050] of Yom-Tov et al. (US 20140377727 A1); and Col. 6 L32-50 & Col. 24 L6-24 of Lapointe et al. (US 6556977 B1)). Accordingly, these aspects of claim 14 do not amount to a technical improvement to a technical field, and do not provide integration into a practical application.

On pages 18-19, Applicant argues that the closed-loop and self-learning nature of the model updating process recited in amended claim 14 constitutes an unconventional improvement over conventional systems “that rely on static models or retrospective batch training” and thus amounts to an inventive concept. Applicant’s arguments are fully considered, but are not persuasive. As explained above, the closed-loop and self-learning nature of the machine learning model does not constitute a technical improvement or inventive concept, instead describing how common machine learning deployments operate to incorporate new data, as evidenced by at least [0017]-[0018] of Chowdhry et al. (US 20210098133 A1); [0021] of Pengetnze et al. (US 20190122770 A1); [0064] of Amarasingham et al. (US 20150213225 A1); [0038], [0046], & [0050] of Yom-Tov et al.
(US 20140377727 A1); and Col. 6 L32-50 & Col. 24 L6-24 of Lapointe et al. (US 6556977 B1). Further, Applicant has not highlighted the use of static or batch-learned models as a technical problem in the specification, and instead appears to describe the application of known AI, mathematical, and statistical analyses to the specific field of maternal/fetal health risk prediction to improve this business practice, rather than the remediation of technical deficiencies or problems with the underlying computing and machine learning technology (see pages 1-2, 9-10, & 28 of Applicant’s specification). For the reasons outlined above, the 35 USC 101 rejections are upheld for claims 1-20.

Rejection Under 35 USC 103

On pages 19-20, Applicant argues that Amarasingham “does not describe cohort-based incidence and prevalence computations, a dedicated exploratory data analysis module, or graphical dashboards for clinician-facing support.” Applicant’s arguments are fully considered, but are not persuasive. Examiner concedes that Amarasingham does not explicitly disclose the newly-introduced feature of prevalence computations, but submits that it does teach cohort-based incidence computations, a dedicated exploratory data analysis module, and graphical dashboards for clinician-facing support (see [0066]-[0067], noting clinician-facing dashboards with population-level statistical analytics and comparison of incidence rates of predicted clinical events based on various types of exploratory analyses, e.g., as described in [0048]).

On page 20, Applicant alleges various deficiencies of Tupin, Roberts, and Moreira with respect to the newly-added exploratory data analysis and dashboard visualization limitations. Applicant’s arguments are fully considered, but are moot because these references are not relied upon to teach these aspects of the claims.
On page 20, Applicant argues that “Achieving the claimed system would require a fundamental redesign of the prior art’s purpose” and that “the cited art provides no teaching, suggestion, or motivation to perform EDA-based visualization of incidence and prevalence across clinical cohorts,” thereby rendering the combination non-obvious. Applicant’s arguments are fully considered, but are not persuasive.

Examiner first notes that the holding in W.L. Gore v. Garlock related to whether it is considered “public use” for a third party to commercially use a process in secret under pre-AIA 35 USC 102(b), and makes no mention of determining a fundamental redesign of the prior art’s purpose. Further, Examiner submits that modifying the clinical analytics visualization dashboards of Amarasingham (which include population-based analytics as well as incidence of predicted events, as explained above) to also include analytics related to prevalence across clinical cohorts, as in the newly-cited Edwards reference, would not render the prior art unsatisfactory for its intended purpose nor change the principle of operation of the Amarasingham reference. Merely using different computational techniques to perform and display additional types of clinical analytics would not render the invention inoperable, require any kind of substantial physical redesign or reconstruction, or change the principle of operation of the clinical analytics system of Amarasingham. The computer infrastructure of Amarasingham would be capable of undertaking and visualizing this type of analysis, and a mere reprogramming or expansion of computing functions does not support a finding of a change in the principle of operation because the underlying computerized clinical analytics and dashboard visualization operations are maintained.
This case is not analogous to the example cited in MPEP 2143.01(VI) because the physical elements of that proposed combination were not compatible, and combining a rigid sealing member with a resilient sealing member would have required a substantial physical reconstruction and modification of basic operational principles, whereas no such redesign or change in operational principle would be required to expand the computational capabilities of a clinical analytics method as in the instant case. Finally, a teaching, suggestion, and/or motivation for combining the Amarasingham and newly-cited Edwards references has been explained in the updated 35 USC 103 rejections below.

On pages 20-21, Applicant argues that the models of Amarasingham are updated with feedback in an “automated, retrospective” manner “without clinician involvement,” in contrast to the instant claims’ reliance on clinician-provided feedback to continuously update the self-learning model. Applicant concludes that “Amarasingham’s retrospective batch tuning process cannot reasonably be equated with the claimed feedback layer that supports real-time, continuous, closed-loop learning” and “modifying Amarasingham to incorporate such a structural, clinician-driven feedback layer would require a fundamental redesign of its principle of operation.” Applicant’s arguments are fully considered, but are not persuasive.

Para. [0064] of Amarasingham describes the model self-learning tuning process as occurring “periodically,” which Examiner maintains encompasses ongoing/continual learning as time progresses. Though the tuning is disclosed as being performed without direct human supervision, [0064] states that the self-learning process relies on comparing “the actual observed outcome of the event to the predicted outcome” so that the model can correct for any inaccuracies, while [0079]-[0080] describe how a clinician user may directly input “observations and comments about the patient” into the system, e.g., to “confirm, deny, or express uncertainty about a patient’s disease or adverse event identification” or otherwise dispute/confirm the system outputs. Though the reference does not explicitly disclose that such inputs are a source of the “actual observed outcomes” of [0064] (and does not specify the source in that paragraph), one of ordinary skill in the art would readily understand that these inputs represent “actual observed outcomes” and other clinician feedback related to the accuracy of the system predictions, and that the “actual observed outcomes” of [0064] necessarily originate from somewhere, leading them to understand that the actual observed outcomes could reasonably originate from the clinician-provided feedback inputs of [0079]-[0080].

Further, Examiner respectfully disagrees that any “fundamental redesign” of the system would be required to achieve such functionality; the system clearly already allows a clinician user to input feedback related to actual observed outcomes of a patient and allows for ongoing retraining of the analytics models with actual observed outcomes to improve the accuracy of the models in a self-learning manner.

On page 21, Applicant argues that the combination of Amarasingham and Tupin is erroneous because “nowhere does Tupin disclose or suggest a machine learning-based predictive model, much less one incorporating a feedback architecture” and “Amarasingham… contains no disclosure or suggestion of application to maternal or fetal health monitoring,” such that they “address different problems in different fields, and there is no teaching, suggestion, or motivation to combine them.” Applicant’s arguments are fully considered, but are not persuasive. Though neither of these references fully teaches or suggests the amended claims in isolation, Examiner maintains that when considered in combination they do sufficiently teach or suggest the subject matter at issue in the argument.
Both references are in the field of clinical data collection and analysis and are utilized for the purpose of guiding patient care interventions. Amarasingham discloses condition-specific predictive machine learning models with self-learning capabilities as explained above, and further contemplates evaluation of a wide array of real-time and historical clinical and non-clinical data (see [0030]-[0036]). Specific and non-limiting examples of predicted conditions and adverse events of interest are disclosed in [0061] & [0078], with [0061] clearly indicating that “others” are also contemplated. Tupin teaches that there is a great need for pregnancy-related monitoring and risk quantification of both mother and baby so that appropriate pre- and post-natal care may be tailored to the pregnancy (see [0007]) and discloses algorithmic methods of calculating risks for pregnancy-related outcomes based on monitored data (Tupin [0021], [0086], [0114]-[0119]). Examiner thus maintains that one of ordinary skill in the art considering both of these references would have found it obvious to apply the risk prediction modeling methods of Amarasingham to the specific field of pregnancy-related outcomes because Tupin shows that there is a great need for this type of risk quantification in the obstetrics field so that appropriate pre- and post-natal care may be tailored to the pregnancy. The result of such a combination would include the development of condition-specific predictive models (as in Amarasingham) that can analyze clinical data inputs (as in Amarasingham and Tupin) that are pregnancy-related (as in Tupin) for the specific purpose of outputting pregnancy-related risk scores for both maternal risk factors and fetal risk factors. 
On pages 21-22, Applicant argues that “Amarasingham’s retrospective EHR framework is structurally and operationally incompatible with Tupin’s real-time monitoring device” and combining the two references would “require significant redesign of both systems.” Applicant further alleges that the combination relies on impermissible hindsight reasoning. Applicant’s arguments are fully considered, but are not persuasive.

In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). In the instant case, Examiner maintains that the combination of references is proper, as explained above.

Further, Examiner respectfully disagrees that the combination would require “significant redesign” of either system. Amarasingham teaches collection and analysis of real-time data streams, e.g., continuously collected vital sign data, data from a variety of sensors, etc., as in [0029], [0045], & [0048]. The system of Amarasingham could therefore easily incorporate collection and analysis of the specific maternal and fetal vital sign data disclosed by Tupin without any kind of redesign or structural modifications.
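The closed-loop retraining behavior discussed in the responses above, in which a model's predicted outcomes are periodically compared against actual observed outcomes (potentially including clinician-confirmed outcomes) and the parameters corrected, is ordinary online learning. The following is a neutral illustration with hypothetical names (`RiskModel`, `update`) and invented data, drawn from neither the claims nor any cited reference; it shows only the generic pattern of a gradient step toward each observed outcome:

```python
import math

class RiskModel:
    """Single-feature logistic risk model updated in a closed loop."""

    def __init__(self, weight: float = 0.0, bias: float = 0.0, lr: float = 0.1):
        self.weight, self.bias, self.lr = weight, bias, lr

    def predict(self, x: float) -> float:
        # Risk score in (0, 1) via the logistic function.
        return 1.0 / (1.0 + math.exp(-(self.weight * x + self.bias)))

    def update(self, x: float, observed: int) -> None:
        # Gradient step on log-loss: nudge parameters toward the actual
        # observed outcome (1 = event occurred, 0 = it did not).
        error = self.predict(x) - observed
        self.weight -= self.lr * error * x
        self.bias -= self.lr * error

model = RiskModel()
# Each (feature, outcome) pair stands in for feedback about a prediction.
for x, outcome in [(1.2, 1), (0.3, 0), (1.5, 1), (0.1, 0)]:
    model.update(x, outcome)
```

After the loop, the model assigns a higher risk score to the feature values that co-occurred with observed events, which is the sense in which "comparing observed to predicted outcomes and correcting" describes routine model updating rather than a distinct architecture.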
Claim Interpretation

Claim 1 positively recites the following limitations relating to the machine learning model:

a processor executing a pre-trained machine learning model stored in a computer-readable memory; wherein the system is configured to: receive by the pre-trained machine learning model, a new patient record data associated with a patient, wherein the new patient record data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; apply learned parameters of the pre-trained machine learning model, to the new patient record data; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk scores each represent a probability of a maternity-related healthcare event of the patient and the fetal risk scores each represent a probability of a fetus-related healthcare event of the fetus of the patient;

The claim further describes how the model has been pre-trained prior to its execution by the processor by utilizing a “wherein” clause:

wherein the pre-trained machine learning model is obtained by: acquiring a plurality of patient record data from a database, wherein the patient record data comprises a text data and an image data; identifying a data type of the patient record data; segregating, the patient record data into a structured data and an unstructured data; pre-processing the structured data and the unstructured data and generate preprocessed data; defining an architecture of a machine learning model, wherein the machine learning model comprises a neural network comprising a hidden layer and a feed-back layer; training the machine learning model with the preprocessed data to obtain the pre-trained machine learning model.

“Wherein” clauses and their equivalents are discussed in MPEP 2111.04: “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) ‘adapted to’ or ‘adapted for’ clauses; (B) ‘wherein’ clauses; and (C) ‘whereby’ clauses.”

In the instant case, the “wherein” clause indicated above, which describes how the model is pre-trained prior to its execution and use by the positively recited limitations, does not limit the claim to a particular structure beyond specifying that the machine learning model comprises a neural network comprising a hidden layer and a feed-back layer, nor does it indicate that such training steps are positively performed within the scope of the claim. Accordingly, the broadest reasonable interpretation of the scope of the claim will be considered to include executing a pre-trained machine learning model comprising a neural network with a hidden layer and a feedback layer that has been trained in any manner to perform the positively recited functions of (1) receiving a new patient record comprising maternal blood pressure and fetal heart rate, (2) applying learned parameters of the model to the new patient record, and (3) predicting at least one maternal risk score representing a probability of a maternity-related healthcare event and/or fetal risk score representing a probability of a fetus-related healthcare event.
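The training sequence recited in the "wherein" clause (acquire records, identify data types, segregate structured from unstructured data, preprocess, define a network with hidden and feedback layers, then train) can be outlined step by step. The sketch below uses hypothetical names (`is_structured`, `preprocess`, `train_pipeline`) and stand-in logic purely to illustrate the claimed sequence; it is not the applicant's implementation, and the "training" step is a placeholder:

```python
from typing import Any

def is_structured(record: dict) -> bool:
    # Hypothetical type check: treat purely numeric fields as structured;
    # free text (and, by extension, images) as unstructured.
    return all(isinstance(v, (int, float)) for v in record.values())

def preprocess(records: list[dict]) -> list[list[float]]:
    # Stand-in preprocessing: coerce every field to a float feature
    # (unstructured text is crudely featurized by its length).
    return [[float(v) if isinstance(v, (int, float)) else float(len(str(v)))
             for v in r.values()] for r in records]

def train_pipeline(patient_records: list[dict]) -> dict[str, Any]:
    # (1) acquire records, (2) identify data type, (3) segregate
    structured = [r for r in patient_records if is_structured(r)]
    unstructured = [r for r in patient_records if not is_structured(r)]
    # (4) preprocess both streams into one feature matrix
    features = preprocess(structured) + preprocess(unstructured)
    # (5) define architecture: a network with a hidden layer and a
    # feedback (recurrent) layer, represented here only as a config
    architecture = {"hidden_units": 16, "feedback_layer": True}
    # (6) "train": placeholder zero-initialized parameters
    params = {"weights": [[0.0] * len(features[0])] if features else []}
    return {"architecture": architecture, "learned_parameters": params}

model = train_pipeline([
    {"blood_pressure": 120, "heart_rate": 140},   # structured record
    {"clinical_note": "normal fetal movement"},   # unstructured record
])
```

As the interpretation above notes, these steps describe a generic model-fitting pipeline; the sketch makes visible that nothing in the recited sequence constrains the training beyond the stated network topology.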
Claims 9 and 15 recite substantially similar “wherein” clauses to claim 1 that likewise do not positively limit the structure or scope of their respective parent claims; thus the corresponding “wherein” clauses relating to the previously-performed training steps are not considered patentably limiting beyond specifying that the model is a neural network comprising a hidden layer and a feed-back layer.

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an exploratory data analysis module” that performs an exploratory data analysis and generates synthesized results for showing variations in behavior of at least one health risk factor in claims 8 and 17; and “an input module” that receives a patient record data associated with a patient in claim 14.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 8-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 8 and 17 recite “an exploratory data analysis module,” while claim 14 recites “an input module.” As explained in the claim interpretation section above, the term “module” serves as a generic placeholder that does not denote any particular structure that would allow each element to perform its claimed functions, and thus these claims invoke 35 U.S.C. 112(f). Thus, Examiner turns to the specification to disclose the specific structures of the various “modules” that allow them to perform their respective functions.
However, nowhere does the specification describe the particular structure of the “exploratory data analysis module” or the “input module.” At most, Pgs 9-10 of the specification describe how the invention “has various components with a suite of Artificial Intelligence algorithms developed using software such as Python and R” and “utilizes a software ecosystem/platform with robust computation infrastructure encompassing components for… using diverse technologies for collection, processing, storage and distribution of data such as Smart Phones, iPads, Desktop/Personal Computers, Stand-alone/On-Premise/Cloud Servers etc.” These high-level descriptions of computing infrastructures do not provide any limitations or specificity regarding the actual structure of the exploratory data analysis module or the input module, and Examiner notes that the terms “exploratory data analysis module” and “input module” are not present anywhere in the specification. Applicant has thus not provided an adequate written description of either of these modules showing that their physical structures would support their claimed functions. Therefore, the claims do not comply with the written description requirement and are rejected under 35 U.S.C. 112(a). Note that dependent claims 9-13 and 15-20 are also rejected on this basis because they inherit the language of claims 8 and 14, respectively, without resolving the written description issues set forth above.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8-20 are rejected under 35 U.S.C.
112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The limitations “exploratory data analysis module” and “input module” in claims 8, 14, and 17 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. At most, Pgs 9-10 of the specification describe how the invention “has various components with a suite of Artificial Intelligence algorithms developed using software such as Python and R” and “utilizes a software ecosystem/platform with robust computation infrastructure encompassing components for… using diverse technologies for collection, processing, storage and distribution of data such as Smart Phones, iPads, Desktop/Personal Computers, Stand-alone/On-Premise/Cloud Servers etc.” These high-level descriptions of computing infrastructures do not provide any limitations or specificity regarding the actual intended physical structure of the exploratory data analysis module or the input module, and Examiner notes that the terms “exploratory data analysis module” and “input module” are not present anywhere in the specification. Therefore, each claim is indefinite, and is rejected under 35 U.S.C. 112(b), because it is unclear what actual physical structure is intended to perform the recited functions. For purposes of examination, the exploratory data analysis module will be interpreted as a software module or algorithm executing on a processing device, and the input module will be interpreted as any means of providing data to a computer.
Note that dependent claims 9-13 and 15-20 are also rejected on this basis because they inherit the language of claims 8 and 14, respectively, without resolving the indefiniteness issues set forth above.

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 1-7 and 14-20 are directed to systems (i.e. machines) and claims 8-13 are directed to a method (i.e. a process). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claims 1, 8, and 14 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people. Specifically, claim 1 (as representative) recites:

A system comprising: a processor executing a pre-trained machine learning model stored in a computer-readable memory; wherein the pre-trained machine learning model is obtained by: acquiring a plurality of patient record data from a database, wherein the patient record data comprises a text data and an image data; identifying a data type of the patient record data; segregating, the patient record data into a structured data and an unstructured data; pre-processing the structured data and the unstructured data and generate preprocessed data; defining an architecture of a machine learning model, wherein the machine learning model comprises a neural network comprising a hidden layer and a feed-back layer; training the machine learning model with the preprocessed data to obtain the pre-trained machine learning model; wherein the system is configured to: receive by the pre-trained machine learning model, a new patient record data associated with a patient, wherein the new patient record
data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; apply learned parameters of the pre-trained machine learning model, to the new patient record data; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk scores each represent a probability of a maternity-related healthcare event of the patient and the fetal risk scores each represent a probability of a fetus-related healthcare event of the fetus of the patient; perform an exploratory data analysis by an exploratory data analysis module executed by the processor, a statistical exploration of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors to compute incidence and prevalence values for one or more patient cohorts, wherein the exploratory data analysis module is configured to generate synthesized results for showing variations in a behavior of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors across the patient cohorts; and render by a graphical dashboard, the synthesized results of the exploratory data analysis as graphical presentations enabling a clinician for interpretation and assessment of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors; and wherein the exploratory data analysis is configured to 
support the clinician in designing one or more of a prevention strategy and an intervention strategy. But for the recitation of generic computer components like a processor, memory, machine learning, and a graphical dashboard, the italicized functions, when considered as a whole, describe a clinical data analysis and predictive risk scoring operation that could be achieved via mathematical concepts and/or by a human actor such as a clinician or other medical professional managing their personal behavior and/or interactions with others. For example, a clinician could execute a predictive model previously trained/fitted in some manner by receiving new patient record data including measurements of maternal blood pressure and fetal heart rate (e.g. by pulling data from a patient’s chart, speaking with a patient, observing readouts of measurement devices, etc.) and applying learned numerical parameters of the predictive model to the new patient record data to predict risk scores indicative of the probability of one or more maternity-related and/or fetus-related healthcare events. The clinician could then perform an exploratory data analysis by using statistical calculations to compute incidence and prevalence values for patient cohorts and compare the cohorts to observe variations in results across the cohorts, e.g. by visually generating graphical presentations in a report allowing them and/or their colleagues to assess health risk factors and make clinical intervention strategy determinations. Accordingly, claim 1 recites an abstract idea in the form of mathematical concepts and a certain method of organizing human activity. Claim 8 recites substantially similar subject matter as claim 1 and is also found to recite an abstract idea under the same analysis. 
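As a rough, purely illustrative sketch of the "statistical exploration" just described (computing incidence and prevalence values across patient cohorts), the calculation a clinician could perform by hand might look like the following in Python. The cohort labels and record fields are assumptions, not Applicant's data model:

```python
# Hypothetical sketch: incidence and prevalence of a condition per cohort.
# "has_condition" marks a case existing at the start of the period
# (prevalence numerator); "new_case" marks a case arising during the
# period among the initially at-risk (incidence numerator).
from collections import defaultdict

def cohort_stats(patients):
    counts = defaultdict(lambda: {"n": 0, "prevalent": 0, "new": 0, "at_risk": 0})
    for p in patients:
        c = counts[p["cohort"]]
        c["n"] += 1
        if p["has_condition"]:
            c["prevalent"] += 1
        else:
            c["at_risk"] += 1
            if p["new_case"]:
                c["new"] += 1
    return {cohort: {
                "prevalence": c["prevalent"] / c["n"],                       # existing cases / population
                "incidence": c["new"] / c["at_risk"] if c["at_risk"] else 0.0,  # new cases / at-risk
            } for cohort, c in counts.items()}

patients = [
    {"cohort": "A", "has_condition": True,  "new_case": False},
    {"cohort": "A", "has_condition": False, "new_case": True},
    {"cohort": "A", "has_condition": False, "new_case": False},
    {"cohort": "B", "has_condition": False, "new_case": False},
]
stats = cohort_stats(patients)  # per-cohort values a dashboard could chart
```

The resulting per-cohort values are exactly the kind of synthesized results that could be tabulated or charted in a report for cross-cohort comparison.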
Claim 14 similarly recites: A system comprising: a processor executing a pre-trained machine learning model stored in a computer-readable memory, wherein the system is configured to: receive via an input module, a patient record data associated with a patient, wherein the patient record data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; apply learned parameters of the pre-trained machine learning model, to the patient record data; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk score each represent a probability of a maternity-related healthcare event of the patient and the fetal risk score each represent a probability of a fetus-related healthcare event of the fetus of the patient; compute by a statistical technique, an overall risk score from at least one selected from the maternal risk score and the fetal risk score, wherein the overall risk score is configured as a maternal and infant health insights and cognitive intelligence (MIHIC) score; receive via an interactive dashboard, a feed-back from a clinician relating to an observed health event of the patient or the fetus of the patient; input the pre-trained machine learning model with the feed-back; and update a database with the patient record data; and wherein the pre-trained machine learning model is a self-learning model comprising a feed-back layer that enables the pre-trained machine learning model to learn continuously from the patient record data and the feed-back from the clinician to 
continually improve the prediction of at least one selected from the maternal risk score of the one or more of the first plurality of risk factors and the fetal risk score of the one or more of the second plurality of health risk factors. But for the recitation of generic computer components like a processor, memory, machine learning, and an interactive dashboard, the italicized functions, when considered as a whole, describe a clinical data analysis and predictive risk scoring operation that could be achieved via mathematical concepts and/or by a human actor such as a clinician or other medical professional managing their personal behavior and/or interactions with others. For example, a clinician could execute a predictive model previously trained/fitted in some manner by receiving new patient record data including measurements of maternal blood pressure and fetal heart rate (e.g. by pulling data from a patient’s chart, speaking with a patient, observing readouts of measurement devices, etc.), applying learned numerical parameters of the predictive model to the new patient record data to predict risk scores indicative of the probability of one or more maternity-related and/or fetus-related healthcare events, and computing an overall risk score using a statistical technique. The clinician could then observe actual outcomes associated with the patient, update the patient’s records in a database (e.g. in their chart or a filing cabinet), and update the model’s parameters to incorporate the known outcome so that future risk score predictions made by the model are improved. Accordingly, claim 14 recites an abstract idea in the form of mathematical concepts and a certain method of organizing human activity. Dependent claims 2-7, 9-13, and 15-20 inherit the limitations that recite an abstract idea from their dependence on claims 1, 8, and 14, respectively, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis.
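As an illustration only (the class name, update rule, and feature values are assumptions and do not represent the claimed method), the clinician feed-back loop described for claim 14, in which an observed outcome is fed back to adjust the model's learned parameters and the database is updated, can be sketched as:

```python
# Hypothetical sketch of a clinician feed-back loop for a risk model:
# predict a risk score, record the patient data, then nudge the model's
# parameters toward a clinician-confirmed outcome.
import math

class RiskModel:
    def __init__(self, weights):
        self.weights = list(weights)  # the "learned parameters" of the model

    def predict(self, features):
        z = sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))  # risk score as a probability

    def incorporate_feedback(self, features, observed_event, lr=0.1):
        # Clinician confirms or denies the event; shift parameters so the
        # next prediction moves toward the observed outcome.
        err = self.predict(features) - (1.0 if observed_event else 0.0)
        self.weights = [w - lr * err * x for w, x in zip(self.weights, features)]

database = []  # stand-in for the patient record database

def process_visit(model, record):
    score = model.predict(record["features"])
    database.append(record)  # "update a database with the patient record data"
    return score

model = RiskModel([0.2, -0.1])
record = {"features": [1.4, 0.9]}
before = process_visit(model, record)
model.incorporate_feedback(record["features"], observed_event=True)
after = model.predict(record["features"])  # moves upward after a confirmed event
```

The sketch is deliberately minimal; its point is that the feedback-and-update cycle is a generic parameter-adjustment loop of the sort the Office Action characterizes as automatable.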
In addition, claims 2-7, 9-13, 15-16, and 18-20 recite additional limitations that further describe the abstract idea identified in the independent claims. Specifically, claims 2-3, 9, and 16 describe more specific types of patient record data utilized, each of which is a type of data that a clinician or other medical professional would be capable of accessing and analyzing via mathematical techniques to make clinical decisions. Claim 4 specifies that the model enables exploration and correlation of the patient and fetal data associated with the risk factors, which is a capability / intended result that would be reflected in a predictive model generated and fitted by a clinician or other human actor. Claims 5-6, 10, 12-13, and 20 describe calculation of additional risk scores of various types, each of which is a type of risk that a clinician would be capable of making clinical predictions about via mathematical calculations. Claim 7 further describes the types of visualizations, each of which is a type of chart that a clinician would be capable of generating as part of a report. Claims 9 and 15 recite “wherein” clauses describing how the pre-trained machine learning model was previously obtained, such that they do not positively limit the scope of their respective parent claims as explained above. Examiner further notes that a clinician would be capable of fitting a predictive model by acquiring a plurality of text- and image-based patient records, identifying record types and segregating the data into structured and unstructured data, pre-processing/formatting the data (e.g. by removing outliers, converting units, editing notes to use standardized terminologies, etc.), defining the intended architecture of the predictive model, and fitting the model with the pre-processed data to learn optimized parameters for the model.
Claim 11 recites various steps relating to receiving and incorporating feedback from a clinician into the model to improve its performance, which may be achieved by a clinician as explained above for similar limitations in claim 14. Claims 17-19 describe the performance of a statistical exploration of the health risk factors and generation of synthesized results with accompanying graphical presentations for enabling clinical decision-making, which can be achieved by a clinician performing mathematical calculations and generating a visual report as explained for similar limitations in claim 1 above. However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea.

Step 2A – Prong 2

The judicial exception is not integrated into a practical application. In particular, independent claims 1, 8, and 14 do not include additional elements that integrate the abstract idea into a practical application. The additional elements of claim 1 include a processor; a computer-readable memory; specifying that the pre-trained model is a machine learning model comprising a neural network comprising a hidden layer and a feed-back layer; an exploratory data analysis module; and a graphical dashboard. The additional elements of claim 8 include specifying that the pre-trained model is a machine learning model; an exploratory data analysis module; and a graphical dashboard. The additional elements of claim 14 include a processor; a computer-readable memory; specifying that the pre-trained model is a machine learning model that is self-learning and comprises a feed-back layer; an input module; and an interactive dashboard.
These additional elements, when considered in the context of each claim as a whole, merely serve to automate mathematical risk calculation operations that could occur by and among human actors (as described above), and thus amount to instructions to “apply” the abstract idea using generic computer components (see MPEP 2106.05(f)). For example, a clinician can obtain patient record data by accessing databases and interacting with others (e.g. patients and/or colleagues), process and analyze the data with a pre-trained predictive model to calculate various risk scores for a patient and/or their fetus, perform statistical analyses to visually compare patient cohorts, and provide feedback to update the predictive model. The use of a processor, memory, and exploratory data analysis module, as well as the specification that the predictive model is a high-level machine learning model such as a neural network with self-learning capabilities and various layers, merely digitizes and/or automates these otherwise-abstract operations such that they occur in a computerized environment (i.e. merely using computers as tools with which to implement the abstract idea). Similarly, the use of an input module and dashboard to obtain information and visually present analysis results amounts to the digitization of otherwise-abstract data inputting and outputting functions of a mathematical analysis such that they occur in an electronic environment. In other words, the claims appear to utilize high-level computing and machine learning components as tools with which to digitize and/or automate the otherwise-abstract business practice of making clinical predictions and performing clinical analyses that enable clinical decision-making, rather than providing technical improvements to any underlying technical field like the operation of a computer or the specific architecture or training methods of machine learning.
Accordingly, these high-level additional elements amount to mere instructions to apply the abstract idea on a computer, and claims 1, 8, and 14 as a whole are each directed to an abstract idea without integration into a practical application. The judicial exception recited in dependent claims 2-7, 9-13, and 15-20 is also not integrated into a practical application under a similar analysis as above. The functions of claims 2-6, 9-13, and 15-20 are performed with the same additional elements introduced in the independent claims, without introducing any new additional elements of their own, and accordingly also amount to mere instructions to apply the abstract idea on these same additional elements. Claim 7 introduces a display device for presenting visualizations, which again merely digitizes the output of mathematical clinical analyses that a clinician could otherwise achieve by managing their personal behavior to generate such charts in a report. Accordingly, the additional elements of claims 1-20 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-20 are directed to an abstract idea.

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a processor, memory, input module, exploratory data analysis module, machine learning model comprising a neural network with hidden and feed-back layers, and dashboards/display devices for performing the executing, receiving, applying, predicting, performing, computing, generating, rendering, inputting, updating, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components.
As evidence of the generic nature of the above-recited additional elements, Examiner notes Pgs 9-10 of Applicant’s specification, where the various modules and algorithms of the invention appear to be embodied as software executed with various known computing devices with built-in input and output devices (e.g. smart phones, iPads, desktop/personal computers, etc.), leaving one of ordinary skill in the art to understand that any generic computer processor elements capable of executing software code may be used. Examiner further notes that various machine learning techniques, including neural networks, are exemplarily described on Pgs 6, 8, 17, & 22-25 of Applicant’s specification as being known alternative choices of algorithms for analysis of data, leaving one of ordinary skill in the art to understand that many types of known machine learning models (including various known types of neural networks) may be utilized to implement the invention. Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the computer components and machine learning implementation in combination is to digitize and/or automate a clinical data analysis and predictive risk scoring operation that could otherwise be achieved via mathematical concepts and as a certain method of organizing human activity. Further, Examiner notes the combination of a computing device with a processor and memory for inputting data and executing software functions such as machine learning analysis of clinical data for the purpose of displaying analysis results at a dashboard interface is well-understood, routine, and conventional, as evidenced by at least Figs. 1-2 & 5 of Amarasingham et al. (US 20150213225 A1); abstract & Figs. 1 & 4 of Pengetnze et al. (US 20190122770 A1); Figs. 1A-B, [0031]-[0033], & [0096] of Roberts et al. (US 20190133536 A1); and abstract & Figs. 1-5 of Chowdhry et al.
(US 20210098133 A1). Thus, when considered as a whole and in combination, claims 1-20 are not patent eligible.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 14, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham et al. (US 20150213225 A1) in view of Tupin, JR. et al. (US 20130245436 A1).

Claim 14

Amarasingham teaches a system comprising: a processor executing a pre-trained machine learning model stored in a computer-readable memory (Amarasingham abstract, [0029], [0042], [0064], noting a computer system for evaluating clinical risk via trained machine learning predictive models, i.e. a processor-based computer system that executes models organized/stored in memory), wherein the system is configured to: receive via an input module, a patient record data associated with a patient, wherein the patient record data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data (Amarasingham [0048], noting patient data (i.e. a new patient record data associated with a particular patient) is continually received by the system; such data can include clinical data like patient blood pressure as in [0045] & [0062] as well as a variety of other types of patient-related data as in [0030]-[0032]); apply learned parameters of the pre-trained machine learning model, to the patient record data; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a (Amarasingham Fig. 5, [0056], [0061]-[0062], noting disease/risk logic module that includes a variety of health condition-specific risk models (e.g.
machine learning models as in [0058] & [0064]) to assess patterns in the input patient data to determine various risk scores for one or more specific health conditions/events for each patient); and receive via an interactive dashboard, a feed-back from a clinician relating to an observed health event of the patient (Amarasingham [0079], noting a clinician user may input observations and comments about the patient, such as confirmation or denial of an identified adverse event, i.e. feedback relating to an observed health event of the patient); input the pre-trained machine learning model with the feed-back (Amarasingham [0064], noting the predictive machine learning models can be retrained and updated over time based on actual observed outcomes of an event for a patient (e.g. as input by a clinician as in [0079]) compared to the model’s prediction); and update a database with the patient record data (Amarasingham [0048], noting new patient data is continuously received and processed by the system such that any new patient data would update the database); and wherein the pre-trained machine learning model is a self-learning model comprising a feed-back layer that enables the pre-trained machine learning model to learn continuously from the patient record data and the feed-back from the clinician to continually improve the prediction of at least one selected from the one or more of the second plurality of health risk factors (Amarasingham [0064], noting the predictive machine learning models are self-learning and continuously updated over time based on actual observed outcomes of an event (i.e. feedback) and new patient data).

In summary, Amarasingham teaches a system that ingests and preprocesses data from a variety of sources to generate predictive models and analyzes data from a new patient with the predictive models to determine condition- or event-specific risk scores for the patient.
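For illustration only — not part of the examiner's record — the self-learning loop mapped to Amarasingham [0064] and [0079] (a pre-trained model whose parameters are updated from clinician feedback on observed events) might be sketched as follows. Every function and parameter name here is invented for the sketch:

```python
import math

# Toy stand-in for a pre-trained risk model: a weighted sum squashed to (0, 1).
def predict_risk(weights, features):
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# One feedback update: nudge the learned parameters toward the clinician-confirmed
# outcome (1.0 = adverse event confirmed, 0.0 = denied), i.e. a self-learning step.
def update_from_feedback(weights, features, observed, lr=0.1):
    error = observed - predict_risk(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

weights = [0.2, -0.1, 0.4]   # "learned parameters" of the pre-trained model
record = [1.0, 0.5, 0.8]     # preprocessed features of one patient record
before = predict_risk(weights, record)
weights = update_from_feedback(weights, record, observed=1.0)  # clinician confirms event
after = predict_risk(weights, record)  # prediction moves toward the observed outcome
```

Repeating the update as each clinician observation arrives is the "continuous learning" behavior the claim recites, in miniature.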
Though Amarasingham does contemplate the use of condition-specific risk models (see [0061]-[0062], [0078], where specific examples of predictive model types for various healthcare conditions/events are listed, and it is noted that “other” types are also contemplated), it makes no specific mention of a patient being a maternal woman and the risk scores being maternal and fetal risk scores, nor of calculation of an overall risk score in the manner described. Accordingly, Amarasingham fails to explicitly disclose the following aspects of the instant claim: the received patient record comprising a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; predicting at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk scores each represent a probability of a maternity-related healthcare event of the patient and the fetal risk scores each represent a probability of a fetus-related healthcare event of the fetus of the patient; and computing by a statistical technique an overall risk score from at least one selected from the maternal risk score and the fetal risk score, wherein the overall risk score is configured as a maternal and infant health insights and cognitive intelligence (MIHIC) score. However, Tupin teaches an analogous patient monitoring and data analysis system that monitors clinical data (e.g. blood pressure) for a maternal woman as well as clinical data (e.g.
fetal heart rate) for a fetus of the maternal woman (Tupin [0021], noting fetal heart rate and maternal blood pressure as two parameters monitored by the system) and uses such data to calculate a number of individual fetal and/or maternal health indicators that may be integrated into an aggregate index quantifying when an adverse pregnancy outcome or complication is likely to be occurring (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. maternal and fetal risk scores for each of a plurality of risk factors) are developed and integrated into an aggregate NMI (i.e. overall risk score) personalized to a single pregnancy to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.; the use of software algorithms to integrate multiple individual NMIs into an aggregate NMI is considered equivalent to using a second model comprising a statistical technique to calculate an overall MIHIC risk score because software algorithms employ mathematical and/or statistical calculations to analyze data and provide outputs, and no specific statistical techniques or models are recited in the claim). 
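The mapping above treats "a statistical technique" broadly: integrating individual indicators into one aggregate index. As a purely hypothetical illustration (indicator names and weights invented, not drawn from Tupin), one simple such technique is a weighted mean of per-risk-factor scores:

```python
# Integrate individual maternal/fetal risk scores into one overall score
# using a weighted mean -- one example of a simple statistical technique.
def aggregate_score(indicators, weights):
    total = sum(weights[name] for name in indicators)
    return sum(indicators[name] * weights[name] for name in indicators) / total

individual = {
    "maternal_preeclampsia": 0.30,      # maternal risk score
    "maternal_gestational_dm": 0.10,    # maternal risk score
    "fetal_growth_restriction": 0.55,   # fetal risk score
}
weights = {"maternal_preeclampsia": 2.0,
           "maternal_gestational_dm": 1.0,
           "fetal_growth_restriction": 2.0}
overall = aggregate_score(individual, weights)  # 0.36 on these numbers
```

Any aggregation of this kind would fall within the claim's unrestricted recitation of "a statistical technique," which is the point the examiner's equivalence argument relies on.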
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generic condition-specific risk prediction system of Amarasingham to specifically apply to pregnancy-related maternal and fetal risks as well as to include calculation of an overall score quantifying pregnancy outcomes based on both maternal and fetal risks as in Tupin because Tupin shows that there is a great need for pregnancy-related monitoring and risk quantification of both mother and baby so that appropriate pre- and post-natal care may be tailored to the pregnancy (see Tupin [0007]) and providing a combined/aggregate fetal and maternal index helps provide useful information concerning both fetal and maternal health during pregnancy and delivery and enables analysis of deviations of the index from predicted or expected index ranges to provide early notice to the doctor and mother so that needed medical care may be provided to avoid complications (as suggested by Tupin [0114] & [0117]). The result of such a combination would include the use of condition-specific predictive models (as in Amarasingham) that can analyze maternal data like blood pressure and fetal data like heart rate (as in Tupin) and output condition-specific risk scores for both maternal risk factors and fetal risk factors, then aggregate the risk scores into an overall MIHIC risk score.

Claim 16

Amarasingham in view of Tupin teaches the system of claim 14, and the combination further teaches wherein patient record data comprises one or more of a demographic data, a medical data, a social data, a genomic data, an omics data, and a genetic data (Amarasingham [0030]-[0032], noting patient data can include demographic data, medical data, social data, and family history and genetic data (considered to include genomic, omics, and genetic data)).
Claim 20

Amarasingham in view of Tupin teaches the system of claim 14, and the combination further teaches wherein the system further predicts infant health risk factors (Tupin [0120], noting adaptation of the monitoring and predictive methods to infants or newborns); and wherein the maternal health risk factors comprises one or more of miscarriage, anemia, gestational diabetes, diabetes mellitus during pregnancy, gestational hypertension, preeclampsia, preterm labor, preterm birth, preterm premature rupture of membranes (PPROM), placental abruption, placenta previa, placenta accreta, placenta increta, caesarean delivery, sepsis, venous thromboembolic event (VTE), postpartum hemorrhage, postpartum depression, uterine rupture, intensive care unit (ICU) admission for mother, maternal death, multiple births; the fetal health risk factors comprises one or more of stillbirth, fetal growth restriction (IUGR), macrosomia, congenital anomaly, neural-tube defect (spina bifida), anencephaly, aneuploidy, drug-induced abnormality, intra-amniotic infection, chorioamnionitis, birth injury; and the infant health risk factors comprises one or more of low birth weight, excessive birth weight, neonatal anemia, neonatal hypoglycemia, intraventricular hemorrhage (IVH), respiratory distress syndrome (RDS), bronchopulmonary dysplasia (BPD), necrotizing enterocolitis (NEC), retinopathy of prematurity, neonatal blindness risk, neonatal sepsis, neonatal jaundice, neonatal death, newborn encephalopathy, hypoxic-ischemic encephalopathy (HIE), neurodevelopmental delay, cerebral palsy, neonatal intensive care unit (NICU) admission (Tupin [0114]-[0120], noting the predicted indices provide early indications of potential pregnancy problems or adverse events like premature birth, cardiac issues, SIDS, etc.).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham and Tupin, as applied to claim 14 above, and further in view of Lapointe et al. (US 6556977 B1).
Claim 15

Amarasingham in view of Tupin teaches the system of claim 14, and the combination further teaches wherein the pre-trained machine learning model is obtained by: acquiring a plurality of patient record data, wherein the patient record data comprises a text data and an image data (Amarasingham Fig. 5, [0029]-[0036], [0042], [0053], noting the system obtains clinical and non-clinical patient data in various electronic formats from multiple sources; [0030] specifically notes that acquired data includes text-based data like medical history, dictated clinical notes and records, etc., while Figs. 4 & 10 as well as [0030], [0045], [0087], & [0103] further show that patient image data and/or radiological imaging exams (i.e. image data) are included as acquired data types); identifying a data type of the patient record data; segregating the patient record data into a structured data and an unstructured data; pre-processing by the processor, the structured data and the unstructured data and generate preprocessed data (Amarasingham Fig. 6, [0053]-[0054], [0057]-[0058], noting data extraction and cleansing processes that preprocess the obtained data differently based on whether it is structured or unstructured, indicating that such data types are identified and segregated for pre-processing); defining an architecture of a machine learning model, (Amarasingham [0064], noting the predictive machine learning models are trained and retrained with the processed patient data, i.e. an architecture of the model is defined and model-specific parameters are learned).

In summary, the present combination teaches a system that ingests and preprocesses data from a variety of sources to generate predictive models and analyze data from a new patient with the predictive models to determine condition- or event-specific risk scores for the patient (e.g. maternity-related risks).
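The recited training-pipeline steps — identifying the data type, segregating structured from unstructured records, and preprocessing each path differently — can be sketched minimally for illustration. This is a toy example (assuming dict records are "structured" and free-text notes "unstructured"); it is not taken from either reference:

```python
# Segregate records by identified data type: dicts -> structured, text -> unstructured.
def segregate(records):
    structured, unstructured = [], []
    for rec in records:
        (structured if isinstance(rec, dict) else unstructured).append(rec)
    return structured, unstructured

# Preprocess each path differently: normalize numeric fields vs. tokenize notes.
def preprocess(structured, unstructured):
    cleaned = [{k: float(v) for k, v in rec.items()} for rec in structured]
    tokens = [note.lower().split() for note in unstructured]
    return cleaned, tokens

records = [{"bp_systolic": "128", "fetal_hr": "142"},
           "Dictated note: mild gestational hypertension observed."]
structured, unstructured = segregate(records)
features, note_tokens = preprocess(structured, unstructured)
```

The outputs of both paths would then feed model training, which is where the claim's "defining an architecture" step picks up.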
Though the trained models are disclosed as utilizing machine learning (see [0058], [0064]), they are not explicitly disclosed as including a neural network comprising a hidden layer and the feedback layer. However, Lapointe teaches that a widely-used decision support machine learning model that is specifically applicable to maternity-related risk prediction is a neural network with a hidden layer and feedback layer (Lapointe Col2 L10-48, Col6 L27-55, Col24 L6-23). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the self-learning model of the combination to specifically include a neural network as in Lapointe in order to utilize a widely-available model type that is particularly suited to clinical prediction and can be iteratively retrained at any time to improve accuracy (as suggested by Lapointe Col2 L10-48, Col24 L6-23).

Claims 8, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham in view of Tupin and Edwards et al. (US 20090106004 A1).

Claim 8

Amarasingham teaches a method comprising: (Amarasingham abstract, [0029], [0042], [0064], noting a computer system for performing a method of evaluating clinical risk via trained machine learning predictive models); receiving by a pre-trained machine learning model, a new patient record data associated with a patient; wherein the new patient record data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data (Amarasingham [0048], noting patient data (i.e.
a new patient record data associated with a particular patient) is continually received; such data can include clinical data like patient blood pressure as in [0045] & [0062] as well as a variety of other types of patient-related data as in [0030]-[0032]); applying learned parameters of the pre-trained machine learning model, to the new patient record data; predicting by the pre-trained machine learning model and based on the learned parameters, at least one selected from a (Amarasingham Fig. 5, [0056], [0061]-[0062], noting disease/risk logic module that includes a variety of health condition-specific risk models to assess patterns in the input patient data to determine various risk scores for one or more specific health conditions/events for each patient; see also [0057] & [0074]-[0075], noting the determined model outputs can represent a risk of a patient having a given outcome); performing by an exploratory data analysis module, a statistical exploration of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors to compute incidence (Amarasingham [0058], noting the analytics of the system can include use of a statistically-based learning model to develop inferences based on data patterns and relationships, i.e. performance of an exploratory analysis. 
See also [0067], noting analytic outputs of the system include graphical representations of data for a patient population, comparison of incidence rates of predicted events to the rates of prediction, and other indicators); and rendering by a graphical dashboard, the synthesized results of the exploratory data analysis as graphical presentations enabling a clinician for interpretation and assessment of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors; and wherein the exploratory data analysis is configured to support the clinician in designing one or more of a prevention strategy and an intervention strategy (Amarasingham [0066]-[0068], noting system analytics are graphically rendered at an interactive dashboard for review by a clinical user (e.g. a member of a clinical intervention team), considered equivalent to “enabling” a clinician to interpret and assess at least one health risk factor and facilitating “support” of care decisions). In summary, Amarasingham teaches a system that ingests and preprocesses data from a variety of sources to generate predictive models and analyzes data from a new patient with the predictive models to determine condition- or event-specific risk scores for the patient. Though Amarasingham does contemplate the use of condition-specific risk models (see [0061]-[0062], [0078], where specific examples of predictive model types for various healthcare conditions/events are listed, and it is noted that “other” types are also contemplated), it makes no specific mention of a patient being a maternal woman and the risk scores being maternal and fetal risk scores. Further, though the analytics produced by the system include population-level data and incidence rates of a predicted event (see [0067]), there is no explicit mention of computing prevalence of an event, nor of showing variations in analytic results across multiple patient cohorts. 
Accordingly, Amarasingham fails to explicitly disclose the following aspects of the instant claim: the received new patient record comprising a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; predicting by the pre-trained machine learning model and based on the learned parameters, at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk scores each represent a probability of a maternity-related healthcare event of the patient and the fetal risk scores each represent a probability of a fetus-related healthcare event of the fetus of the patient; compute incidence and prevalence values for one or more patient cohorts; and generate synthesized results for showing variations in a behavior of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors across the patient cohorts. However, Tupin teaches an analogous patient monitoring and data analysis system that monitors clinical data (e.g. blood pressure) for a maternal woman as well as clinical data (e.g. fetal heart rate) for a fetus of the maternal woman (Tupin [0021], noting fetal heart rate and maternal blood pressure as two parameters monitored by the system) and uses such data to calculate a number of individual fetal and/or maternal health indicators that are integrated into an overall risk score quantifying when an adverse pregnancy outcome or complication is likely to be occurring (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. 
maternal and fetal risk scores for each of a plurality of risk factors) are developed and aggregated to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generic condition-specific risk prediction system of Amarasingham to specifically apply to pregnancy-related maternal and fetal risks based on maternal and fetal vital sign data as in Tupin because Tupin shows that there is a great need for pregnancy-related monitoring and risk quantification of both mother and baby so that appropriate pre- and post-natal care may be tailored to the pregnancy (see Tupin [0007]). The result of such a combination would include the development of condition-specific predictive models (as in Amarasingham) that can analyze maternal data like blood pressure and fetal data like heart rate (as in Tupin) and output condition-specific risk scores for both maternal risk factors and fetal risk factors.

Additionally, Edwards teaches a clinical analytics platform that facilitates computation and comparison of metrics like incidence and prevalence of clinical outcomes across different patient subpopulations (Edwards [0028], [0097], [0101]-[0102]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the various clinical analytics available at the dashboard of the combination to include prevalence and facilitate comparison across multiple cohorts as in Edwards in order to expand the type and specificity of clinically useful analytics provided to a user so that the user is more informed and improved patient outcomes and intervention investment guidance are facilitated (as suggested by Edwards [0028]).
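The incidence-and-prevalence analytics the rejection maps to Edwards can be illustrated with a small, self-contained sketch. Cohort names and numbers below are fabricated for the example, not drawn from any cited reference:

```python
# Incidence: new cases over the period divided by the population initially at risk.
def incidence(new_cases, at_risk):
    return new_cases / at_risk

# Prevalence: all current cases divided by the total population.
def prevalence(existing_cases, population):
    return existing_cases / population

cohorts = {
    # cohort: (new cases this period, at-risk pop., current cases, total pop.)
    "age_under_25": (4, 200, 10, 210),
    "age_25_to_34": (9, 300, 30, 320),
}
# Per-cohort metrics, ready to be compared side by side at a dashboard.
metrics = {name: {"incidence": incidence(n, r), "prevalence": prevalence(c, p)}
           for name, (n, r, c, p) in cohorts.items()}
```

Rendering `metrics` graphically per cohort is the kind of cross-subpopulation comparison the Edwards citation describes.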
Claim 10

Amarasingham in view of Tupin and Edwards teaches the method of claim 8, and the combination further teaches computing, by a statistical technique, an overall risk score from at least one selected from the maternal risk score and the fetal risk score, wherein the overall risk score is configured as a maternal and infant health insights and cognitive intelligence (MIHIC) score (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. maternal and fetal risk scores for each of a plurality of risk factors) are developed and integrated into an aggregate NMI (i.e. overall risk score) personalized to a single pregnancy to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.; the use of software algorithms to integrate multiple individual NMIs into an aggregate NMI is considered equivalent to using a statistical technique to calculate an overall MIHIC risk score because software algorithms employ mathematical and/or statistical calculations to analyze data and provide outputs, and no specific statistical techniques or models are recited in the claim).

Claim 12

Amarasingham in view of Tupin and Edwards teaches the method of claim 8, and the combination further teaches calculating, an overall risk score from at least one selected from the maternal risk score and the fetal risk score using a statistical technique, wherein the overall risk score is configured as a maternal and infant health insights and cognitive intelligence (MIHIC) score, and wherein MIHIC score represents a quantification of risk for pregnancy outcome (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. maternal and fetal risk scores for each of a plurality of risk factors) are developed and integrated into an aggregate NMI (i.e.
overall risk score) personalized to a single pregnancy to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.; the use of software algorithms to integrate multiple individual NMIs into an aggregate NMI is considered equivalent to using a statistical technique to calculate an overall MIHIC risk score because software algorithms employ mathematical and/or statistical calculations to analyze data and provide outputs, and no specific statistical techniques or models are recited in the claim).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham in view of Tupin and Edwards as applied to claims 8 and 12, and further in view of Roberts et al. (US 20190133536 A1).

Claim 13

Amarasingham in view of Tupin and Edwards teaches the method of claim 12, showing that different condition- and event-specific risk scores and overall risk score types can be calculated. The present combination, however, fails to explicitly disclose generating the overall risk score for one or more of postpartum depression and for a caesarian delivery. However, Roberts teaches an analogous pregnancy-related risk prediction system that predicts a risk score for caesarian delivery (Roberts abstract). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the overall pregnancy-related risk score of the combination such that it is related to the specific condition of caesarian delivery as in Roberts because C-section deliveries are known to increase the risk of complications and providing a risk score related to a C-section would help physicians to more accurately determine whether or not to recommend C-sections to patients in labor (as suggested by Roberts [0002]).

Claims 1-5, 7, 9, 11, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham in view of Tupin, Lapointe, and Edwards.
Claim 1

Amarasingham teaches a system comprising: a processor executing a pre-trained machine learning model stored in a computer-readable memory (Amarasingham abstract, [0029], [0042], [0064], noting a computer system for evaluating clinical risk via trained machine learning predictive models, i.e. a processor-based computer system that executes models organized/stored in memory); wherein the pre-trained machine learning model is obtained by: acquiring a plurality of patient record data from a database, wherein the patient record data comprises a text data and an image data (Amarasingham Fig. 5, [0029]-[0036], [0042], [0053], noting the system obtains clinical and non-clinical patient data in various electronic formats from multiple sources; [0030] specifically notes that acquired data includes text-based data like medical history, dictated clinical notes and records, etc., while Figs. 4 & 10 as well as [0030], [0045], [0087], & [0103] further show that patient image data and/or radiological imaging exams (i.e. image data) are included as acquired data types); identifying a data type of the patient record data; segregating, the patient record data into a structured data and an unstructured data; pre-processing the structured data and the unstructured data and generate preprocessed data (Amarasingham Fig. 6, [0053]-[0054], [0057]-[0058], noting data extraction and cleansing processes that preprocess the obtained data differently based on whether it is structured or unstructured, indicating that such data types are identified and segregated for pre-processing); defining an architecture of a machine learning model, (Amarasingham [0064], noting the predictive machine learning models are trained and retrained with the processed patient data, i.e.
an architecture of the model is defined and model-specific parameters are learned); wherein the system is configured to: receive by the pre-trained machine learning model, a new patient record data associated with a patient, wherein the new patient record data comprises a first clinical data comprising patient data comprising a patient blood pressure, a second clinical data (Amarasingham [0048], noting patient data (i.e. a new patient record data associated with a particular patient) is continually received; such data can include clinical data like patient blood pressure as in [0045] & [0062] as well as a variety of other types of patient-related data as in [0030]-[0032]); apply learned parameters of the pre-trained machine learning model, to the new patient record data; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a (Amarasingham Fig. 5, [0056], [0061]-[0062], noting disease/risk logic module that includes a variety of health condition-specific risk models to assess patterns in the input patient data to determine various risk scores for one or more specific health conditions/events for each patient; see also [0057] & [0074]-[0075], noting the determined model outputs can represent a risk of a patient having a given outcome); perform an exploratory data analysis by an exploratory data analysis module executed by the processor, a statistical exploration of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors to compute incidence (Amarasingham [0058], noting the analytics of the system can include use of a statistically-based learning model to develop inferences based on data patterns and relationships, i.e. performance of an exploratory analysis. 
See also [0067], noting analytic outputs of the system include graphical representations of data for a patient population, comparison of incidence rates of predicted events to the rates of prediction, and other indicators); and render by a graphical dashboard, the synthesized results of the exploratory data analysis as graphical presentations enabling a clinician for interpretation and assessment of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors; and wherein the exploratory data analysis is configured to support the clinician in designing one or more of a prevention strategy and an intervention strategy (Amarasingham [0066]-[0068], noting system analytics are graphically rendered at an interactive dashboard for review by a clinical user (e.g. a member of a clinical intervention team), considered equivalent to “enabling” a clinician to interpret and assess at least one health risk factor and facilitating “support” of care decisions). In summary, Amarasingham teaches a system that ingests and preprocesses data from a variety of sources to generate predictive models and analyzes data from a new patient with the predictive models to determine condition- or event-specific risk scores for the patient. Though the trained models are disclosed as utilizing machine learning (see [0058], [0064]), they are not explicitly disclosed as including a neural network comprising a hidden layer and a feedback layer. Further, though Amarasingham does contemplate the use of condition-specific risk models (see [0061]-[0062], [0078], where specific examples of predictive model types for various healthcare conditions/events are listed, and it is noted that “other” types are also contemplated), it makes no specific mention of a patient being a maternal woman and the risk scores being maternal and fetal risk scores. 
Finally, though the analytics produced by the system include population-level data and incidence rates of a predicted event (see [0067]), there is no explicit mention of computing prevalence of an event, nor of showing variations in analytic results across multiple patient cohorts. Accordingly, Amarasingham fails to explicitly disclose the following aspects of the instant claim: wherein the machine learning model comprises a neural network comprising a hidden layer and a feed-back layer; the received new patient record comprising a second clinical data comprising fetal data comprising a fetal heart rate, wherein the patient is a maternal woman; predict by the pre-trained machine learning model and based on the learned parameters, at least one selected from a maternal risk score for one or more health risk factors of a first plurality of health risk factors associated with the patient, a fetal risk score for one or more health risk factors of a second plurality of health risk factors associated with a fetus of the patient, wherein the maternal risk scores each represent a probability of a maternity-related healthcare event of the patient and the fetal risk scores each represent a probability of a fetus-related healthcare event of the fetus of the patient; compute incidence and prevalence values for one or more patient cohorts; and generate synthesized results for showing variations in a behavior of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors across the patient cohorts. However, Tupin teaches an analogous patient monitoring and data analysis system that monitors clinical data (e.g. blood pressure) for a maternal woman as well as clinical data (e.g. 
fetal heart rate) for a fetus of the maternal woman (Tupin [0021], noting fetal heart rate and maternal blood pressure as two parameters monitored by the system) and uses such data to calculate a number of individual fetal and/or maternal health indicators that are integrated into an overall risk score quantifying when an adverse pregnancy outcome or complication is likely to be occurring (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. maternal and fetal risk scores for each of a plurality of risk factors) are developed and aggregated to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generic condition-specific risk prediction system of Amarasingham to specifically apply to pregnancy-related maternal and fetal risks based on maternal and fetal vital sign data as in Tupin because Tupin shows that there is a great need for pregnancy-related monitoring and risk quantification of both mother and baby so that appropriate pre- and post-natal care may be tailored to the pregnancy (see Tupin [0007]). The result of such a combination would include the development of condition-specific predictive models (as in Amarasingham) that can analyze maternal data like blood pressure and fetal data like heart rate (as in Tupin) and output condition-specific risk scores for both maternal risk factors and fetal risk factors. Further, Lapointe teaches that a widely-used decision support machine learning model that is specifically applicable to maternity-related risk prediction is a neural network with a hidden layer and feedback layer (Lapointe Col2 L10-48, Col6 L27-55, Col24 L6-23). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the self-learning model of the combination to specifically include a neural network as in Lapointe in order to utilize a widely-available model type that is particularly suited to clinical prediction and can be iteratively retrained at any time to improve accuracy (as suggested by Lapointe Col2 L10-48, Col24 L6-23). Additionally, Edwards teaches a clinical analytics platform that facilitates computation and comparison of metrics like incidence and prevalence of clinical outcomes across different patient subpopulations (Edwards [0028], [0097], [0101]-[0102]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the various clinical analytics available at the dashboard of the combination to include prevalence and facilitate comparison across multiple cohorts as in Edwards in order to expand the type and specificity of clinically useful analytics provided to a user so that the user is more informed and improved patient outcomes and intervention investment guidance is facilitated (as suggested by Edwards [0028]).

Claim 2

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 1, and the combination further teaches wherein the patient record data further comprises a demographic data, a medical data, social data, genomic data, omics data, and a genetic data (Amarasingham [0030]-[0032], noting patient data can include demographic data, medical data, social data, and family history and genetic data (considered to include genomic, omics, and genetic data)). 
Claim 3

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 1, and the combination further teaches wherein the new patient record data further comprises a patient self-generated data, wherein the patient self-generated data comprises social media data, lifestyle data, and data from wearable devices (Amarasingham [0030]-[0032], noting patient data can include patient-entered social media data, behavioral/lifestyle data, and data from networked medical/monitoring devices like blood pressure devices and glucose meters (i.e. data from wearable devices like the example electronic device worn by a patient in [0086]); see also [0038], noting data is obtained from RFID tags worn by patients, i.e. wearable devices).

Claim 4

Note: the claim recites “wherein the machine learning model further enables exploration and correlation of the patient data and the fetal data associated with the first plurality of health risk factors and the second plurality of health risk factors.” This limitation merely describes an intended result or capability of the machine learning model without positively reciting any additional structural or functional aspects of the invention, and is thus not considered patentably limiting (see MPEP 2111.04). However, in the interest of compact prosecution, this limitation has been addressed with prior art below. 
Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 1, and the combination further teaches wherein the machine learning model further enables exploration and correlation of the patient data and the fetal data associated with the first plurality of health risk factors and the second plurality of health risk factors (Amarasingham [0064], noting the predictive models are tuned based on actual observed outcomes so that the contribution of various risk factors to the calculated risk scores may be adjusted, such that the model can be said to “enable” exploration and correlation of the various patient data (including maternal and fetal data when considered in the context of the combination with Tupin) with the health risk factors).

Claim 5

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 1, and the combination further teaches wherein the system is operable to calculate, an overall risk score from at least one selected from the maternal risk score and the fetal risk score using a statistical technique, wherein the overall risk score is configured as a maternal and infant health insights and cognitive intelligence (MIHIC) score, and wherein MIHIC score represents a quantification of risk for pregnancy outcome (Tupin [0086], [0114]-[0119], noting individual NMIs based on fetal and/or maternal clinical data (i.e. maternal and fetal risk scores for each of a plurality of risk factors) are developed and integrated into an aggregate NMI (i.e. 
overall risk score) personalized to a single pregnancy to provide early indications of potential pregnancy problems or adverse events like premature birth, meconium aspiration, cardiac issues, etc.; the use of software algorithms to integrate multiple individual NMIs into an aggregate NMI is considered equivalent to using a statistical technique to calculate an overall MIHIC risk score because software algorithms employ mathematical and/or statistical calculations to analyze data and provide outputs, and no specific statistical techniques or models are recited in the claim).

Claim 7

Note: the claim recites specific types of chart visualizations presented on a display by the system. However, the specific type of chart does not alter or affect the positively recited function of displaying visualizations at a display, and the particular types of charts are considered non-functional descriptive language and are not patentably limiting in this case (see MPEP 2111.05). Accordingly, the claim language is met by the prior art teaching visualization of any kind of output at a display. Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 1, and the combination further teaches wherein the system presents, on a display device, visualizations using one or more speedometer chart, gauge meter chart and horizontal bar chart (Amarasingham [0048], [0066]-[0068], [0075], noting the system can display various graphical report or charts relating to the analytics).

Claim 9

Amarasingham in view of Tupin and Edwards teaches the method of claim 8, and the combination further teaches wherein the pre-trained machine learning model is obtained by: receiving by a processor, a patient record data associated with a first patient, wherein the patient record data comprises a text data and an image data (Amarasingham Fig. 
5, [0029]-[0036], [0042], [0053], noting the system obtains clinical and non-clinical patient data in various electronic formats from multiple sources; [0030] specifically notes that acquired data includes text-based data like medical history, dictated clinical notes and records, etc., while Figs. 4 & 10 as well as [0030], [0045], [0087], & [0103] further show that patient image data and/or radiological imaging exams (i.e. image data) are included as acquired data types); identifying by the processor, a data type of the patient record data; segregating the patient record data into a structured data and an unstructured data; pre-processing by the processor, the structured data and the unstructured data and generate preprocessed data (Amarasingham Fig. 6, [0053]-[0054], [0057]-[0058], noting data extraction and cleansing processes that preprocess the obtained data differently based on whether it is structured or unstructured, indicating that such data types are identified and segregated for pre-processing); defining an architecture of a machine learning model, (Amarasingham [0064], noting the predictive machine learning models are trained and retrained with the processed patient data, i.e. 
an architecture of the model is defined and model-specific parameters are learned); wherein the patient record data further comprises a demographic data, a medical data, social data, genomic data, omics data, and a genetic data (Amarasingham [0030]-[0032], noting patient data can include demographic data, medical data, social data, and family history and genetic data (considered to include genomic, omics, and genetic data)); and wherein the new patient record data further comprises a patient self-generated data, wherein the patient self-generated data comprises social media data, lifestyle data, and data from wearable devices (Amarasingham [0030]-[0032], noting patient data can include patient-entered social media data, behavioral/lifestyle data, and data from networked medical/monitoring devices like blood pressure devices and glucose meters (i.e. data from wearable devices like the example electronic device worn by a patient in [0086]); see also [0038], noting data is obtained from RFID tags worn by patients, i.e. wearable devices). In summary, the present combination teaches a method for ingesting and preprocessing data from a variety of sources to generate predictive models and analyze data from a new patient with the predictive models to determine condition- or event-specific risk scores for the patient (e.g. maternity-related risks). Though the trained models are disclosed as utilizing machine learning (see [0058], [0064]), they are not explicitly disclosed as including a neural network comprising a hidden layer and a feedback layer. However, Lapointe teaches that a widely-used decision support machine learning model that is specifically applicable to maternity-related risk prediction is a neural network with a hidden layer and feedback layer (Lapointe Col2 L10-48, Col6 L27-55, Col24 L6-23). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the self-learning model of the combination to specifically include a neural network as in Lapointe in order to utilize a widely-available model type that is particularly suited to clinical prediction and can be iteratively retrained at any time to improve accuracy (as suggested by Lapointe Col2 L10-48, Col24 L6-23).

Claim 11

Amarasingham in view of Tupin, Edwards, and Lapointe teaches the method of claim 9, and the combination further teaches wherein the pre-trained machine learning model is a self-learning model comprising the feed-back layer that enables the pre-trained machine learning model to learn continuously from the new patient record data and a feed-back from the clinician to continually improve a prediction of at least one selected from the maternal risk score of the one or more of the first plurality of health risk factors and the fetal risk score of the one or more of the second plurality of health risk factors (Amarasingham [0064], noting the predictive machine learning models (e.g. including a neural network with various layers when considered in the context of the combination with Lapointe) are self-learning and continuously updated over time based on new patient data and actual observed outcomes of an event (e.g. feedback as input by a clinician as in [0079]). See also Lapointe Col6 L27-50, Col11 L64 – Col7 L4, Col24 L6-23, noting neural network maternity-related prediction models may be updated by feedback manually entered by a user). 
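For context on the architecture at issue in these rejections, a neural network with one hidden layer and a feed-back layer is conventionally realized as an Elman-style recurrent network, in which the hidden-layer activations are fed back as context inputs on the next prediction, allowing the model to incorporate prior state over time. The following sketch is purely illustrative: the class name, dimensions, and feature choices are hypothetical and are not drawn from Lapointe or any other cited reference.

```python
import numpy as np

class ElmanRiskNet:
    """Illustrative Elman-style network: one hidden layer whose
    activations are fed back (a context/feed-back layer) on the
    next prediction. Hypothetical sketch, not from any reference."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_fb = rng.normal(0, 0.1, (n_hidden, n_hidden))  # feed-back -> hidden
        self.w_out = rng.normal(0, 0.1, n_hidden)             # hidden -> risk score
        self.context = np.zeros(n_hidden)                     # fed-back hidden state

    def predict(self, x):
        # Hidden layer sees the current input plus the fed-back
        # activations from the previous prediction
        h = np.tanh(self.W_in @ x + self.W_fb @ self.context)
        self.context = h                                      # feed-back layer update
        return 1.0 / (1.0 + np.exp(-self.w_out @ h))          # risk score in (0, 1)

# Hypothetical feature vector, e.g. maternal vitals plus fetal heart rate
net = ElmanRiskNet(n_in=4, n_hidden=8)
risk = net.predict(np.array([120.0, 80.0, 98.6, 140.0]))
```

Retraining such a network on newly observed outcomes or clinician-entered corrections is what would make it "self-learning" in the sense addressed for Claim 11; the sketch above omits any training procedure.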
Claim 17

Amarasingham in view of Tupin and Lapointe teaches the system of claim 15, and the combination further teaches an exploratory analysis module operable to perform a statistical exploration of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors to compute incidence (Amarasingham [0058], noting the analytics of the system can include use of a statistically-based learning model to develop inferences based on data patterns and relationships, i.e. performance of an exploratory analysis. See also [0067], noting analytic outputs of the system include graphical representations of data for a patient population, comparison of incidence rates of predicted events to the rates of prediction, and other indicators). Though the present combination teaches computation and display of clinical analytics including population-level data and incidence rates of a predicted event (see Amarasingham [0067]), there is no explicit disclosure of computing prevalence of an event, nor of showing variations in analytic results across multiple patient cohorts. However, Edwards teaches a clinical analytics platform that facilitates computation and comparison of metrics like incidence and prevalence of clinical outcomes across different patient subpopulations (Edwards [0028], [0097], [0101]-[0102]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the various clinical analytics available at the dashboard of the combination to include prevalence and facilitate comparison across multiple cohorts as in Edwards in order to expand the type and specificity of clinically useful analytics provided to a user so that the user is more informed and improved patient outcomes and intervention investment guidance is facilitated (as suggested by Edwards [0028]). 
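As a point of reference for the incidence/prevalence limitation, the two metrics reduce to simple ratios (new cases per population at risk over a period, versus all current cases per population) that can be compared across cohorts. The sketch below is illustrative only; the cohort names and counts are entirely hypothetical.

```python
# Illustrative computation of incidence and prevalence across patient
# cohorts. All cohort names and counts below are hypothetical.
cohorts = {
    "cohort_A": {"population": 1000, "new_cases": 12, "existing_cases": 45},
    "cohort_B": {"population": 800,  "new_cases": 20, "existing_cases": 28},
}

def incidence(c):
    # new cases arising during the observation period / population at risk
    return c["new_cases"] / c["population"]

def prevalence(c):
    # all current cases (new + pre-existing) / population
    return (c["new_cases"] + c["existing_cases"]) / c["population"]

# Variation of the metrics across cohorts, as a dashboard might surface it
summary = {
    name: {"incidence": incidence(c), "prevalence": prevalence(c)}
    for name, c in cohorts.items()
}
```

With these hypothetical figures, cohort_A shows the lower incidence (0.012 vs 0.025) while the prevalence gap is narrower (0.057 vs 0.060), which is the kind of cross-cohort variation the Edwards combination is cited as supplying.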
Claim 18

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 17, and the combination further teaches wherein the exploratory data analysis is configured to support the clinician in designing one or more of a prevention strategy and an intervention strategy (Amarasingham [0066]-[0068], noting system analytics are graphically rendered at an interactive dashboard for review by a clinical user (e.g. a member of a clinical intervention team), considered equivalent to facilitating “support” of care decisions).

Claim 19

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 18, and the combination further teaches wherein the system is further configured to render, by a graphical dashboard, the synthesized results of the exploratory data analysis as graphical presentations enabling a clinician for interpretation and assessment of at least one health risk factor selected from one or more of the first plurality of health risk factors and the second plurality of health risk factors (Amarasingham [0066]-[0068], noting system analytics are graphically rendered at an interactive dashboard for review by a clinical user (e.g. a member of a clinical intervention team), considered equivalent to “enabling” a clinician to interpret and assess at least one health risk factor).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Amarasingham in view of Tupin, Lapointe, and Edwards as applied to claims 1 and 5 above, and further in view of Roberts.

Claim 6

Amarasingham in view of Tupin, Lapointe, and Edwards teaches the system of claim 5, showing that different condition- and event-specific risk scores and overall risk score types can be calculated. However, the present combination fails to explicitly disclose wherein the system is operable to generate the overall risk score for one or more of postpartum depression and for caesarian delivery. 
However, Roberts teaches an analogous pregnancy-related risk prediction system that predicts a risk score for caesarian delivery (Roberts abstract). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the overall pregnancy-related risk score of the combination such that it is related to the specific condition of caesarian delivery as in Roberts because C-section deliveries are known to increase the risk of complications and providing a risk score related to a C-section would help physicians to more accurately determine whether or not to recommend C-sections to patients in labor (as suggested by Roberts [0002]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAREN A HRANEK whose telephone number is (571)272-1679. The examiner can normally be reached M-F 8:00-4:00 ET. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached on 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KAREN A HRANEK/ Primary Examiner, Art Unit 3684

Prosecution Timeline

Oct 12, 2023
Application Filed
Jun 03, 2025
Non-Final Rejection — §101, §103, §112
Aug 30, 2025
Interview Requested
Dec 04, 2025
Response Filed
Mar 16, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
