Prosecution Insights
Last updated: April 19, 2026
Application No. 17/493,818

System And Method For Triggering Mental Healthcare Services Based On Prediction Of Critical Events

Final Rejection (§101, §103)
Filed: Oct 04, 2021
Examiner: HIGGS, STELLA EUN
Art Unit: 3681
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Koa Health Digital Solutions S.L.U.
OA Round: 4 (Final)
Grant Probability: 39% (At Risk)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 3y 8m
Grant Probability With Interview: 73%

Examiner Intelligence

Career Allow Rate: 39% (grants only 39% of cases; 138 granted / 352 resolved; -12.8% vs TC avg)
Interview Lift: +34.1% (strong; allowance rate for resolved cases with vs. without interview)
Avg Prosecution: 3y 8m (typical timeline)
Currently Pending: 44
Total Applications: 396 (career history, across all art units)

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 352 resolved cases.

Office Action

§101 §103
DETAILED ACTION

This action is made in response to the amendments/remarks filed on December 16, 2025. Claims 1-7, 9, 11-18, and 20-21 are pending. Claims 8, 10, and 19 have been previously cancelled. Claims 1, 18, 20, and 21, the independent claims, have been amended. This action is made final.

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to the 101 rejection have been fully considered but are not persuasive. Applicant contends the claims improve another technology or technical field by reducing computational demands on the system. However, the Examiner respectfully disagrees. MPEP 2106.04(d)(1) and MPEP 2106.05(a) indicate that a practical application may be present where the claimed invention provides a technical solution to a technical problem. See, e.g., DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259 (Fed. Cir. 2014) (finding that claiming a website that retained the “look and feel” of a host webpage provided a technological solution to the problem of retention of website visitors by utilizing a website descriptor that emulated the “look and feel” of the host webpage, where the problem arose out of the internet and was thus a technical problem). Here, Applicant’s problem is not a technological problem caused by the computer. The problem of diagnosing mental health crises is not a problem caused by the computer, but is a problem that existed and/or exists regardless of whether a computer is involved in the process. At best, Applicant’s identified problem is a healthcare management problem. Because no technological problem is present, the claims do not provide a practical application.
Furthermore, insomuch as Applicant asserts the claimed method is an improvement in that it speeds system operation and reduces computational demands on the system, the argument is not persuasive. The claimed invention uses a computer as a tool, and any improvement present is an improvement to the abstract idea of predicting a mental health crisis (using less/higher quality data). Furthermore, insomuch as Applicant alleges there is an improvement to system operation speed and computational demands, there is no indication that a problem exists with the operational speeds and computational demands on the system, nor that the claimed invention solves this problem. As a first matter, merely stating that a less complex model may be selected so as to speed system operation and reduce computational demands is not indicative of a technical problem. Furthermore, there is nothing in the claims that would necessarily result in the selected model being the least computationally complex, or in its selection being faster and reducing computational resources. For instance, in the case where the quality assessment is poor, the result may be that a computationally complex and slow model is selected. Because no technological problem is present, and even if one were, the claims do not explicitly solve any purported technical problem; therefore, the claims do not provide a practical application.

Applicant’s arguments with respect to the prior art rejection have been fully considered but are not persuasive. Applicant argues the prior art references fail to disclose making two distinct quality assessments of two distinct sets of data, and selecting a model based on both of the two distinct quality assessments. However, the Examiner respectfully disagrees. Dibari is directed to a method/system for detecting a mental health condition using structured and unstructured information.
Structured information includes data such as electronic health records, patient attributes, biomarker data, etc., and unstructured data includes free text notes, transcripts, etc. Dibari further teaches information such as biomarker data associated with specific mental health conditions can have an associated value to determine sentiment (i.e., a quality assessment) (e.g., see [0019], [0030]). Dibari further teaches various unstructured data can be analyzed and scored (i.e., a second quality assessment) (e.g., see [0030], [0050], [0058], [0060]). Furthermore, while Dibari teaches the values and scores can be aggregated to generate an overall score/sentiment, Dibari nonetheless teaches assessing some property, characteristic, or attribute (i.e., quality) of distinct data. As such, because Dibari teaches individually scoring and/or assigning values to both structured and unstructured data, which are distinct, Dibari teaches the claimed limitation.

Applicant further argues the prior art references fail to teach “wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d).” However, the Examiner respectfully disagrees. Higgins is directed towards selecting a particular type of machine learning model to complete different tasks. Higgins further teaches that one model, of a plurality of models, may be selected based on one or more defined inputs, including whether the data is low risk (i.e., quality) (e.g., see [0033], [0036], [0075]). Furthermore, as noted above, Dibari teaches assessing an overall score/sentiment that can be made up of individual and distinct values/scores of structured and unstructured data. As such, Dibari-Higgins teaches “selecting a model based at least in part on the first quality assessment and the second quality assessment[, wherein the second quality assessment is distinct from the first quality assessment].”
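In data-processing terms, the disputed limitation (e) amounts to a selection function keyed on two independent quality scores, such that changing either score can change which model is chosen. A minimal sketch of that idea follows; all function names, model names, and thresholds are hypothetical and are not drawn from the claims, Dibari, or Higgins:

```python
def select_model(q_structured: float, q_unstructured: float) -> str:
    """Select a model identifier from two distinct quality scores in [0, 1].

    Illustrative only: the thresholds and model names are hypothetical.
    """
    if q_structured >= 0.8 and q_unstructured >= 0.8:
        return "complex_ml_model"   # both data sources reliable
    if q_structured >= 0.8 or q_unstructured >= 0.8:
        return "simple_ml_model"    # only one reliable source
    return "rule_based_model"       # poor data on both: fall back to rules

# The selection depends on BOTH assessments: holding one score fixed
# and varying the other can change the outcome.
```

Note that, consistent with the Examiner's response to the speed argument above, nothing about such a selection guarantees that the chosen model is the least computationally complex; with poor scores a scheme like this could equally fall through to a slower model.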
Accordingly, it would have been obvious to modify Dibari in view of Higgins before the effective filing date with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9, 11-18, and 20-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-7, 9, and 11-17 recite a method of predicting a mental health crisis, which is within the statutory category of a process. Claim 18 recites a method of predicting a mental health crisis, which is within the statutory category of a process. Claim 20 recites a method of predicting a mental health crisis, which is within the statutory category of a process. Claim 21 recites a crisis event detection system for predicting a mental health crisis, which is within the statutory category of a machine. Claims are eligible for patent protection under § 101 if they are in one of the four statutory categories and not directed to a judicial exception to patentability. Alice Corp. v. CLS Bank Int'l, 573 U.S. ___ (2014).
Claims 1-7, 9, 11-18, and 20-21, each considered as a whole and as an ordered combination, are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

MPEP 2106 Step 2A – Prong 1: The limitations of claims 1, 18, 20, and 21 (claim 1 being representative):

(a) receiving an amount of structured data, wherein the structured data includes information about each patient of a plurality of patients;
(b) receiving an amount of unstructured data, wherein the unstructured data includes information about each of at least some patients of the plurality of patients and the unstructured data includes data distinct from the structured data;
(c) making a first quality assessment about the data received in (a) by: making, for each patient, a patient-level quality assessment of the structured data received in (a); and from the patient-level quality assessments of the structured data, determining the first quality assessment;
(d) making a second quality assessment about the data received in (b) by: making, for each patient, a patient-level quality assessment of the unstructured data received in (b), including the data distinct from the structured data; and from the patient-level quality assessments of the unstructured data, including the data distinct from the structured data, determining the second quality assessment, wherein the second quality assessment is distinct from the first quality assessment;
(e) selecting one model of a plurality of selectable models, wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d);
(f) depending on the quality assessment made in step (c) and/or the quality assessment made in step (d), determining a set of at least some structured data or at least some unstructured data on which to perform feature extraction to thereby obtain results of the feature extraction;
(g) supplying the results of the feature extraction obtained in (f) to the model selected in (e) so that the model makes a prediction of a mental health crisis event,

as presently drafted, under the broadest reasonable interpretation, cover a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for the recitation of generic computer components. That is, other than reciting an output unit, the claimed invention amounts to managing personal behavior. For example, but for the noted computer elements, the claim encompasses a person following rules or instructions to receive and process data in the manner described in the abstract idea. The Examiner further notes that “methods of organizing human activity” includes a person’s interaction with a computer (see October 2019 Update: Subject Matter Eligibility at pg. 5). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “method of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

MPEP 2106 Step 2A – Prong 2: This judicial exception is not integrated into a practical application because there are no meaningful limitations that transform the exception into a patent-eligible application. The additional elements merely amount to instructions to apply the exception using generic computer components (“output unit”, recited at a high level of generality). Although the additional elements execute instructions to perform the abstract idea itself, this also does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.)
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

The “crisis event detection system” is not a generic computer component; however, it is recited at a high level of generality and similarly amounts to generally linking the abstract idea to a particular technological environment. (See MPEP 2106.04(d)(1), indicating generally linking an abstract idea to a particular technological environment does not amount to integrating the abstract idea into a practical application.)

The claims further recite the additional element of using a model to make predictions. The use of the trained model provides nothing more than mere instructions to implement the abstract idea on a generic computer (“apply it”). See MPEP 2106.05(f); July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, discussion of items (c)-(e) at pgs. 7-9.

The claims only manipulate abstract data elements as part of performing the abstract idea. They do not set forth improvements to another technological field or to the functioning of the computer itself, and instead use computer elements as tools in a conventional way to perform the abstract idea identified above. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
None of the additional elements recited "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 610, 611 (U.S. 2010)). At the levels of abstraction described above, the claims do not readily lend themselves to a finding that they are directed to a nonabstract idea. Therefore, the analysis proceeds to step 2B. See BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1349 (Fed. Cir. 2016) ("The Enfish claims, understood in light of their specific limitations, were unambiguously directed to an improvement in computer capabilities. Here, in contrast, the claims and their specific limitations do not readily lend themselves to a step-one finding that they are directed to a nonabstract idea. We therefore defer our consideration of the specific claim limitations’ narrowing effect for step two.") (citations omitted).

MPEP 2106 Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons as presented in Step 2A Prong 2. Moreover, the additional elements recited are known and conventional generic computing elements (“output unit”; see [0058] describing the various components as general purpose, common, standard, known to one of ordinary skill, recited at a high level of generality, and in a manner that indicates the additional elements are sufficiently well known that the specification does not need to describe their particulars to satisfy the statutory disclosure requirements). Therefore, these additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept that amounts to significantly more. See MPEP 2106.05(f).
The Federal Circuit has recognized that "an invocation of already-available computers that are not themselves plausibly asserted to be an advance, for use in carrying out improved mathematical calculations, amounts to a recitation of what is 'well-understood, routine, [and] conventional.'" SAP Am., Inc. v. InvestPic, LLC, 890 F.3d 1016, 1023 (Fed. Cir. 2018) (alteration in original) (citing Mayo v. Prometheus, 566 U.S. 66, 73 (2012)). Apart from the instructions to implement the abstract idea, the additional elements only serve to perform well-understood functions (e.g., receiving, translating, and displaying data; see the Specification above, as well as Alice Corp.; Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307 (Fed. Cir. 2016); and Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015), covering the well-known nature of these computer functions).

Furthermore, as discussed above, the additional element of a “crisis event detection system” is recited at a high level of generality and was determined to generally link the abstract idea to a particular technological environment or field of use. This additional element has been re-evaluated under step 2B and has also been found insufficient to provide significantly more. (See MPEP 2106.05(h), indicating generally linking an abstract idea to a particular technological environment does not amount to significantly more.) Similarly, as discussed above, “the model makes a prediction of mental health crisis events” was recited at a high level of generality and provides nothing more than mere instructions to implement an abstract idea on a generic computer (“apply it”). This additional element has been re-evaluated under step 2B and has also been found insufficient to provide significantly more. (See MPEP 2106.05(f), indicating merely adding the words “apply it” or an equivalent does not amount to significantly more.)
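For orientation before the dependent-claim analysis, the representative claim 1 method of steps (a) through (g) can be reduced to the following data-flow sketch. Every function name, scoring rule, and threshold below is hypothetical, supplied only to illustrate the ordering of the claimed steps, not the application's actual implementation:

```python
# Hypothetical sketch of claim 1's data flow, steps (a)-(g). Illustrative only.

def patient_structured_quality(record: dict) -> float:
    # Patient-level quality of structured data: fraction of populated fields
    return sum(v is not None for v in record.values()) / len(record)

def patient_unstructured_quality(notes: str) -> float:
    # Patient-level quality of unstructured data: crude length-based proxy
    return min(len(notes) / 100.0, 1.0)

def extract_features(data) -> list:
    # Stand-in feature extraction: one numeric feature per entry
    return [hash(str(item)) % 100 / 100.0 for item in data]

def predict_crisis(structured: list, unstructured: list, models: dict):
    # (a)/(b) structured and unstructured data are received as arguments
    # (c) first quality assessment aggregated from patient-level assessments
    q1 = sum(map(patient_structured_quality, structured)) / len(structured)
    # (d) second, distinct quality assessment over the unstructured data
    q2 = sum(map(patient_unstructured_quality, unstructured)) / len(unstructured)
    # (e) model selection based at least in part on BOTH assessments
    model = models["ml"] if min(q1, q2) >= 0.5 else models["rules"]
    # (f) the assessments also determine which data set is featurized
    chosen = structured if q1 >= q2 else unstructured
    features = extract_features(chosen)
    # (g) the selected model makes the crisis prediction
    return model(features)
```

Under these assumptions, steps (c) and (d) are simple aggregations, and steps (e) and (f) are branches on the two scores; the sketch only fixes the ordering of the claimed steps, not any particular scoring or model.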
Dependent Claims

The limitations of the dependent claims, but for those addressed below, merely set forth further refinements of the abstract idea without changing the analysis already presented. Claims 2-6, 9, 11, and 16 merely recite which data is utilized and when, how the quality of the data is assessed, and associating a value with a crisis event, which covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions). Claim 7 includes training the model, which, when given its broadest reasonable interpretation in light of the specification, amounts to a mathematical concept that creates data associations and is an abstract idea. Claim 12 includes the additional element of utilizing a particular model. This additional element is considered to “apply it” under both the practical application and significantly more analyses, as detailed above. Claims 13-15 include the additional elements of the system serving web pages and outputting the results. These additional elements are considered to “generally link” the abstract idea to a particular technology and do not provide a practical application or amount to significantly more for the same reasons detailed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9, 11, 13-18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Dibari et al. (USPPN 2021/0090733; hereinafter Dibari) in view of Higgins et al. (USPPN 2021/0065053; hereinafter Higgins).

As to claim 1, Dibari teaches A method (e.g., see Abstract) comprising: (a) receiving an amount of structured data into a crisis event detection system, wherein the structured data includes information about each patient of a plurality of patients (e.g., see [0015], [0019], [0043], [0066] teaching receiving data of a patient, including structured data such as patient case files, electronic health records, patient attributes, biomarker data, etc.); (b) receiving an amount of unstructured data into the crisis event detection system, wherein the unstructured data includes information about each of at least some patients of the plurality of patients and the unstructured data includes data distinct from the structured data (e.g., see [0015], [0043], [0066] teaching receiving data of a patient including unstructured data such as free text notes, transcripts, medical literature, etc.
Structured information includes information in a database, and unstructured information includes free text or other forms not in a database format (i.e., distinct from one another)); (c) making a first quality assessment about the data received in (a) by making, for each patient, a patient-level quality assessment of the structured data received in (a); and from the patient-level quality assessments of the structured data, determining a first quality assessment (e.g., see Fig. 3, [0030], [0031], [0043] wherein data such as specific biomarkers may be associated with specific values that are incorporated into sentiment determination (i.e., first quality assessment)); (d) making a second quality assessment about the data received in (b) by making, for each patient, a patient-level quality assessment of the unstructured data received in (b), including the data distinct from the structured data; and from the patient-level quality assessments of the unstructured data, determining the second quality assessment, wherein the second quality assessment is distinct from the first quality assessment (e.g., see Fig. 3, [0030], [0043], [0050], [0058], [0066] teaching that various words, phrases, and other information can be individually scored (i.e., patient-level quality assessment), which is distinct from the biomarker value); (g) supplying the results of the feature extraction obtained in (f) to the model selected in (e) so that the model makes a prediction of a mental health crisis event, and wherein (a) through (g) are performed by the crisis event detection system (e.g., see [0029]-[0033], [0044], [0063], [0072] wherein the extracted data is used to determine various mental health conditions, including making patient predictions).
While Dibari teaches using machine learning models to make the prediction and further teaches performing feature extraction of the structured/unstructured data (e.g., see [0027], [0028] teaching feature extraction of the patient data, including the structured and/or unstructured data), Dibari fails to teach (e) selecting one model of a plurality of selectable models, wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d); (f) depending on the first quality assessment made in step (c) and/or the second quality assessment made in step (d), determining a set of at least some structured data or at least some unstructured data on which to perform feature extraction to thereby obtain results of the feature extraction.

However, in the same field of endeavor of automating data processing using machine learning models, Higgins teaches (e) selecting one model of a plurality of selectable models, wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d) (e.g., see [0033], [0036], [0039], [0041], [0042], [0074], [0075] of Higgins teaching selecting a model based on one or more inputs, including the quality thereof, to make a prediction); (f) depending on the first quality assessment made in step (c) and/or the second quality assessment made in step (d), determining a set of at least some structured data or at least some unstructured data on which to perform feature extraction to thereby obtain results of the feature extraction (e.g., see [0022], [0027]-[0030] wherein based on the quality of the dataset, a subset of training data is automatically selected). Accordingly, it would have been obvious to modify Dibari in view of Higgins before the effective filing date with a reasonable expectation of success.
One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins).

As to claim 2, the rejection of claim 1 is incorporated. Dibari further teaches wherein the at least some structured data or some unstructured data of (f) includes some of the amount of structured data received in (a) (e.g., see [0027] wherein the feature extraction is performed on the data, which can include structured and/or unstructured data).

As to claim 3, the rejection of claim 1 is incorporated. Dibari further teaches wherein the at least some structured data or some unstructured data of (f) includes some of the amount of unstructured data received in (b) (e.g., see [0027] wherein the feature extraction is performed on the data, which can include structured and/or unstructured data).

As to claim 4, the rejection of claim 1 is incorporated. Dibari further teaches wherein the at least some structured data or some unstructured data of (f) includes none of the structured data received in (a) and none of the unstructured data received in (b) (e.g., see [0027] wherein the feature extraction is performed on the data, which can include structured and/or unstructured data, wherein it would have been obvious that the data used for feature extraction can include both or either).

As to claim 5, the rejection of claim 1 is incorporated. Dibari-Higgins further teaches wherein at least some of the feature extraction of (f) occurs prior to the selecting of the model in (e) (e.g., see rejection above wherein Dibari teaches feature extraction of data to be utilized in a model and Higgins teaches the selection of a model.
Notably, so long as both the features are extracted and the model is selected prior to the features being inputted into the model, the timing of when the feature extraction occurs and when the model is selected are not dependent on one another, and the question of which comes before the other is merely a matter of design preference and/or obvious to try based on a finite number of possible solutions (i.e., feature extraction before model selection, model selection before feature extraction, or both simultaneously). As such, it would have at least been obvious to try as there are a finite number of identified, predictable solutions, with a reasonable expectation of success (e.g., see MPEP 2143)).

As to claim 6, the rejection of claim 1 is incorporated. Dibari-Higgins further teaches wherein none of the feature extraction of (f) occurs prior to the selecting of the model in (e) (e.g., see rejection above wherein Dibari teaches feature extraction of data to be utilized in a model and Higgins teaches the selection of a model. Notably, so long as both the features are extracted and the model is selected prior to the features being inputted into the model, the timing of when the feature extraction occurs and when the model is selected are not dependent on one another, and the question of which comes before the other is merely a matter of design preference and/or obvious to try based on a finite number of possible solutions (i.e., feature extraction before model selection, model selection before feature extraction, or both simultaneously). As such, it would have at least been obvious to try as there are a finite number of identified, predictable solutions, with a reasonable expectation of success (e.g., see MPEP 2143)).

As to claim 7, the rejection of claim 1 is incorporated.
Dibari fails to teach wherein the plurality of selectable models includes a first model and a second model, wherein the first model is a machine learning model that requires training, and wherein the second model is a rule-based model that does not require training, wherein the model selected in (e) is the first model, the method further comprising: (h) using feature extraction results to train the model selected in (e). However, in the same field of endeavor of using machine learning models, Higgins teaches wherein the plurality of selectable models includes a first model and a second model, wherein the first model is a machine learning model that requires training, and wherein the second model is a rule-based model that does not require training, wherein the model selected in (e) is the first model, the method further comprising: (h) using feature extraction results to train the model selected in (e) (e.g., see [0002], [0033], [0041], [0074], [0075] of Higgins teaching selecting a model based on one or more inputs, including the quality thereof, to make a prediction, wherein the models can include rule-based machine learning or those that require training). Accordingly, it would have been obvious to modify Dibari in view of Higgins with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins).

As to claim 9, the rejection of claim 1 is incorporated. While Dibari teaches structured data and patient data, Dibari fails to teach wherein the first quality assessment of the data received in (a) is determined based upon a percentage of the data that is missing.
However, in the same field of endeavor of using machine learning models, Higgins teaches wherein the first quality assessment of the data received in (a) is determined based upon a percentage of the data that is missing (e.g., see [0041] wherein models are selected based on a percentage of missing data). Accordingly, it would have been obvious to modify Dibari in view of Higgins with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins).

As to claim 11, the rejection of claim 1 is incorporated. While Dibari teaches unstructured patient data including free-form notes, Dibari fails to teach wherein the second quality assessment of the unstructured data received in (b) for a patient is determined based on a measure of an amount of notes received into the system for the patient. However, in the same field of endeavor of using machine learning models, Higgins teaches wherein the second quality assessment of the unstructured data received in (b) for a patient is determined based on a measure of an amount of notes received into the system for the patient (e.g., see [0041], wherein models are selected based on whether the input data is missing (i.e., amount of notes)). Accordingly, it would have been obvious to modify Dibari in view of Higgins with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins).

As to claim 13, the rejection of claim 1 is incorporated.
Dibari further teaches wherein the crisis event detection system serves web pages usable to supply both the structured data received in (a) and the unstructured data received in (b) into the crisis event detection system (e.g., see [0074] wherein the system can be utilized with any type of computing environment and software, including cloud/network computing and browser software). As to claim 14, the rejection of claim 1 is incorporated. Dibari further teaches further comprising: (h) outputting from the crisis event detection system an alert, wherein the alert is indicative of the prediction made in (g) of the mental health crisis event (e.g., see [0033] wherein an alert is provided indicating a predicted mental health event). As to claim 15, the rejection of claim 1 is incorporated. Dibari further teaches further comprising: (h) outputting from the crisis event detection system an electronic communication, wherein the electronic communication conveys an alert, wherein the alert is indicative of the prediction made in (g) of the mental health crisis event (e.g., see [0033] wherein an alert is transmitted to notify appropriate personnel indicating a predicted mental health event). As to claim 16, the rejection of claim 1 is incorporated. Dibari further teaches wherein the feature extraction of (f) comprises generating a plurality of records, wherein each record includes a set of informational elements and a corresponding set of informational values, wherein one of the informational elements is a crisis event informational element, and wherein the informational value corresponding to the crisis event informational element indicates whether a mental health crisis event occurred (e.g., see [0046]-[0047], [0050]-[0052] wherein words are associated with a positive, neutral, or negative value indicating a possible mental health event). As to claim 17, the rejection of claim 1 is incorporated. 
Dibari further teaches wherein the feature extraction of (f) comprises generating a plurality of strings of multi-dimensional vector values (e.g., see [0028]-[0029] wherein the feature extraction includes various phrases and/or groups of words (i.e., strings of multi-dimensional vector values)). As to claim 18, Dibari teaches A method (e.g., see Abstract) comprising: (a) receiving an amount of structured data into a crisis event detection system, wherein the structured data includes information about each patient of a plurality of patients (e.g., see [0015], [0043] teaching receiving data of a patient, including structured data such as patient case files, electronic health records, patient attributes, etc.); (b) performing feature extraction on the data received in (a) thereby obtaining a plurality of records of structured data (e.g., see [0027], [0028] teaching feature extraction of the patient data, including the structured and/or unstructured data); (c) receiving an amount of unstructured data into the crisis event detection system, wherein the unstructured data includes information about each of at least some patients of the plurality of patients (e.g., [0015], [0043] teaching receiving data of a patient including unstructured data such as free text notes, transcripts, medical literature, etc.); (d) performing feature extraction on the data received in (c) thereby obtaining a plurality of strings of vector values (e.g., see [0027], [0028] teaching feature extraction of the patient data, including the structured and/or unstructured data); (e) making a first quality assessment about the data received in (a) by making, for each patient, a patient-level quality assessment of the structured data received in (a); and from the patient-level quality assessments of the structured data, determining a first quality assessment (e.g., see Fig. 
3, [0030], [0043], [0050], [0066] teaching scoring various words, phrases and other information (i.e., patient-level quality assessment) to determine a sentiment analysis (i.e., first quality assessment), wherein the analysis is performed on structured and unstructured data); (f) making a second quality assessment about the data received in (c) by making, for each patient, a patient-level quality assessment of the unstructured data received in (c); and from the patient-level quality assessments of the unstructured data, determining the second quality assessment (e.g., see Fig. 3, [0030], [0043], [0050], [0066] teaching scoring various words, phrases and other information (i.e., patient-level quality assessment) to determine a sentiment analysis (i.e., second quality assessment), wherein the analysis is performed on structured and unstructured data); (h) supplying both structured data as well as unstructured data to the model selected in (g) so that the selected model outputs a prediction of a mental health crisis event (e.g., see [0029]-[0033], [0044], [0063], [0072] wherein the extracted data is used to determine various mental health conditions using machine learning models including making patient predictions). While Dibari teaches using machine learning models to make the prediction, Dibari fails to teach (g) selecting one model of a plurality of selectable models, wherein the selection of (g) is based at least in part on the first quality assessment made in step (e) and on the second quality assessment made in step (f); and using the model selected in (g). 
However, in the same field of endeavor of automating data processing using machine learning models, Higgins teaches wherein a first model of the selectable models is a trained model, wherein a second of the models is a rule-based model that is not a trained model, wherein the selection of (g) is based at least in part on the first quality assessment made in step (e) and on the second quality assessment made in step (f) (e.g., see [0002], [0033], [0041], [0074], [0075] of Higgins teaching selecting a model based on one or more inputs, including the quality thereof, to make a prediction, wherein the models can include rule-based machine learning or those that require training). Accordingly, it would have been obvious to modify Dibari in view of Higgins with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins). 
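For orientation, the disputed limitation (choosing between a trained machine-learning model and an untrained rule-based model based on data-quality assessments, such as the percentage of missing structured data in claim 9 and the amount of notes in claim 11) can be sketched in Python. Every function name, field name, and threshold below is a hypothetical illustration, not language drawn from the claims or the cited references.

```python
def missing_fraction(records, fields):
    """Share of expected field values that are absent across all records."""
    total = len(records) * len(fields)
    missing = sum(1 for rec in records for f in fields if rec.get(f) is None)
    return missing / total if total else 1.0

def select_model(structured, notes, fields, trained_model, rule_based_model,
                 max_missing=0.3, min_notes=5):
    """Pick the trained ML model only when both quality checks pass;
    otherwise fall back to the rule-based model, which needs no training."""
    structured_ok = missing_fraction(structured, fields) <= max_missing  # claim-9-style check
    notes_ok = len(notes) >= min_notes                                   # claim-11-style check
    return trained_model if structured_ok and notes_ok else rule_based_model

# Hypothetical patient records: half of the expected values are missing,
# so the sketch falls back to the rule-based model.
records = [{"age": 41, "dx": "F41.1"}, {"age": None, "dx": None}]
print(missing_fraction(records, ["age", "dx"]))                       # 0.5
print(select_model(records, ["note"], ["age", "dx"], "ml", "rules"))  # rules
```

The point of the sketch is only that the selection step consumes the two quality assessments; how a real system would weight them is not specified by the quoted claim language.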
As to claim 20, Dibari teaches A method (e.g., see Abstract) comprising: (a) receiving an amount of structured data into a crisis event detection system, wherein the structured data includes information about each patient of a plurality of patients (e.g., see [0015], [0043] teaching receiving data of a patient, including structured data such as patient case files, electronic health records, patient attributes, etc.); (b) receiving an amount of unstructured data into the crisis event detection system, wherein the unstructured data includes information about each of at least some patients of the plurality of patients (e.g., [0015], [0043] teaching receiving data of a patient including unstructured data such as free text notes, transcripts, medical literature, etc.); (c) making a first quality assessment about the data received in (a) by making, for each patient, a patient-level quality assessment of the structured data received in (a); and from the patient-level quality assessments of the structured data, determining a first quality assessment (e.g., see Fig. 3, [0030], [0043], [0050], [0066] teaching scoring various words, phrases and other information (i.e., patient-level quality assessment) to determine a sentiment analysis (i.e., first quality assessment), wherein the analysis is performed on structured and unstructured data); (d) making a second quality assessment about the data received in (b) by making, for each patient, a patient-level quality assessment of the unstructured data received in (b); and from the patient-level quality assessments of the unstructured data, determining the second quality assessment (e.g., see Fig. 
3, [0030], [0043], [0050], [0066] teaching scoring various words, phrases and other information (i.e., patient-level quality assessment) to determine a sentiment analysis (i.e., second quality assessment), wherein the analysis is performed on structured and unstructured data); (f) using [a] model to make a prediction of a mental health crisis event, and wherein (a) through (f) are performed by the crisis event detection system (e.g., see [0029]-[0033], [0044], [0063], [0069], [0072] wherein the extracted data is used to determine various mental health conditions, including predicting patient health, using a machine learning model). While Dibari teaches using machine learning models to make the prediction, Dibari fails to teach (e) selecting one model of a plurality of selectable models, wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d); (f) using the model selected in (e). However, in the same field of endeavor of automating data processing using machine learning models, Higgins teaches (e) selecting one model of a plurality of selectable models, wherein the selection of (e) is based at least in part on the first quality assessment made in step (c) and on the second quality assessment made in step (d); (f) using the model selected in (e) (e.g., see [0033], [0041], [0074], [0075] of Higgins teaching selecting a model based on one or more inputs, including the quality thereof, to make a prediction). Accordingly, it would have been obvious to modify Dibari in view of Higgins with a reasonable expectation of success. One would have been motivated to make the modification in order to easily and efficiently select one or more artificial network algorithms and the like out of the thousands of available algorithms, thereby reducing excessive utilization of computing resources (e.g., see [0011] of Higgins). 
As to claim 21, the claim is directed to a system implementing the method of claim 1 and is similarly rejected. Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dibari and Higgins, as applied above, and further in view of Ramsl (USPPN: 2023/0096118; hereinafter Ramsl). As to claim 12, the rejection of claim 10 is incorporated. While Higgins teaches determining a quality assessment of data and Dibari teaches unstructured data, Dibari-Higgins fail to teach wherein the quality assessment of the data received in (b) for a patient is determined using a Latent Dirichlet Allocation (LDA) model based topic coherence labels. Notably, the type of model used is interpreted as being an intended use. Applicant is reminded that, typically, no patentable distinction is made by an intended use or result unless some structural difference is imposed by the use or result on the structure or material recited in the claim, or some manipulative difference is imposed by the use or result on the action recited in the claim. An intended use generally does not impart a patentable distinction if it merely states an intention or is a description of how the claimed apparatus is to be used (See MPEP 2111.05). Nonetheless, for the purpose of compact prosecution and in the same field of endeavor of assessing quality of data for use in machine learning models, Ramsl teaches wherein the quality assessment of the data received in (b) is determined using a Latent Dirichlet Allocation (LDA) model based topic coherence labels (e.g., see [0084] teaching the use of topic learning models such as Latent Dirichlet Allocation to determine a quality score for received data). Accordingly, it would have been obvious to modify Dibari-Higgins in view of Ramsl with a reasonable expectation of success. 
One would have been motivated to make such a modification as a simple substitution of one known type of model to determine quality of data for another with predictable results such as identifying whether input data fulfills its intended purpose (See KSR Int'l v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007); and MPEP 2143 and [0084] of Ramsl). It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STELLA HIGGS whose telephone number is (571)270-5891. The examiner can normally be reached Monday-Friday: 9-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Choi can be reached on (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STELLA HIGGS/ Primary Examiner, Art Unit 3686
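The "LDA model based topic coherence labels" of claim 12 combine two steps: fitting a Latent Dirichlet Allocation topic model, and scoring each resulting topic's coherence. As a rough, self-contained illustration of only the scoring step, the UMass coherence measure can be computed from document co-occurrence counts. The corpus, topic word lists, and tokenization below are invented for illustration; a real pipeline would take the topic word lists from a fitted LDA model rather than by hand, and nothing here is asserted to match the claimed method or Ramsl's disclosure.

```python
from math import log

def umass_coherence(topic_words, documents):
    """UMass topic coherence: sum of log((D(w_m, w_l) + 1) / D(w_l)) over
    ordered word pairs, where D counts documents containing the word(s)
    and topic_words is ordered most-probable first. Less negative scores
    indicate a more coherent topic."""
    def doc_freq(*words):
        return sum(1 for doc in documents if all(w in doc for w in words))

    score = 0.0
    for m in range(1, len(topic_words)):
        for l in range(m):
            score += log((doc_freq(topic_words[m], topic_words[l]) + 1)
                         / doc_freq(topic_words[l]))
    return score

# Hypothetical clinical-note corpus, tokenized into sets of words.
docs = [{"anxiety", "sleep", "panic"},
        {"anxiety", "sleep"},
        {"billing", "invoice"}]
print(umass_coherence(["anxiety", "sleep"], docs))    # log(3/2): a coherent topic
print(umass_coherence(["anxiety", "invoice"], docs))  # log(1/2): an incoherent mix
```

Under this sketch, a "quality label" could be as simple as thresholding the coherence score per topic, though the claim itself does not recite how the labels are derived.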

Prosecution Timeline

Oct 04, 2021
Application Filed
May 30, 2024
Non-Final Rejection — §101, §103
Sep 06, 2024
Response Filed
Dec 04, 2024
Final Rejection — §101, §103
Feb 10, 2025
Response after Non-Final Action
Feb 27, 2025
Request for Continued Examination
Feb 28, 2025
Response after Non-Final Action
Jun 13, 2025
Non-Final Rejection — §101, §103
Dec 16, 2025
Response Filed
Feb 25, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488881
SYSTEM METHOD AND NETWORK FOR EVALUATING THE PROGRESS OF A MANAGED CARE ORGANIZATION PATIENT WELLNESS GOALS
2y 5m to grant Granted Dec 02, 2025
Patent 12367987
TECHNOLOGIES FOR MANAGING CAREGIVER CALL REQUESTS VIA SHORT MESSAGE SERVICE
2y 5m to grant Granted Jul 22, 2025
Patent 12341851
SYSTEMS, METHODS, AND SOFTWARE FOR ACCESSING AND DISPLAYING DATA FROM IMPLANTED MEDICAL DEVICES
2y 5m to grant Granted Jun 24, 2025
Patent 12327642
SYSTEM AND METHOD FOR PROVIDING TELEHEALTH SERVICES USING TOUCHLESS VITALS AND AI-OPTIMIZED ASSESSMENT IN REAL-TIME
2y 5m to grant Granted Jun 10, 2025
Patent 12237089
ONLINE MONITORING OF CLINICAL DATA DRIFTS
2y 5m to grant Granted Feb 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
39%
Grant Probability
73%
With Interview (+34.1%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 352 resolved cases by this examiner. Grant probability derived from career allow rate.
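The headline figures in this panel are simple derived quantities. Assuming the interview lift is additive in percentage points (an assumption about how the tool computes it, not something the panel states), the numbers reconcile as follows:

```python
# Career allow rate: 138 grants out of 352 resolved cases.
granted, resolved = 138, 352
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")           # 39% grant probability

# +34.1% interview lift, assumed additive in percentage points.
with_interview = allow_rate + 0.341
print(f"{with_interview:.0%}")       # 73% grant probability with interview
```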
