DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This action is in reply to the reply received October 1, 2025 (hereinafter “Reply”).
Claims 1, 2, 4-7, and 12-16 are amended.
Claims 3, 8, 10, and 11 are cancelled.
Claims 17-19 are new.
Claims 1, 2, 4-7, 9, and 12-19 are pending.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. However, applicant cannot rely upon the certified copy of the foreign priority application to overcome prior art rejections because a translation of said application has not been made of record in accordance with 37 C.F.R. § 1.55. See M.P.E.P. §§ 215 and 216.
Claim Rejections - 35 U.S.C. § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 17-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 17 and 18 recite setting weights for characteristics and performing operations using these weights. However, the specification does not disclose or provide support for these features or even appear to mention the term “weight” aside from its use to refer to a patient’s physical characteristic (i.e., to describe the amount of mass a person has/the force a person exerts on the Earth). Accordingly, the specification fails to provide an adequate disclosure of these claim features to satisfy the written description requirement of 35 U.S.C. § 112(a) or pre-AIA 35 U.S.C. § 112, first paragraph.
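For illustration only, the weighting operation recited in claims 17 and 18 corresponds to the familiar weighted-combination pattern sketched below. The characteristic names and weight values are hypothetical and, as noted above, have no counterpart in the specification as filed:

```python
def weighted_score(characteristics, weights):
    """Combine per-characteristic values using per-characteristic weights.

    characteristics: dict mapping a characteristic name to its value
    weights: dict mapping the same names to numeric weights (hypothetical)
    """
    return sum(weights[name] * value for name, value in characteristics.items())

# Hypothetical example: two characteristics, each with an assigned weight.
score = weighted_score({"fever": 1.0, "cough": 0.5},
                       {"fever": 2.0, "cough": 1.0})  # 2.0*1.0 + 1.0*0.5 = 2.5
```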
Claim 19 recites calculating a cosine similarity. However, the specification does not disclose or provide support for this feature or even appear to mention the term “cosine” at all. Accordingly, the specification fails to provide an adequate disclosure of this claim feature to satisfy the written description requirement of 35 U.S.C. § 112(a) or pre-AIA 35 U.S.C. § 112, first paragraph.
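For reference, cosine similarity is the standard mathematical operation sketched below (the normalized dot product of two equal-length vectors); nothing in this sketch is drawn from the specification, which, as noted, does not mention the concept:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors have similarity 1.0; orthogonal vectors have similarity 0.0.
cosine_similarity([1.0, 0.0], [1.0, 0.0])  # -> 1.0
cosine_similarity([1.0, 0.0], [0.0, 1.0])  # -> 0.0
```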
Claim 18 is rejected for incorporating the deficiencies of the rejected claim on which it depends.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-7, 9, and 12-19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more as required by the Alice test, as discussed below.
Step 1
Claims 1, 2, 4-7, 9, and 12-19 are directed to a process, machine, manufacture, or composition of matter.
Step 2A
Claims 1, 2, 4-7, 9, and 12-19 are directed to abstract ideas, as explained below.
Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and determining whether the identified limitation(s) falls within at least one of the groupings of abstract ideas of mathematical concepts, mental processes, and certain methods of organizing human activity.
The claims recite the following limitations that are directed to abstract ideas. Claim 1 recites identify information of a disease relating to a patient, determine an evaluation parameter for evaluating the disease related to the patient, acquire chief complaint data of the patient and medical knowledge relating to the disease, determine a first range indicating an ambiguity of characteristic contained in the chief complaint data with respect to the evaluation parameter, determine a second range indicating an ambiguity of characteristic contained in the medical knowledge with respect to the evaluation parameter, map the first range and the second range on a coordinate space based on the evaluation parameter, and calculate a degree of match between the chief complaint data and the medical knowledge based on an area in where the first range and the second range overlap each other. Claims 15 and 16 recite similar features as claim 1. Claims 2, 4-7, 9, 12-14, and 17-19 further specify features of algorithms identified as being directed toward abstract ideas or characteristics of the data used thereby.
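For illustration only, the claimed degree-of-match calculation, under one possible reading in which the first and second ranges are mapped onto a common axis and compared by the extent of their overlap, can be sketched as follows. The normalization by the union of the two ranges is an assumption made for the sketch and does not appear in the claim language:

```python
def overlap_degree(first, second):
    """Degree of match between two one-dimensional (low, high) ranges.

    Computed as the length of the overlapping region divided by the
    length of the union of the two ranges (an assumed normalization).
    """
    lo = max(first[0], second[0])
    hi = min(first[1], second[1])
    overlap = max(0.0, hi - lo)
    union = max(first[1], second[1]) - min(first[0], second[0])
    return overlap / union if union else 0.0

# Ranges (0, 2) and (1, 3) overlap on (1, 2): overlap 1, union 3.
overlap_degree((0.0, 2.0), (1.0, 3.0))  # -> 1/3
# Disjoint ranges yield a degree of match of 0.
overlap_degree((0.0, 1.0), (2.0, 3.0))  # -> 0.0
```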
These limitations describe abstract ideas that correspond to concepts identified as abstract ideas by the courts as mathematical concepts—such as mathematical relationships, mathematical formulas or equations, and mathematical calculations—because the claimed features for developing and mapping data ranges and calculating results based thereon are mathematical relationships, mathematical formulas or equations, and mathematical calculations.
These limitations describe abstract ideas that correspond to concepts identified as abstract ideas by the courts as mental processes—such as concepts performed in the human mind (including an observation, evaluation, judgment, or opinion)—because the claimed features identified above are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
These limitations describe abstract ideas that correspond to concepts identified as abstract ideas by the courts as certain methods of organizing human activity—such as fundamental economic principles or practices (including hedging, insurance, mitigating risk), commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)—because the claim features identified above manage personal behavior or relationships or interactions between people including following rules or instructions.
Thus, the concepts set forth in claims 1, 2, 4-7, 9, and 12-19 recite abstract ideas.
Prong two of the Step 2A analysis requires identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. “Integration into a practical application” requires an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. Further, “integration into a practical application” uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application, such as the considerations discussed in M.P.E.P. § 2106.05(a)-(h).
The claims recite the following additional elements beyond those identified above as being directed to an abstract idea. Claim 1 recites processing circuitry and causing display of the degree of match. Claim 15 recites causing display of the degree of match. Claim 16 recites a type of non-transitory storage medium, a computer, and causing display of the degree of match. Several of the dependent claims recite features for displaying information calculated in the claims.
The identified judicial exception(s) are not integrated into a practical application for the following reasons.
First, evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. The additional computer elements identified above—the processing circuitry, non-transitory storage medium, and computer—are recited at a high level of generality. Inclusion of these elements amounts to mere instructions to implement the identified abstract ideas on a computer. See M.P.E.P. § 2106.05(f). The use of conventional computer elements to display information is the insignificant, extra-solution activity of mere data gathering or outputting in conjunction with a law of nature or abstract idea. See M.P.E.P. § 2106.05(g). To the extent that the claims transform data, the mere manipulation of data is not a transformation. See M.P.E.P. § 2106.05(c). Inclusion of a computing system in the claims amounts to generally linking the use of the judicial exception to a particular technological environment or field of use. See M.P.E.P. § 2106.05(h). Thus, taken alone, the additional elements do not amount to significantly more than a judicial exception.
Second, evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. See M.P.E.P. § 2106.05(a). Their collective functions merely provide an implementation of the identified abstract ideas on a computer system in the general field of use of medical information management, processing, and display. See M.P.E.P. § 2106.05(h).
Thus, claims 1, 2, 4-7, 9, and 12-19 recite mathematical concepts, mental processes, or certain methods of organizing human activity without including additional elements that integrate the exception into a practical application of the exception.
Accordingly, claims 1, 2, 4-7, 9, and 12-19 are directed to abstract ideas.
Step 2B
Claims 1, 2, 4-7, 9, and 12-19 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.
The analysis above describes how the claims recite the additional elements beyond those identified above as being directed to an abstract idea, as well as why the identified judicial exception(s) are not integrated into a practical application. These findings are hereby incorporated into the analysis of the additional elements when considered both individually and in combination. Additional features of these analyses are discussed below.
Evaluated individually, the additional elements do not amount to significantly more than a judicial exception. In addition to the factors discussed regarding Step 2A, prong two, these additional computer elements also provide conventional computer functions that do not add meaningful limits to practicing the abstract idea. Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system. The use of generic computer components to display information amounts to the well-understood, routine, and conventional computer function of receiving or transmitting data over a network, e.g., the Internet, and does not impose any meaningful limit on the computer implementation of the identified abstract ideas. See M.P.E.P. § 2106.05(d)(II). Thus, taken alone, the additional elements do not amount to significantly more than a judicial exception.
Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, prong two, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely amount to mere instructions to implement the identified abstract ideas on a computer.
Thus, claims 1, 2, 4-7, 9, and 12-19, taken individually and as an ordered combination of elements, are not directed to eligible subject matter since they are directed to an abstract idea without significantly more.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4-7, 9, and 12-18 are rejected under AIA 35 U.S.C. § 103 as being unpatentable over Allen et al. (U.S. Pub. No. 2021/0343415 A1) (hereinafter “Allen”) in view of Lefkofsky et al. (U.S. Pub. No. 2021/0125731 A1) (hereinafter “Lefkofsky”).
Claims 1, 15, and 16: Allen, as shown, discloses the following limitations:
processing circuitry (see at least ¶ [0033]: an engine may be, but is not limited to, software, hardware and/or firmware or any combination thereof that performs the specified functions including, but not limited to, any use of a general and/or specialized processor in combination with appropriate software loaded or stored in a machine readable memory and executed by the processor; see also at least ¶¶ [0034]-[0041]) configured to
identify information of a disease relating to a patient (see at least ¶ [0113]: As shown in FIG. 3, in accordance with one illustrative embodiment, a patient 302 presents symptoms 304 of a medical malady or condition to a user 306, such as a healthcare practitioner, technician, or the like. The user 306 may interact with the patient 302 via a question 314 and response 316 exchange where the user gathers more information about the patient 302, the symptoms 304, and the medical malady or condition of the patient 302—i.e., information of a disease. It should be appreciated that the questions/responses may in fact also represent the user 306 gathering information from the patient 302 using various medical equipment, e.g., blood pressure monitors, thermometers, wearable health and activity monitoring devices associated with the patient such as a FitBit™, a wearable heart monitor, or any other medical equipment that may monitor one or more medical characteristics of the patient 302. In some cases such medical equipment may be medical equipment typically used in hospitals or medical centers to monitor vital signs and medical conditions of patients that are present in hospital beds for observation or medical treatment; see also at least ¶ [0114]: in response, the user 302 submits a request 308 to the healthcare cognitive system 300, such as via a user interface on a client computing device that is configured to allow users to submit requests to the healthcare cognitive system 300 in a format that the healthcare cognitive system 300 can parse and process. The request 308 may include, or be accompanied with, information identifying patient attributes 318—i.e., also information of a disease. 
These patient attributes 318 may include, for example, an identifier of the patient 302 from which patient EMRs 322 for the patient may be retrieved, demographic information about the patient, the symptoms 304, and other pertinent information obtained from the responses 316 to the questions 314 or information obtained from medical equipment used to monitor or gather data about the condition of the patient 302. Any information about the patient 302 that may be relevant to a cognitive evaluation of the patient by the healthcare cognitive system 300 may be included in the request 308 and/or patient attributes 318; see also at least ¶ [0118]: in some cases, such treatment guidance data 324 may be provided in the form of rules that indicate the criteria required to be present, and/or required not to be present, for the corresponding treatment to be applicable to a particular patient for treating a particular symptom or medical malady/condition. For example, the treatment guidance data 324 may comprise a treatment recommendation rule that indicates that for a treatment of Decitabine, strict criteria for the use of such a treatment is that the patient 302 is less than or equal to 60 years of age, has acute myeloid leukemia (AML), and no evidence of cardiac disease; see also at least ¶ [0129]),
determine an evaluation parameter for evaluating the disease related to the patient (see at least ¶ [0120]: the healthcare cognitive system 300 is augmented to include an ingestion engine 340 that operates to ingest information from the corpus or corpora 322-326, identify any ambiguous portions of content present in the content of the corpus or corpora 322-326 and disambiguate the ambiguous portions of content based on analysis of the surrounding context. In particular, in one illustrative embodiment, the treatment guidance data 324 and other medical corpus and source data 326 may provide guidelines which may be processed to train the disambiguation engine 350 with regard to contexts and their associated context based ambiguous content interpretation rules, in a manner such as previously described above. The contexts and their associated sets of context based ambiguous content interpretation rules—i.e., evaluation parameters—may be stored in the storage 357 of disambiguation engine 350; see also at least ¶ [0122]: many times the patient EMR may have notations or portions of content whose meaning may be ambiguous to the healthcare cognitive system 300 since the meaning is not made explicit in the notation or portion of content itself. As part of an ingestion operation, or in response to a runtime request that initiates processing of a patient EMR 360, these ambiguous notations or portions of content may be identified by the ambiguous content detection logic 344 of the ingestion logic 342. The identified ambiguous content is flagged and provided to the context analysis logic 352 of the disambiguation engine 350 which determines the context—i.e., also evaluation parameters—surrounding the ambiguous content, e.g., ambiguous notation in the patient EMR 322. 
For example, if the patient's EMR 360 has a notation of “2×4”, this notation is flagged by the ambiguous content detection logic 344 as part of the parsing and natural language processing performed by the ingestion logic 342. The flagged ambiguous notation is identified to the context analysis logic 352 which analyzes the metadata associated with the section of the patient EMR where the ambiguous notation was identified, the key words/phrases in surrounding text, and possibly even correlating the entry in the patient EMR 360 with information from other sources 326, e.g., medical insurance claims information having similar date/time information as the entry in the patient EMR 360, pharmacy prescription fulfillment information, etc.; see also at least ¶¶ [0117]-[0118] and [0123]-[0125]),
acquire chief complaint data of the patient and medical knowledge relating to the disease (see at least ¶ [0113]-[0114] and the analysis above; see also at least ¶ [0115]: healthcare cognitive system 300 provides a cognitive system that is specifically configured to perform an implementation specific healthcare oriented cognitive operation. In the depicted example, this healthcare oriented cognitive operation is directed to providing a treatment recommendation 328 to the user 306 to assist the user 306 in treating the patient 302 based on their reported symptoms 304 and other information gathered about the patient 302 via the question 314 and response 316 process and/or medical equipment monitoring/data gathering. The healthcare cognitive system 300 operates on the request 308 and patient attributes 318 utilizing information gathered from the medical corpus and other source data 326, treatment guidance data 324, and the patient EMRs 322 associated with the patient 302 to generate one or more treatment recommendation 328. The treatment recommendations 328 may be presented in a ranked ordering with associated supporting evidence, obtained from the patient attributes 318 and data sources 322-326, indicating the reasoning as to why the treatment recommendation 328 is being provided and why it is ranked in the manner that it is ranked; see also at least ¶ [0116]),
determine a first range indicating an ambiguity of characteristic contained in the chief complaint data with respect to the evaluation parameter (see at least ¶ [0120]; see also at least ¶ [0122]: many times the patient EMR may have notations or portions of content whose meaning may be ambiguous to the healthcare cognitive system 300 since the meaning is not made explicit in the notation or portion of content itself. As part of an ingestion operation, or in response to a runtime request that initiates processing of a patient EMR 360, these ambiguous notations or portions of content may be identified by the ambiguous content detection logic 344 of the ingestion logic 342. The identified ambiguous content is flagged and provided to the context analysis logic 352 of the disambiguation engine 350 which determines the context—i.e., also evaluation parameters—surrounding the ambiguous content, e.g., ambiguous notation in the patient EMR 322. For example, if the patient's EMR 360 has a notation of “2×4”, this notation is flagged by the ambiguous content detection logic 344 as part of the parsing and natural language processing performed by the ingestion logic 342. The flagged ambiguous notation is identified to the context analysis logic 352 which analyzes the metadata associated with the section of the patient EMR where the ambiguous notation was identified, the key words/phrases in surrounding text, and possibly even correlating the entry in the patient EMR 360 with information from other sources 326, e.g., medical insurance claims information having similar date/time information as the entry in the patient EMR 360, pharmacy prescription fulfillment information, etc.; see also at least ¶¶ [0123]-[0125]),
determine a second range indicating an ambiguity of characteristic contained in the medical knowledge with respect to the evaluation parameter (see at least ¶¶ [0120] and [0122]-[0125] and the analysis above), and
map the first range and the second range on a […] space based on the evaluation parameter (see at least ¶ [0119]: data mining processes may be employed to mine the data in sources 322 and 326 to identify evidential data supporting and/or refuting the applicability of the candidate treatments to the particular patient 302 as characterized by the patient’s patient attributes 318 and EMRs 322. For example, for each of the criteria of the treatment rule, the results of the data mining provides a set of evidence that supports giving the treatment in the cases where the criterion is “MET” and in cases where the criterion is “NOT MET.” The healthcare cognitive system 300 processes the evidence in accordance with various cognitive logic algorithms to generate a confidence score for each candidate treatment recommendation indicating a confidence that the corresponding candidate treatment recommendation is valid for the patient 302—i.e., ranges. The candidate treatment recommendations may then be ranked according to their confidence scores and presented to the user 306 as a ranked listing of treatment recommendations 328. In some cases, only a highest ranked, or final answer, is returned as the treatment recommendation 328. The treatment recommendation 328 may be presented to the user 306 in a manner that the underlying evidence evaluated by the healthcare cognitive system 300 may be accessible, such as via a drilldown interface, so that the user 306 may identify the reasons why the treatment recommendation 328 is being provided by the healthcare cognitive system 300; see also at least ¶ [0121]: in response to a patient 302 interfacing with the user 306, e.g., a doctor or other medical professional, the user may request decision support from the healthcare cognitive system 300, e.g., a request to generate the most appropriate medical treatment for the medical condition of the patient 302. 
In response, the healthcare cognitive system 300 may analyze the patient EMR 322 for the patient 302 to gather information about the patient 302 which assists in providing the requested decision support; see also at least ¶¶ [0115]-[0118]),
calculate a degree of match between the chief complaint data and the medical knowledge based on an area in where the first range and the second range overlap each other (see at least ¶ [0046]; see also at least ¶ [0079]: the most probable answers are output as a ranked listing of candidate answers ranked according to their relative scores or confidence measures calculated during evaluation of the candidate answers, as a single final answer having a highest ranking score or confidence measure, or which is a best match to the input question, or a combination of ranked listing and final answer; see also at least ¶ [0112]: the interactions 304, 314, 316, and 330 between the patient 302 and the user 306 may be performed orally, e.g., a doctor interviewing a patient, and may involve the use of one or more medical instruments, monitoring devices, or the like, to collect information that may be input to the healthcare cognitive system 300 as patient attributes 318; see also at least ¶¶ [0121], [0123]-[0125], [0139], and [0147]), and
cause a display to display the degree of match between the chief complaint data and the medical knowledge (see at least ¶ [0139]: from the ranked listing of candidate answers, at stage 480, a final answer and confidence score, or final set of candidate answers and confidence scores, are generated and output to the submitter of the original input question via a graphical user interface or other mechanism for outputting information; see also at least ¶¶ [0046], [0079], [0112], and [0147]).
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches that the space based on the evaluation parameter is a coordinate space (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the information processing and display techniques taught by Lefkofsky with the medical management systems disclosed by Allen, because Lefkofsky teaches at ¶ [0006] that its techniques “facilitate[] the discovery of insights of therapeutic significance, through the automated analysis of patterns occurring in patient clinical, molecular, phenotypic, and response data, and enabling further exploration via a fully integrated, reactive user interface.” See also Lefkofsky at ¶ [0083]. See also M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the information processing and display techniques taught by Lefkofsky with the medical management systems disclosed by Allen, because the claimed invention is merely a combination of old elements (the information processing and display techniques taught by Lefkofsky and the medical management systems disclosed by Allen), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Claim 2: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the evaluation parameter is a parameter that contains ambiguity (see at least ¶ [0120]; see also at least ¶ [0122]: many times the patient EMR may have notations or portions of content whose meaning may be ambiguous to the healthcare cognitive system 300 since the meaning is not made explicit in the notation or portion of content itself. As part of an ingestion operation, or in response to a runtime request that initiates processing of a patient EMR 360, these ambiguous notations or portions of content may be identified by the ambiguous content detection logic 344 of the ingestion logic 342. The identified ambiguous content is flagged and provided to the context analysis logic 352 of the disambiguation engine 350 which determines the context—i.e., also evaluation parameters—surrounding the ambiguous content, e.g., ambiguous notation in the patient EMR 322. For example, if the patient's EMR 360 has a notation of “2×4”, this notation is flagged by the ambiguous content detection logic 344 as part of the parsing and natural language processing performed by the ingestion logic 342. The flagged ambiguous notation is identified to the context analysis logic 352 which analyzes the metadata associated with the section of the patient EMR where the ambiguous notation was identified, the key words/phrases in surrounding text, and possibly even correlating the entry in the patient EMR 360 with information from other sources 326, e.g., medical insurance claims information having similar date/time information as the entry in the patient EMR 360, pharmacy prescription fulfillment information, etc.; see also at least ¶¶ [0123]-[0125]).
Claim 4: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to
determine the first range on the […] space based on the evaluation parameter based on evaluation relating to the characteristic included in the chief complaint data (see at least ¶ [0119]: data mining processes may be employed to mine the data in sources 322 and 326 to identify evidential data supporting and/or refuting the applicability of the candidate treatments to the particular patient 302 as characterized by the patient's patient attributes 318 and EMRs 322. For example, for each of the criteria of the treatment rule, the results of the data mining provides a set of evidence that supports giving the treatment in the cases where the criterion is “MET” and in cases where the criterion is “NOT MET.” The healthcare cognitive system 300 processes the evidence in accordance with various cognitive logic algorithms to generate a confidence score for each candidate treatment recommendation indicating a confidence that the corresponding candidate treatment recommendation is valid for the patient 302—i.e., ranges. The candidate treatment recommendations may then be ranked according to their confidence scores and presented to the user 306 as a ranked listing of treatment recommendations 328. In some cases, only a highest ranked, or final answer, is returned as the treatment recommendation 328. The treatment recommendation 328 may be presented to the user 306 in a manner that the underlying evidence evaluated by the healthcare cognitive system 300 may be accessible, such as via a drilldown interface, so that the user 306 may identify the reasons why the treatment recommendation 328 is being provided by the healthcare cognitive system 300; see also at least ¶ [0121]: in response to a patient 302 interfacing with the user 306, e.g., a doctor or other medical professional, the user may request decision support from the healthcare cognitive system 300, e.g., a request to generate the most appropriate medical treatment for the medical condition of the patient 302. 
In response, the healthcare cognitive system 300 may analyze the patient EMR 322 for the patient 302 to gather information about the patient 302 which assists in providing the requested decision support; see also at least ¶¶ [0020], [0047], and [0115]-[0116]), and
determine the second range on the […] space based on the evaluation parameter based on evaluation relating to the characteristic included in the medical knowledge (see at least ¶¶ [0020], [0047], [0115]-[0116], [0119], and [0121] and the analysis above).
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches that the space is a coordinate space (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
The rationales to modify/combine the teachings of Allen to include the teachings of Lefkofsky are presented above regarding claims 1, 15, and 16 and incorporated herein.
Claim 5: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to determine the first range based on a range statistically determined in a disease same as the disease relating to the patient (see at least ¶ [0076]: the scores obtained from the various reasoning algorithms indicate the extent to which the potential response is inferred by the input question based on the specific area of focus of that reasoning algorithm. Each resulting score is then weighted against a statistical model. The statistical model captures how well the reasoning algorithm performed at establishing the inference between two similar passages for a particular domain during the training period of the QA pipeline. The statistical model is used to summarize a level of confidence that the QA pipeline has regarding the evidence that the potential response, i.e. candidate answer, is inferred by the question; see also at least ¶ [0137]: the large number of scores generated by the various reasoning algorithms are synthesized into confidence scores or confidence measures for the various hypotheses. This process involves applying weights to the various scores, where the weights have been determined through training of the statistical model employed by the QA pipeline 400 and/or dynamically updated; see also at least ¶¶ [0052]-[0053]).
Claim 6: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above.
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches the following limitations:
wherein the processing circuitry is further configured to use, as the coordinate space based on the evaluation parameter, a coordinate space in which evaluations in opposite relationships in the evaluation are shown at both ends of the coordinate axis (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
The rationales to modify/combine the teachings of Allen to include the teachings of Lefkofsky are presented above regarding claims 1, 15, and 16 and incorporated herein.
Claim 7: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to use, as the […] space based on the evaluation parameter, a coordinate space indicating a location in the patient (see at least ¶ [0113]: the questions/responses may in fact also represent the user 306 gathering information from the patient 302 using various medical equipment, e.g., blood pressure monitors, thermometers, wearable health and activity monitoring devices associated with the patient such as a FitBit™, a wearable heart monitor, or any other medical equipment that may monitor one or more medical characteristics of the patient 302. In some cases such medical equipment may be medical equipment typically used in hospitals or medical centers to monitor vital signs and medical conditions of patients that are present in hospital beds for observation or medical treatment; see also at least ¶ [0005]. The data indicates characteristics of the heart—i.e., a location in the patient).
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches that the space is a coordinate space (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
The rationales to modify/combine the teachings of Allen to include the teachings of Lefkofsky are presented above regarding claims 1, 15, and 16 and incorporated herein.
Claim 9: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to
determine a plurality of evaluation parameters relating to the disease (see at least ¶ [0120]: the healthcare cognitive system 300 is augmented to include an ingestion engine 340 that operates to ingest information from the corpus or corpora 322-326, identify any ambiguous portions of content present in the content of the corpus or corpora 322-326 and disambiguate the ambiguous portions of content based on analysis of the surrounding context. In particular, in one illustrative embodiment, the treatment guidance data 324 and other medical corpus and source data 326 may provide guidelines which may be processed to train the disambiguation engine 350 with regard to contexts and their associated context based ambiguous content interpretation rules, in a manner such as previously described above. The contexts and their associated sets of context based ambiguous content interpretation rules—i.e., evaluation parameters—may be stored in the storage 357 of disambiguation engine 350; see also at least ¶ [0122]: many times the patient EMR may have notations or portions of content whose meaning may be ambiguous to the healthcare cognitive system 300 since the meaning is not made explicit in the notation or portion of content itself. As part of an ingestion operation, or in response to a runtime request that initiates processing of a patient EMR 360, these ambiguous notations or portions of content may be identified by the ambiguous content detection logic 344 of the ingestion logic 342. The identified ambiguous content is flagged and provided to the context analysis logic 352 of the disambiguation engine 350 which determines the context—i.e., also evaluation parameters—surrounding the ambiguous content, e.g., ambiguous notation in the patient EMR 322. For example, if the patient's EMR 360 has a notation of “2×4”, this notation is flagged by the ambiguous content detection logic 344 as part of the parsing and natural language processing performed by the ingestion logic 342. 
The flagged ambiguous notation is identified to the context analysis logic 352 which analyzes the metadata associated with the section of the patient EMR where the ambiguous notation was identified, the key words/phrases in surrounding text, and possibly even correlating the entry in the patient EMR 360 with information from other sources 326, e.g., medical insurance claims information having similar date/time information as the entry in the patient EMR 360, pharmacy prescription fulfillment information, etc.; see also at least ¶¶ [0123]-[0125]), and
respectively map the first range and the second range on the […] space based on the evaluation parameter for each evaluation parameter (see at least ¶ [0119]: data mining processes may be employed to mine the data in sources 322 and 326 to identify evidential data supporting and/or refuting the applicability of the candidate treatments to the particular patient 302 as characterized by the patient's patient attributes 318 and EMRs 322. For example, for each of the criteria of the treatment rule, the results of the data mining provides a set of evidence that supports giving the treatment in the cases where the criterion is “MET” and in cases where the criterion is “NOT MET.” The healthcare cognitive system 300 processes the evidence in accordance with various cognitive logic algorithms to generate a confidence score for each candidate treatment recommendation indicating a confidence that the corresponding candidate treatment recommendation is valid for the patient 302—i.e., ranges. The candidate treatment recommendations may then be ranked according to their confidence scores and presented to the user 306 as a ranked listing of treatment recommendations 328. In some cases, only a highest ranked, or final answer, is returned as the treatment recommendation 328. The treatment recommendation 328 may be presented to the user 306 in a manner that the underlying evidence evaluated by the healthcare cognitive system 300 may be accessible, such as via a drilldown interface, so that the user 306 may identify the reasons why the treatment recommendation 328 is being provided by the healthcare cognitive system 300; see also at least ¶ [0121]: in response to a patient 302 interfacing with the user 306, e.g., a doctor or other medical professional, the user may request decision support from the healthcare cognitive system 300, e.g., a request to generate the most appropriate medical treatment for the medical condition of the patient 302. 
In response, the healthcare cognitive system 300 may analyze the patient EMR 322 for the patient 302 to gather information about the patient 302 which assists in providing the requested decision support; see also at least ¶¶ [0115]-[0116]).
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches that the space is a coordinate space (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
The rationales to modify/combine the teachings of Allen to include the teachings of Lefkofsky are presented above regarding claims 1, 15, and 16 and incorporated herein.
Claim 12: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to cause a display to display information based on the degree of match between the chief complaint data and the medical knowledge (see at least ¶ [0139]: from the ranked listing of candidate answers, at stage 480, a final answer and confidence score, or final set of candidate answers and confidence scores, are generated and output to the submitter of the original input question via a graphical user interface or other mechanism for outputting information; see also at least ¶¶ [0046], [0079], [0112], and [0147]).
Claim 13: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above.
Allen does not explicitly disclose, but Lefkofsky, as shown, teaches the following limitations:
wherein the processing circuitry is further configured to cause the display to display information in which the first range and the second range are mapped on the coordinate space based on the evaluation parameter (see at least ¶ [0292]: a widget UIE may provide selections pertaining to one or more supported code snippets for the active kernel. Code snippets may include code for creating visualizations such as a graph or a plot, code for simple arithmetic operations such as calculating a mean or a standard deviation, or code for more complex operations such as calculating a distribution and displaying a respective curve; see also the graph depicted as element 3240B in FIG. 32; see also at least ¶¶ [0128] and [0295]-[0297]).
The rationales to modify/combine the teachings of Allen to include the teachings of Lefkofsky are presented above regarding claim 6 and incorporated herein.
Claim 14: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to cause the display to display an interview content to the patient based on the degree of match between the chief complaint data and the medical knowledge (see at least ¶ [0046]: the system may output to the user that the context of the ambiguous notation is “laceration” and that the pattern “2×4” means “2 pills every 4 hours” and the user may indicate whether that interpretation is correct or not via a user interface. The user may indicate that the interpretation is incorrect and may provide the correct interpretation, e.g., context is “laceration” and the pattern “2×4” means “2 cm deep and 4 cm in length.” The system may then update its context based interpretation rules to reflect the correct interpretation by setting the appropriate features of the context based interpretation rule to the correct settings. In this way, the system learns over time the correct way in which to interpret ambiguous portions of content such that the context based interpretation rules may be applied to future instances of ambiguous content; see also at least ¶ [0079]: the most probable answers are output as a ranked listing of candidate answers ranked according to their relative scores or confidence measures calculated during evaluation of the candidate answers, as a single final answer having a highest ranking score or confidence measure, or which is a best match to the input question, or a combination of ranked listing and final answer; see also at least ¶ [0112]: the interactions 304, 314, 316, and 330 between the patient 302 and the user 306 may be performed orally, e.g., a doctor interviewing a patient, and may involve the use of one or more medical instruments, monitoring devices, or the like, to collect information that may be input to the healthcare cognitive system 300 as patient attributes 318; see also at least ¶¶ [0121], [0123]-[0125], [0139], and [0147]).
Claim 17: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the processing circuitry is further configured to
set a weight for a characteristic (see at least ¶ [0076]: the scores obtained from the various reasoning algorithms indicate the extent to which the potential response is inferred by the input question based on the specific area of focus of that reasoning algorithm. Each resulting score is then weighted against a statistical model. The statistical model captures how well the reasoning algorithm performed at establishing the inference between two similar passages for a particular domain during the training period of the QA pipeline. The statistical model is used to summarize a level of confidence that the QA pipeline has regarding the evidence that the potential response, i.e. candidate answer, is inferred by the question. This process is repeated for each of the candidate answers until the QA pipeline identifies candidate answers that surface as being significantly stronger than others and thus, generates a final answer, or ranked set of answers, for the input question; see also at least ¶ [0137]: the large number of scores generated by the various reasoning algorithms are synthesized into confidence scores or confidence measures for the various hypotheses. This process involves applying weights to the various scores, where the weights have been determined through training of the statistical model employed by the QA pipeline 400 and/or dynamically updated. For example, the weights for scores generated by algorithms that identify exactly matching terms and synonym may be set relatively higher than other algorithms that are evaluating publication dates for evidence passages. The weights themselves may be specified by subject matter experts or learned through machine learning processes that evaluate the significance of characteristics evidence passages and their relative importance to overall candidate answer generation); and
calculate the degree of match based on the weight and the area where the first range and the second range overlap each other (see at least ¶¶ [0076] and [0137] and the analysis above; see also at least ¶ [0046]; see also at least ¶ [0079]: the most probable answers are output as a ranked listing of candidate answers ranked according to their relative scores or confidence measures calculated during evaluation of the candidate answers, as a single final answer having a highest ranking score or confidence measure, or which is a best match to the input question, or a combination of ranked listing and final answer; see also at least ¶ [0112]: the interactions 304, 314, 316, and 330 between the patient 302 and the user 306 may be performed orally, e.g., a doctor interviewing a patient, and may involve the use of one or more medical instruments, monitoring devices, or the like, to collect information that may be input to the healthcare cognitive system 300 as patient attributes 318; see also at least ¶¶ [0121], [0123]-[0125], [0139], and [0147]).
Claim 18: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above. Further, Allen, as shown, discloses the following limitations:
wherein the weight is set higher for a characteristic that should resolve ambiguity (see at least ¶ [0076]; see also at least ¶ [0137]: the large number of scores generated by the various reasoning algorithms are synthesized into confidence scores or confidence measures for the various hypotheses. This process involves applying weights to the various scores, where the weights have been determined through training of the statistical model employed by the QA pipeline 400 and/or dynamically updated. For example, the weights for scores generated by algorithms that identify exactly matching terms and synonym may be set relatively higher than other algorithms that are evaluating publication dates for evidence passages. The weights themselves may be specified by subject matter experts or learned through machine learning processes that evaluate the significance of characteristics evidence passages and their relative importance to overall candidate answer generation; see also at least ¶¶ [0121], [0123]-[0125], [0139], and [0147]).
Claim 19 is rejected under 35 U.S.C. § 103 as being unpatentable over Allen et al. (U.S. Pub. No. 2021/0343415 A1) (hereinafter “Allen”) in view of Lefkofsky et al. (U.S. Pub. No. 2021/0125731 A1) (hereinafter “Lefkofsky”) and further in view of Vashist et al. (U.S. Pub. No. 2022/0253729 A1) (hereinafter “Vashist”).
Claim 19: The combination of Allen and Lefkofsky teaches the limitations as shown in the rejections above.
Allen and Lefkofsky do not explicitly disclose, but Vashist, as shown, teaches the following limitations:
wherein the processing circuitry is further configured to calculate a cosine similarity between a vector indicating the first range and a vector indicating the second range as the degree of match between the chief complaint data and the medical knowledge (see at least ¶ [0021]: one technique for measuring similarity is by computing a distance measure (for example, an L2 distance, a cosine distance, a Manhattan distance, etc.). The knowledge driven solution described herein adopts such an approach for clinical trial results and/or outcome detection with the belief that there will be regularities in the use of terminology when multiple authors discuss clinical trial findings; see also at least ¶ [0055]: the similarity score, which is also referred to herein interchangeably as a similarity metric, refers to a distance between two feature vectors in a feature space formed based on the dimensionality of the text token feature vectors. In some embodiments, the distance between two feature vectors, refers to a Euclidian distance, an L2 distance, a cosine distance, a Minkowski distance, a Hamming distance, or any other vector space distance measure, or a combination thereof; see also at least ¶ [0092]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the knowledge generation techniques taught by Vashist with the medical management systems disclosed by Allen (as modified by Lefkofsky), because Vashist teaches at ¶ [0110] that these techniques leverage “regularities in language used for the target aspect and uses the regularities to predict that an input sentence is similar to a sentence previously known to relate to the target aspect and a ranking of sentence similarity using latent semantic indexing.” See also M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the knowledge generation techniques taught by Vashist with the medical management systems disclosed by Allen (as modified by Lefkofsky), because the claimed invention is merely a combination of old elements (the knowledge generation techniques taught by Vashist, the information processing and display techniques taught by Lefkofsky, and the medical management systems disclosed by Allen), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Response to Arguments
The arguments submitted with the Reply have been fully considered but are not persuasive.
Response to Arguments Under 35 U.S.C. § 101
Applicant argues that “the processing circuitry has the technical improvement of simplifying and reducing the process involved with calculating the relevance between chief complaint data and medical knowledge. The improved method increases the efficiency of performing more accurate diagnoses.” Reply, p. 9. Examiner disagrees, because the bulk of the processing in the claims has been identified as being directed toward abstract ideas. To the extent that the claims rely on computer implementation—and not all of the independent claims do—inclusion of these elements amounts to mere instructions to implement the identified abstract ideas on a computer. See M.P.E.P. § 2106.05(f). No aspect of the computer itself appears to be improved when simply automating the identified abstract ideas.
Applicant argues that “determining the ranges and calculating the degree of match based upon overlapping range is believed to be significantly more than an abstract idea” because “These processes allow for more efficient determination of overlap reducing processing speed, which is a technical operation.” Reply, p. 10. Applicant presents similar arguments regarding claims 17-19. Reply, pp. 10-11. Examiner disagrees, because this alleged improvement would be to the abstract idea, not anything relating to the technical aspects of the claimed invention. See SAP Am., Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 14 (Fed. Cir. Aug. 2, 2018) (“What is needed is an inventive concept in the non-abstract application realm. … [L]imitation of the claims to a particular field of information … does not move the claims out of the realm of abstract ideas.”). Moreover, “[A] claim for a new abstract idea is still an abstract idea.” Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (emphasis added). “[U]nder the Mayo/Alice framework, a claim directed to a newly discovered law of nature (or natural phenomenon or abstract idea) cannot rely on the novelty of that discovery for the inventive concept necessary for patent eligibility ….” Genetic Techs. Ltd. v. Merial L.L.C., 818 F.3d 1369, 1376 (Fed. Cir. 2016) (citations omitted).
Applicant argues that several of the features “are not believed to be practically performed in the mind.” Reply, p. 10. Examiner disagrees, because a person can look at data, determine two ranges (e.g., 40-70% and 60-80%), and determine an overlap in these ranges (here, 60-70%) for different considerations. A person can similarly perform operations such as adjusting weights or calculating a cosine similarity by mentally performing these mathematical operations.
Response to Arguments Under 35 U.S.C. §§ 102 and 103
Applicant argues that the combination of applied references does not teach or suggest all of the features of the amended independent claims. Reply, p. 12. Examiner disagrees for the reasons presented in the revised rejections under § 103.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references have been cited to further show the state of the art with respect to health information management, analysis, and presentation.
Cabrera, JR. et al. (U.S. Pub. No. 2018/0042558 A1) (health data visualization and user support tools);
Chen et al. (“Mapping of diseases from clinical medicine research—a visualization study.” Scientometrics 125 (2020): 171-185).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher Tokarczyk, whose telephone number is 571-272-9594. The examiner can normally be reached Monday-Thursday between 6:00 AM and 4:00 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER B TOKARCZYK/ Primary Examiner, Art Unit 3687