DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
This analysis is based on the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (2024 AI SME Update), published on July 17, 2024 (89 FR 58128).
Step 1:
Claims 1-20 are directed to a system, a non-transitory computer-readable storage medium, or a method, which fall within the statutory categories of machine, manufacture, and process, respectively. Therefore, Step 1 is met.
Step 2A, Prong 1:
Claims 1, 14 and 17 recite “a processor” or “computer system” for “dynamically determining related content based at least in part on one or more prior expert selections that are associated with the selected content, one or more predefined or predetermined relationships with the selected content, or both.” This limitation, excluding the processor/computer, falls within the mental processes grouping of abstract ideas because it covers concepts performed in the human mind, including observation, evaluation, judgment, and opinion. Under its broadest reasonable interpretation when read in light of the specification, the recited determination of related content encompasses a mental process that can practically be performed in the human mind. See MPEP 2106.04(a)(2), subsection III.
Dependent claims 2-11, 15, 16, 18 and 19 do not add any specific functions to the processes performed by the above machines, but rather further characterize the related content. Therefore, these claims do not resolve the deficiencies of the independent claims, which recite mental processes.
Step 2A, Prong 2:
The limitations of claims 1, 14 and 17 are recited as being performed by a “processor” or “computer system”. The computer/circuitry is recited at a high level of generality and is used to perform an abstract idea, as discussed above in Step 2A, Prong 1, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f), which provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., whether the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
In evaluating whether the claimed invention integrates the judicial exception into a practical application, it must be clear that the claimed invention improves the functioning of a computer or improves another technology or technical field. To establish an improvement to a computer or technical field, the specification must set forth an improvement in technology, and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a).
According to the specification, the improvement is to present measurement data in an easily understood manner and to pare down the data a user needs to review. However, this is not clearly reflected in the language of the claims, as “providing the selected content and the determined related content” amounts to retrieving all possible related content without mention of how it is presented in an intuitive way.
Step 2B:
In claims 1, 14 and 17, the limitations of “receiving information specifying selected content” and “providing the selected content and the determined related content” amount to merely receiving data and outputting data. These limitations are considered insignificant extra-solution activity. Under Step 2B, these limitations are further evaluated to determine whether the extra-solution activity is well-understood, routine, and conventional in the field. See MPEP 2106.05(g). Receiving and presenting data is well-understood and routine in the field, and therefore these limitations do not add an inventive concept to the claims.
Dependent claims 12, 13 and 20 add limitations that further clarify the providing step without adding significant or unconventional activity to the claimed invention. Therefore, these claims do not resolve the issues set forth above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Min et al. (US 2022/0392065).
Regarding claim 1, Min et al. discloses a computer system, comprising:
a processor configured to execute program instructions (“computer processor” at paragraph 0014, second to last line; see also paragraph 1855); and
memory storing the program instructions, wherein, when executed by the processor (“electronic storage medium” at paragraph 0014, second to last line; see also paragraph 1855), the program instructions cause the computer system to perform operations comprising:
receiving information specifying selected content (“receiving an input of a request to generate the medical report for a patient, the request indicating a format for the medical report; receiving patient information relating to the patient, the patient information associated with the report generation request” at paragraph 0013, line 7);
dynamically determining related content (“In some embodiments, the system is configured to dynamically generate a patient-specific report based on the analysis of the processed data generated from the raw CT scan data. In some embodiments, the patient specific report is dynamically generated based on the processed data. In some embodiments, the written report is dynamically generated based on selecting and/or combining certain phrases from a database, wherein certain words, terms, and/or phrases are altered to be specific to the patient and the identified medical issues of the patient. In some embodiments, the system is configured to dynamically select one or more images from the image scanning data and/or the system generated image views described herein, wherein the selected one or more images are dynamically inserted into the written report in order to generate a patient-specific report based on the analysis of the processed data” at paragraph 0339) based at least in part on one or more prior expert selections that are associated with the selected content (“In some embodiments, at block 382, the system can be configured to generate a proposed treatment plan for the subject. For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the determined progression or regression of plaque and/or any other related measurement, condition, assessment, or related disease based on the comparison of the one or more parameters derived from two or more scans” at paragraph 0300; “Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque” at paragraph 0194, last sentence; the CNN is trained using annotated images as described in the last sentence of paragraph 1076, which constitutes an expert selection), one or more predefined or predetermined relationships with the selected content (“10) Can be compared to a normal reference population value. In some cases, there may be findings that, to maximize patient understanding, can be compared to normative reference values that are derived from population-based cohorts or other disease cohorts. This may be provided in percentile, by age comparison (e.g., heart age versus biological age), or by visual display (e.g., on a bell-shaped curve or histogram).” at paragraph 0914; “4) Give explanations of the results. In addition to presenting the results directly to the patient, an explanation of the meaning of the results can then be presented simultaneously. This is performed using defined aggregation algorithms with previously recorded definitions and discussions of the range of results expected for an individual test. For example, in the case of the cardiac CT angiogram report, we will develop short explanations of the significance of the result of narrowing of a blood vessel” at paragraph 0908, line 1), or both; and
providing the selected content and the determined related content (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence).
Regarding claim 14, Min et al. discloses a non-transitory computer-readable storage medium for use in conjunction with a computer system, the computer-readable storage medium configured to store a program module that, when executed by the computer system (“Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium” at paragraph 1719, line 1), causes the computer system to perform operations comprising:
receiving information specifying selected content (“receiving an input of a request to generate the medical report for a patient, the request indicating a format for the medical report; receiving patient information relating to the patient, the patient information associated with the report generation request” at paragraph 0013, line 7);
dynamically determining related content (“In some embodiments, the system is configured to dynamically generate a patient-specific report based on the analysis of the processed data generated from the raw CT scan data. In some embodiments, the patient specific report is dynamically generated based on the processed data. In some embodiments, the written report is dynamically generated based on selecting and/or combining certain phrases from a database, wherein certain words, terms, and/or phrases are altered to be specific to the patient and the identified medical issues of the patient. In some embodiments, the system is configured to dynamically select one or more images from the image scanning data and/or the system generated image views described herein, wherein the selected one or more images are dynamically inserted into the written report in order to generate a patient-specific report based on the analysis of the processed data” at paragraph 0339) based at least in part on one or more prior expert selections that are associated with the selected content (“In some embodiments, at block 382, the system can be configured to generate a proposed treatment plan for the subject. For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the determined progression or regression of plaque and/or any other related measurement, condition, assessment, or related disease based on the comparison of the one or more parameters derived from two or more scans” at paragraph 0300; “Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque” at paragraph 0194, last sentence; the CNN is trained using annotated images as described in the last sentence of paragraph 1076, which constitutes an expert selection), one or more predefined or predetermined relationships with the selected content (“10) Can be compared to a normal reference population value. In some cases, there may be findings that, to maximize patient understanding, can be compared to normative reference values that are derived from population-based cohorts or other disease cohorts. This may be provided in percentile, by age comparison (e.g., heart age versus biological age), or by visual display (e.g., on a bell-shaped curve or histogram).” at paragraph 0914; “4) Give explanations of the results. In addition to presenting the results directly to the patient, an explanation of the meaning of the results can then be presented simultaneously. This is performed using defined aggregation algorithms with previously recorded definitions and discussions of the range of results expected for an individual test. For example, in the case of the cardiac CT angiogram report, we will develop short explanations of the significance of the result of narrowing of a blood vessel” at paragraph 0908, line 1), or both; and
providing the selected content and the determined related content (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence).
Regarding claim 17, Min et al. discloses a method for dynamically determining related content, comprising: by a computer system:
receiving information specifying selected content (“receiving an input of a request to generate the medical report for a patient, the request indicating a format for the medical report; receiving patient information relating to the patient, the patient information associated with the report generation request” at paragraph 0013, line 7);
dynamically determining related content (“In some embodiments, the system is configured to dynamically generate a patient-specific report based on the analysis of the processed data generated from the raw CT scan data. In some embodiments, the patient specific report is dynamically generated based on the processed data. In some embodiments, the written report is dynamically generated based on selecting and/or combining certain phrases from a database, wherein certain words, terms, and/or phrases are altered to be specific to the patient and the identified medical issues of the patient. In some embodiments, the system is configured to dynamically select one or more images from the image scanning data and/or the system generated image views described herein, wherein the selected one or more images are dynamically inserted into the written report in order to generate a patient-specific report based on the analysis of the processed data” at paragraph 0339) based at least in part on one or more prior expert selections that are associated with the selected content (“In some embodiments, at block 382, the system can be configured to generate a proposed treatment plan for the subject. For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the determined progression or regression of plaque and/or any other related measurement, condition, assessment, or related disease based on the comparison of the one or more parameters derived from two or more scans” at paragraph 0300; “Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque” at paragraph 0194, last sentence; the CNN is trained using annotated images as described in the last sentence of paragraph 1076, which constitutes an expert selection), one or more predefined or predetermined relationships with the selected content (“10) Can be compared to a normal reference population value. In some cases, there may be findings that, to maximize patient understanding, can be compared to normative reference values that are derived from population-based cohorts or other disease cohorts. This may be provided in percentile, by age comparison (e.g., heart age versus biological age), or by visual display (e.g., on a bell-shaped curve or histogram).” at paragraph 0914; “4) Give explanations of the results. In addition to presenting the results directly to the patient, an explanation of the meaning of the results can then be presented simultaneously. This is performed using defined aggregation algorithms with previously recorded definitions and discussions of the range of results expected for an individual test. For example, in the case of the cardiac CT angiogram report, we will develop short explanations of the significance of the result of narrowing of a blood vessel” at paragraph 0908, line 1), or both; and
providing the selected content and the determined related content (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence).
Regarding claim 2, Min et al. discloses a system wherein the selected content comprises an image associated with a non-invasive characterization technique (“The images can be from a CT, MRI, ultrasound, or other type of scanner” at paragraph 0928, line 11).
Regarding claim 3, Min et al. discloses a system wherein the non-invasive characterization technique comprises: magnetic resonance imaging (MRI), computed tomography, x-ray imaging or ultrasound (“The images can be from a CT, MRI, ultrasound, or other type of scanner” at paragraph 0928, line 11).
Regarding claim 4, Min et al. discloses a system wherein the image corresponds to a portion of a human body (“In some embodiments, the system can be configured to utilize a vessel identification algorithm to identify and/or analyze one or more vessels within the medical image” at paragraph 0194, line 1; notably, the medical image is of a human patient).
Regarding claim 5, Min et al. discloses a system wherein the related content comprises a measurement associated with a different measurement technique than the non-invasive characterization technique (“In some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on results from blood chemistry or biomarker tests of the patient, for example whether certain blood chemistry or biomarker tests of the patient exceed certain threshold levels. In some embodiments, the system is configured to receive as input from the user or other systems and/or access blood chemistry or biomarker tests data of the patient from a database system” at paragraph 0320, line 13).
Regarding claim 6, Min et al. discloses a system wherein the selected content comprises or is associated with a biomarker, and the one or more predefined or predetermined relationships are between the biomarker and one or more additional biomarkers (“In some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on results from blood chemistry or biomarker tests of the patient, for example whether certain blood chemistry or biomarker tests of the patient exceed certain threshold levels. In some embodiments, the system is configured to receive as input from the user or other systems and/or access blood chemistry or biomarker tests data of the patient from a database system” at paragraph 0320, line 13).
Regarding claim 7, Min et al. discloses a system wherein the one or more predefined or predetermined relationships are based at least in part on: genetics information, chemistry, a behavior, an environmental factor, a disease (“4) Give explanations of the results. In addition to presenting the results directly to the patient, an explanation of the meaning of the results can then be presented simultaneously. This is performed using defined aggregation algorithms with previously recorded definitions and discussions of the range of results expected for an individual test. For example, in the case of the cardiac CT angiogram report, we will develop short explanations of the significance of the result of narrowing of a blood vessel” at paragraph 0908, line 1), a comorbidity or a biological pathway.
Regarding claim 8, Min et al. discloses a system wherein the one or more predefined or predetermined relationships may be based at least in part on medical literature (“As discussed herein, in some embodiments, the system is configured to take the guesswork out of interpretation of medical images and provide substantially exact and/or substantially accurate calculations or estimates of stenosis percentage, atherosclerosis, and/or Coronary Artery Disease—Reporting and Data System (CAD-RADS) score as derived from a medical image” at paragraph 0272, line 1; as evidenced by Paul, CAD-RADS is derived from medical literature: “The present invention relates to an automated determination of the value according to the CAD-RADS classification (Cury RC et al., “Coronary Artery Disease-Reporting and Data System (CAD-RADS): An Expert Consensus Document of SCCT, ACR and NASCI: Endorsed by the ACC.” JACC Cardiovasc Imaging 2016 Sep; 9(9): 1099-1113)” at paragraph 0028, line 1).
Regarding claim 9, Min et al. discloses a system wherein the content is associated with a patient, and the one or more prior expert selections are made by a physician and are associated with one or more different patients from the patient (“Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque” at paragraph 0194, last sentence; the CNN is trained using annotated images as described in the last sentence of paragraph 1076, which constitutes an expert selection, generally trained on other patient images).
Regarding claim 10, Min et al. discloses a system wherein the patient and the one or more different patients share one or more characteristics or attributes (by utilizing a trained neural network to identify and diagnose vessel disease, it is therefore implied that the training images constitute images of patient vessels).
Regarding claim 11, Min et al. discloses a system wherein the patient and the one or more different patients have a disease or a risk of having the disease (by utilizing a trained neural network to identify and diagnose vessel disease, it is therefore implied that the training images constitute images of patient vessels that either have vessel disease or have a risk of the vessel disease).
Regarding claim 12, Min et al. discloses a system wherein providing the selected content and the determined related content comprises displaying the selected content and the determined related content on a display (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence).
Regarding claim 13, Min et al. discloses a system wherein providing the determined related content comprises providing a user interface with user-interface features corresponding to the determined related content (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence; “In some embodiments, the system can be further configured to dynamically and/or automatically generate a visualization of the identified, quantified, and/or classified one or more coronary arteries and/or plaque, for example in the form of a graphical user interface” at paragraph 1498, second to last sentence).
Regarding claims 15 and 18, Min et al. discloses a medium and method wherein the selected content comprises an image associated with a non-invasive characterization technique (“The images can be from a CT, MRI, ultrasound, or other type of scanner” at paragraph 0928, line 11); and
wherein the related content comprises a measurement associated with a different measurement technique than the non-invasive characterization technique (“In some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on results from blood chemistry or biomarker tests of the patient, for example whether certain blood chemistry or biomarker tests of the patient exceed certain threshold levels. In some embodiments, the system is configured to receive as input from the user or other systems and/or access blood chemistry or biomarker tests data of the patient from a database system” at paragraph 0320, line 13).
Regarding claims 16 and 19, Min et al. discloses a medium and method wherein the content is associated with a patient, and the one or more prior expert selections are made by a physician and are associated with one or more different patients from the patient (“Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque” at paragraph 0194, last sentence; the CNN is trained using annotated images as described in the last sentence of paragraph 1076, which constitutes an expert selection, generally trained on other patient images).
Regarding claim 20, Min et al. discloses a method wherein providing the determined related content comprises providing a user interface with user-interface features corresponding to the determined related content (“In some embodiments, the system can be configured to generate a graphical representation and/or report at block 2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation to FIGS. 28C-28D” at paragraph 1714, last sentence; “In some embodiments, the system can be further configured to dynamically and/or automatically generate a visualization of the identified, quantified, and/or classified one or more coronary arteries and/or plaque, for example in the form of a graphical user interface” at paragraph 1498, second to last sentence).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571) 270-1574. The examiner can normally be reached Monday - Friday, 9:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATRINA R FUJITA/Primary Examiner, Art Unit 2672