DETAILED ACTION
Notice to Applicant
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in reply to the application filed on 1/9/2023.
Claims 1-20 are currently pending and have been examined.
Information Disclosure Statement
The Information Disclosure Statement filed on 1/9/2023 has been considered. An initialed copy of the Form 1449 is enclosed herewith.
Priority
Applicant’s claim for the benefit of a prior-filed application (European provisional application EP 22151001, filed 1/9/2023) under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c), or under 35 U.S.C. 119(a)-(d) or (f) is acknowledged.
Claim Rejections - 35 USC § 112
112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Rejection
Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Mixed Statutory Class
Claims 1-11 and 13 are directed to a device and a method for using said device. Claims 1-11 and 13 are therefore of mixed statutory type. It has been held that a claim that recites both an apparatus and a method for using said apparatus (mixed statutory type) is indefinite under 35 U.S.C. 112(b), as such a claim does not sufficiently and precisely describe the invention so as to provide competitors with an accurate determination of the metes and bounds of protection involved (IPXL Holdings, L.L.C. v. Amazon.com, Inc., 430 F.3d 1377, 77 USPQ2d 1140 (Fed. Cir. 2005); Ex parte Lyell, 17 USPQ2d 1548 (BPAI 1990)). Claims 1-11 and 13 are thereby rejected and appropriate correction is required.
Claims 1-11, 16-20 and 14 are directed to an apparatus and a method for using said apparatus. Claims 1-11, 16-20 and 14 are therefore of mixed statutory type. It has been held that a claim that recites both an apparatus and a method for using said apparatus (mixed statutory type) is indefinite under 35 U.S.C. 112(b), as such a claim does not sufficiently and precisely describe the invention so as to provide competitors with an accurate determination of the metes and bounds of protection involved (IPXL Holdings, L.L.C. v. Amazon.com, Inc., 430 F.3d 1377, 77 USPQ2d 1140 (Fed. Cir. 2005); Ex parte Lyell, 17 USPQ2d 1548 (BPAI 1990)). Claims 1-11, 16-20 and 14 are thereby rejected and appropriate correction is required.
Claims 1-11, 16-20 and 15 are directed to a computer-readable storage medium and a method for using said computer-readable storage medium. Claims 1-11, 16-20 and 15 are therefore of mixed statutory type. It has been held that a claim that recites both a product and a method for using said product (mixed statutory type) is indefinite under 35 U.S.C. 112(b), as such a claim does not sufficiently and precisely describe the invention so as to provide competitors with an accurate determination of the metes and bounds of protection involved (IPXL Holdings, L.L.C. v. Amazon.com, Inc., 430 F.3d 1377, 77 USPQ2d 1140 (Fed. Cir. 2005); Ex parte Lyell, 17 USPQ2d 1548 (BPAI 1990)). Claims 1-11, 16-20 and 15 are thereby rejected and appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Certain Methods of Organizing Human Activity
Applicant discloses (Applicant’s Specification, [0007]) that there is a need for advanced techniques of assessing the performance of a trained ML algorithm. Thus, a need exists to organize these human activities by determining the performance of trained ML algorithms using the steps of “obtaining validated radiology reports, parsing validated radiology reports, generating predictions, determining performances,” etc. Applicant’s method/computer-readable medium/apparatus is therefore directed to a certain method of organizing human activity as described and disclosed by Applicant.
Rejection
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 12, 13, 14 and 15 are directed to the abstract idea of “determining performance of trained ML algorithms,” etc. (Applicant’s Specification, Abstract, paragraph [0009]), as explained in detail below, and are thus grouped as a certain method of organizing human activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea. Accordingly, claims 1-20 recite an abstract idea.
Step 2A Prong 1 – The Judicial Exception
The claims recite, in part, a method/computer-readable medium/apparatus for performing the steps of “obtaining validated radiology reports, parsing validated radiology reports, generating predictions, determining performances,” etc., that is, “determining performance of trained ML algorithms,” which is a method of managing personal behavior or relationships or interactions between people (social activities, teaching, and following rules or instructions) and is thus grouped as a certain method of organizing human activity. Accordingly, claims 1-20 recite an abstract idea.
Step 2A Prong 2 – Integration of the Judicial Exception into a Practical Application
This judicial exception is not integrated into a practical application because the generically recited additional computer elements (i.e., microcontrollers, graphics processor units, integrated circuits, memory devices, processors, computational devices, medical imaging equipment (Applicant’s Specification [0021], [0047]-[0049]), etc.) used to perform the steps of “obtaining validated radiology reports, parsing validated radiology reports, generating predictions, determining performances,” etc. do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer, and this is nothing more than an attempt to generally link the abstract idea to a particular technological environment. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limit on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
Insignificant extra-solution activity
Claims 1-20 recite storing data steps, retrieving data steps, providing data steps, and output steps (Bilski v. Kappos, 561 U.S. 593, 610-12 (2010); Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Can., 771 F. Supp. 2d 1054, 1066 (E.D. Mo. 2011), aff’d, 687 F.3d 1266 (Fed. Cir. 2012)), and/or transmitting data steps (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1241-42 (Fed. Cir. 2016)) that are insignificant extra-solution activity. Extra-solution activity limitations are insufficient to transform judicially excepted subject matter into a patent-eligible application (MPEP § 2106.05(g)).
Step 2B – Search for an Inventive Concept/Significantly More
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration into a practical application, the additional elements (i.e., microcontrollers, graphics processor units, integrated circuits, memory devices, processors, computational devices, medical imaging equipment, etc.) are recited at a high level of generality, and the written description indicates that these elements are generic computer components. Using generic computer components to perform abstract ideas does not provide a necessary inventive concept (Alice, 573 U.S. at 223 (“the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention”)). Accordingly, the claims are not patent eligible.
Individually and in Combination
The additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The additional elements amount to no more than generic computer components that serve to merely link the abstract idea to a particular technological environment (i.e., microcontrollers, graphics processor units, integrated circuits, memory devices, processors, computational devices, medical imaging equipment, etc.). At paragraphs [0021] and [0047]-[0049], Applicant’s specification describes generic computer hardware for implementing the above-described functions, including “microcontrollers, graphics processor units, integrated circuits, memory devices, processors, computational devices, medical imaging equipment,” etc., to perform the functions of “obtaining validated radiology reports, parsing validated radiology reports, generating predictions, determining performances,” etc. The recited “microcontrollers, graphics processor units, integrated circuits, memory devices, processors, computational devices, medical imaging equipment,” etc. do not add meaningful limitations to the idea beyond generally linking the system to a particular technological environment, that is, implementation via computers. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, improves any other technology, improves a technical field, or provides a technical improvement to a technical problem. Their collective functions merely provide generic computer implementation. Therefore, claims 1-20 do not amount to significantly more than the underlying abstract idea (Alice).
Dependent Claims
Dependent claims 2-11 and 16-20 include all the limitations of the parent claims and are directed to the same abstract idea as discussed above and incorporated herein.
Although dependent claims 2-11 and 16-20 add additional limitations, they only serve to further limit the abstract idea by reciting limitations on what the information is and how it is received and used. Dependent claims 2-11 and 16-20 merely describe physical structures to implement the abstract idea. This information and these physical characteristics do not change the fundamental analysis under the abstract idea grouping of certain methods of organizing human activity, and, when viewed individually or as a whole, they do not add anything substantial beyond the abstract idea. Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the claims, when taken as a whole, are ineligible for the same reasons as independent claims 1, 12, 13, 14 and 15.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Putha et al. (US 2020/0151871), in view of Vianu et al. (US 2020/0334809).
CLAIM 1
As per claim 1, Putha et al. disclose:
A computer-implemented method (Putha et al., [0002] methods), comprising:
obtaining a validated radiology report of a patient and medical imaging data of the patient associated with the validated radiology report (Putha et al., Figure 1, Figure 3);
parsing the validated radiology report to obtain a validated label of at least one diagnosis (Putha et al., [0040] Natural language processing algorithms were developed to parse unstructured radiology reports…);
generating, by a trained machine-learning algorithm at a computing device, a prediction of the at least one diagnosis based on the medical imaging data (Putha et al., [0013] Predicting the presence/absence of a particular type of medical abnormalities by combining the predictions of multiple models, wherein the models are selected using various heuristics; [0035] machine learning analysis (such as trained models of image detection of certain medical conditions)).
Putha et al. fail to expressly disclose:
determining a performance of the trained machine-learning algorithm based on a comparison of the validated label of the at least one diagnosis and the prediction of the at least one diagnosis.
However, Vianu et al. teach:
determining a performance of the trained machine-learning algorithm based on a comparison of the validated label of the at least one diagnosis and the prediction of the at least one diagnosis (Vianu et al., [0061] These measures of uncertainty will be based on quantitative assessments of the computer-implemented algorithm's performance in training and validation datasets. The measures of uncertainty may also incorporate measures of the underlying variability in accuracy of the training and validation datasets themselves.).
One of ordinary skill in the art before the effective filing date would have found it obvious to include “determining a performance of the trained machine-learning algorithm based on a comparison of the validated label of the at least one diagnosis and the prediction of the at least one diagnosis,” as taught by Vianu et al., within the method as taught by Putha et al., with the motivation of providing diagnostic error detection (Vianu et al., [0002]).
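For illustration only, and not as a characterization of Applicant’s disclosure or of the cited references, the combined limitations of claim 1 (parsing a validated report to obtain a label, then determining performance by comparing the label against a model prediction) may be sketched as follows; the function names and the report format are hypothetical:

```python
# Illustrative sketch (hypothetical names): parse a validated report for a
# diagnosis label and compare it against a trained model's prediction.

def parse_validated_label(report_text: str) -> str:
    """Toy parser: extract a diagnosis label from a 'Diagnosis:' line."""
    for line in report_text.splitlines():
        if line.lower().startswith("diagnosis:"):
            return line.split(":", 1)[1].strip().lower()
    return "unknown"

def determine_performance(validated_labels, predictions) -> float:
    """Fraction of predictions that match the validated labels."""
    matches = sum(v == p for v, p in zip(validated_labels, predictions))
    return matches / len(validated_labels)

report = "Findings: ...\nDiagnosis: Pulmonary embolism"
validated = [parse_validated_label(report)]
predicted = ["pulmonary embolism"]  # stand-in for a trained model's output
print(determine_performance(validated, predicted))  # 1.0
```

In this sketch the performance metric is simple label accuracy; a deviation-based metric, as recited in dependent claim 9, would substitute a distance between label and prediction.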
CLAIM 2
As per claim 2, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
further comprising: triggering an update of parameters of the trained machine-learning algorithm based on the validated label in response to the performance of the trained machine-learning algorithm being lower than a threshold (Putha et al., Figure 1).
CLAIM 3
As per claim 3, Putha et al. and Vianu et al. teach the method of claim 2 and further disclose the limitations of:
further comprising: providing, to a central computing device, the updated parameters of the trained machine-learning algorithm; and upon providing the updated parameters, receiving, from the central computing device, an update of the trained machine-learning algorithm (Putha et al., Figure 1, Figure 3).
CLAIM 4
As per claim 4, Putha et al. and Vianu et al. teach the method of claim 3 and further disclose the limitations of:
wherein the update of the trained machine-learning algorithm is performed by the central computing device using at least one of secure aggregation or federated averaging based on the updated parameters of the trained machine-learning algorithm and on at least one additional update of the parameters of the trained machine-learning algorithm, the at least one additional update of the parameters being received by the central computing device from one or more additional computing devices running the trained machine-learning algorithm (Putha et al., [0035] machine learning analysis (such as trained models of image detection of certain medical conditions)).
CLAIM 5
As per claim 5, Putha et al. and Vianu et al. teach the method of claim 2 and further disclose the limitations of:
further comprising: receiving, at the computing device from one or more additional computing devices running the trained machine-learning algorithm, at least one additional update of the parameters of the trained machine-learning algorithm; and determining an update of the trained machine-learning algorithm using at least one of secure aggregation or federated averaging based on the updated parameters and on the at least one additional update of the parameters (Putha et al., Figure 1, Figure 3).
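For illustration only, the federated averaging recited in claims 4 and 5 (combining parameter updates received from multiple computing devices) may be sketched as an equally weighted element-wise mean; all names and the equal weighting are hypothetical, and secure aggregation would additionally keep the individual devices’ updates confidential during combination:

```python
# Illustrative sketch (hypothetical names): federated averaging of parameter
# updates received from several computing devices running the same model.

def federated_average(parameter_updates):
    """Element-wise mean of equally weighted parameter vectors."""
    n = len(parameter_updates)
    return [sum(update[i] for update in parameter_updates) / n
            for i in range(len(parameter_updates[0]))]

local_update = [1.0, 2.0]                  # this device's updated parameters
remote_updates = [[3.0, 2.0], [5.0, 2.0]]  # updates from additional devices
print(federated_average([local_update] + remote_updates))  # [3.0, 2.0]
```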
CLAIM 6
As per claim 6, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
further comprising: selecting the trained machine-learning algorithm from a plurality of trained machine-learning algorithms based on the validated label of at least one diagnosis (Putha et al., [0035] machine learning analysis (such as trained models of image detection of certain medical conditions)).
CLAIM 7
As per claim 7, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
wherein the validated radiology report includes a structured report, and wherein said parsing of the validated radiology report includes extracting the validated label of at least one diagnosis (Vianu et al., [0021] ...one or more of the plurality of training data pairs are obtained from a database of structured checklists corresponding to medical diagnostic data, the medical diagnostic data including radiological reports and radiological exam images.).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
CLAIM 8
As per claim 8, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
wherein the validated radiology report includes a free-text report, and wherein said parsing of the validated radiology report includes applying at least one language agnostic and context aware text mining method to the validated radiology report, or applying at least one language-specific text mining method to the validated radiology report (Vianu et al., [0079] free-text notes).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
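For illustration only, a language-agnostic, keyword-based text-mining pass of the kind recited in claim 8 could be sketched as follows; the vocabulary and the report text are hypothetical:

```python
# Illustrative sketch (hypothetical vocabulary): keyword-based mining of a
# free-text report, with per-language term lists for each diagnosis label.

DIAGNOSIS_TERMS = {
    "fracture": ["fracture", "fraktur"],  # English and German terms
    "pneumothorax": ["pneumothorax"],     # term shared across languages
}

def mine_free_text(report_text):
    """Return diagnosis labels whose terms appear in the free text."""
    text = report_text.lower()
    return [label for label, terms in DIAGNOSIS_TERMS.items()
            if any(term in text for term in terms)]

print(mine_free_text("Befund: dislozierte Fraktur der 5. Rippe."))
# ['fracture']
```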
CLAIM 9
As per claim 9, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
wherein the performance is indicated by a deviation between the validated label of the at least one diagnosis and the prediction of the at least one diagnosis (Vianu et al., [0061] These measures of uncertainty will be based on quantitative assessments of the computer-implemented algorithm's performance in training and validation datasets. The measures of uncertainty may also incorporate measures of the underlying variability in accuracy of the training and validation datasets themselves.).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
CLAIM 10
As per claim 10, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
wherein the at least one diagnosis includes at least one of (i) an anatomical site of at least one abnormality, (ii) a size of the at least one abnormality, or (iii) a name of the at least one abnormality (Putha et al., Figure 2 Abnormal, [0004] the abnormality being detected).
CLAIM 11
As per claim 11, Putha et al. and Vianu et al. teach the method of claim 1 and further disclose the limitations of:
further comprising: obtaining a further validated radiology report of a further patient and further medical imaging data of the further patient associated with the further validated radiology report; parsing the further validated radiology report to obtain a further validated label of the at least one diagnosis; generating, by the trained machine-learning algorithm at the computing device, a further prediction of the at least one diagnosis based on the further medical imaging data; and wherein the determining of the performance of the trained machine-learning algorithm is further based on a further comparison of the further validated label of the at least one diagnosis and the further prediction of the at least one diagnosis (Putha et al., Figure 1, Figure 3).
CLAIM 16
As per claim 16, Putha et al. and Vianu et al. teach the method of claim 2 and further disclose the limitations of:
further comprising: selecting the trained machine-learning algorithm from a plurality of trained machine-learning algorithms based on the validated label of at least one diagnosis (Putha et al., [0035] machine learning analysis (such as trained models of image detection of certain medical conditions)).
CLAIM 17
As per claim 17, Putha et al. and Vianu et al. teach the method of claim 5 and further disclose the limitations of:
further comprising: selecting the trained machine-learning algorithm from a plurality of trained machine-learning algorithms based on the validated label of at least one diagnosis (Putha et al., [0035] machine learning analysis (such as trained models of image detection of certain medical conditions)).
CLAIM 18
As per claim 18, Putha et al. and Vianu et al. teach the method of claim 5 and further disclose the limitations of:
wherein the performance is indicated by a deviation between the validated label of the at least one diagnosis and the prediction of the at least one diagnosis (Vianu et al., [0061] These measures of uncertainty will be based on quantitative assessments of the computer-implemented algorithm's performance in training and validation datasets. The measures of uncertainty may also incorporate measures of the underlying variability in accuracy of the training and validation datasets themselves.).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
CLAIM 19
As per claim 19, Putha et al. and Vianu et al. teach the method of claim 8 and further disclose the limitations of:
wherein the performance is indicated by a deviation between the validated label of the at least one diagnosis and the prediction of the at least one diagnosis (Vianu et al., [0061] These measures of uncertainty will be based on quantitative assessments of the computer-implemented algorithm's performance in training and validation datasets. The measures of uncertainty may also incorporate measures of the underlying variability in accuracy of the training and validation datasets themselves.).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
CLAIM 20
As per claim 20, Putha et al. and Vianu et al. teach the method of claim 7 and further disclose the limitations of:
wherein the validated radiology report includes a free-text report, and wherein said parsing of the validated radiology report includes applying at least one language agnostic and context aware text mining method to the validated radiology report, or applying at least one language-specific text mining method to the validated radiology report (Vianu et al., [0079] free-text notes).
The obviousness of combining the teachings of Vianu et al. with the method as taught by Putha et al. is discussed in the rejection of claim 1, and incorporated herein.
CLAIM 12
As per claim 12, claim 12 is directed to an apparatus. Claim 12 recites the same or similar limitations as those addressed above for claims 1-11 and 16-20. Claim 12 is therefore rejected for the same reasons set forth above for claims 1-11 and 16-20.
CLAIM 13
As per claim 13, claim 13 is directed to a device. Claim 13 recites the same or similar limitations as those addressed above for claims 1-11 and 16-20. Claim 13 is therefore rejected for the same reasons set forth above for claims 1-11 and 16-20.
CLAIM 14
As per claim 14, claim 14 is directed to an apparatus. Claim 14 recites the same or similar limitations as those addressed above for claims 1-11 and 16-20. Claim 14 is therefore rejected for the same reasons set forth above for claims 1-11 and 16-20.
CLAIM 15
As per claim 15, claim 15 is directed to a computer-readable storage medium. Claim 15 recites the same or similar limitations as those addressed above for claims 1-11 and 16-20. Claim 15 is therefore rejected for the same reasons set forth above for claims 1-11 and 16-20.
Prior Art
The prior art made of record, though not relied upon in the present basis of rejection, is noted on the attached PTO-892 and includes:
Vianu et al. 416 (US 2020/0334416) disclose computer-implemented machine learning systems that are programmed to analyze digital image data alone or in combination with unstructured text, and more specifically pertain to methods for natural language understanding of radiology reports.
Vianu et al. 809 (US 2020/0334809) disclose computer-implemented machine learning systems and methods that are programmed to classify digital image data alone or in combination with unstructured text data, and more specifically pertain to machine learning systems and methods for diagnostic error detection.
Huang et al. 2020 (Reference U) have developed and compared different multimodal fusion model architectures that are capable of utilizing both pixel data from volumetric Computed Tomography Pulmonary Angiography scans and clinical patient data from the EMR to automatically classify Pulmonary Embolism (PE) cases. The best performing multimodality model is a late fusion model that achieves an AUROC of 0.947 [95% CI: 0.946–0.948] on the entire held-out test set, outperforming imaging-only and EMR-only single modality models.
Provenzano et al. 2021 (Reference V) disclose a review summarizing literature that compared predictive algorithms to radiologists in order to identify potential barriers to the reproducibility and implementation of AI research. The study concluded that standardized metrics and benchmarks for the development and reporting of ML algorithms in oncologic imaging are urgently needed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES P. COLEMAN whose telephone number is (571) 270-7788. The examiner can normally be reached Monday through Thursday, 7:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROBERT W. MORGAN, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C. P. C./
Examiner, Art Unit 3683
/ROBERT W MORGAN/Supervisory Patent Examiner, Art Unit 3683