Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 19, 2026 has been entered.
Status of Claims
This Office action for application 18/429,433 is in response to the communications filed February 19, 2026.
Claims 1-11 were amended February 19, 2026.
Claim 12 was cancelled February 19, 2026.
Claims 1-11 are currently pending and considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
As per claim 1,
Step 1: The claim recites subject matter within a statutory category, namely a machine.
Step 2A is a two-prong inquiry, in which Prong 1 determines whether a claim recites a judicial exception and Prong 2 determines whether the additional limitations of the claim integrate the recited judicial exception into a practical application. If the additional elements of the claim fail to integrate the judicial exception into a practical application, the claim is directed to the recited judicial exception, see MPEP 2106.04(II)(A).
Step 2A Prong 1: The claim contains subject matter that recites an abstract idea, with the steps of acquire a plurality of verification data sets including medical data, result data, and true/false data regarding the result data, the medical data, including CT images, X-ray images, MR images, ultrasound images, or scintigrams; and identify a target verification data set suitable for evaluating a performance required for a trained model that outputs the result data in response to input of the medical data among the plurality of verification data sets on the basis of a relationship between a first trained model that outputs the result data in response to input of the medical data and the plurality of verification data sets; input target medical data which is the medical data included in the target verification data set to output the result data in response to input of the medical data; calculate a first degree of matching which is a degree of matching between target true/false data and first result data, the target true/false data being the true/false data included in the plurality of verification data sets and the first result data being the result data output by the first trained model in response to input of the target medical data; calculate a second degree of matching which is a degree of matching between target true/false data and second result data, the second result data being the result data output by the second trained model in response to input of the target medical data; and determine that the performance of the second trained model is higher than that of the first trained model when the second degree of matching is higher than the first degree of matching; identify the verification data set including the result data of the positive result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true positive of the second trained model; identify the verification data set including the result data of the negative result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true negative of the second trained model; identify the verification data set including the result data of the positive result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false positive of the second trained model; and identify the verification data set including the result data of the negative result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false negative of the second trained model. These steps, as drafted, under the broadest reasonable interpretation recite:
certain methods of organizing human activity (e.g., fundamental economic principles or practices including: hedging; insurance; mitigating risk; etc., commercial or legal interactions including: agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations; etc., managing personal behavior or relationships or interactions between people including: social activities; teaching; following rules or instructions; etc.) but for recitation of generic computer components. That is, other than reciting steps as performed by the generic computer components, nothing in the claim element precludes the steps from being directed to certain methods of organizing human activity. The abstract idea identified above, in the context of this claim, encompasses a certain method of organizing human activity, namely managing personal behavior or relationships or interactions between people. This is because the limitations of the abstract idea recite a list of rules or instructions that a human person can follow in the course of their personal behavior. If a claim limitation, under its broadest reasonable interpretation, covers at least the recited methods of organizing human activity above, but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. See MPEP 2106.04(a).
Step 2A Prong 2: The claim does not recite additional elements that integrate the judicial exception into a practical application. In particular, the additional elements, considered apart from the abstract idea per se, do not integrate the abstract idea into a practical application because they amount to no more than limitations which:
amount to mere instructions to apply an exception, see MPEP 2106.05(f), such as:
“A medical information processing system comprising a modality including an X-ray computed tomography device, an X-ray diagnostic device, a magnetic resonance imaging device, an ultrasound diagnostic device, or a nuclear medical diagnostic device; and a medical information processing device including an input interface configured to be operable by a user, and processing circuitry, wherein the processing circuitry is configured to”, “obtained by inputting the medical data to a trained model” and “into the first trained model and a second trained model, the second trained model being a trained model different from the first trained model and being a machine learning model trained” which corresponds to merely using a computer as a tool to perform an abstract idea. Page 3, lines 12-16 of the specification describe that the hardware that implements the steps of the abstract idea amounts to a generic computer. Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.
add insignificant extra-solution activity to the abstract idea, see MPEP 2106.05(g), such as:
“being data generated by the modality” and “from the user via the input interface” which corresponds to mere data gathering and/or output.
Accordingly, this claim is directed to an abstract idea.
Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and/or generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, identified as insignificant extra-solution activity to the abstract idea, amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields such as:
computer functions that have been identified by the courts as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity, see MPEP 2106.05(d)(II), such as:
“being data generated by the modality” and “from the user via the input interface” which corresponds to receiving or transmitting data over a network.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 2,
Claim 2 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 2 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“evaluates a performance of a second trained model on the basis of a relationship between the first trained model and the target verification data set and a relationship between the second trained model and the verification data set,” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
“wherein the processing circuitry” and “wherein the second trained model is a trained model different from the first trained model and is a machine learning model trained to output the result data in response to input of the medical data.” further defines an additional element that was insufficient to provide a practical application and/or significantly more. The claim with this further defining limitation still corresponds to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 3,
Claim 3 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 3 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the result data is data regarding a diagnosis result, and the performance required of the trained model includes a low false positive false detection rate of the diagnostic result.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 4,
Claim 4 depends from claim 3 and inherits all the limitations of the claim from which it depends. Claim 4 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“identifies the target verification data set including the verification data set in which the result data is positive and the true/false data is negative.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
“wherein the processing circuitry” further defines an additional element that was insufficient to provide a practical application and/or significantly more. The claim with this further defining limitation still corresponds to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 5,
Claim 5 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 5 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the result data is data regarding a diagnosis result, and the performance required of the trained model includes a low false negative false detection rate of the diagnostic result.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 6,
Claim 6 depends from claim 5 and inherits all the limitations of the claim from which it depends. Claim 6 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“identifies the target verification data set including the verification data set in which the result data is negative and the true/false data is positive.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
“wherein the processing circuitry” further defines an additional element that was insufficient to provide a practical application and/or significantly more. The claim with this further defining limitation still corresponds to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 7,
Claim 7 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 7 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the true/false data includes data based on a finding report.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 8,
Claim 8 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 8 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the medical data is medical image data.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 9,
Claim 9 is substantially similar to claim 1. Accordingly, claim 9 is rejected for the same reasons as claim 1.
As per claim 10,
Claim 10 is substantially similar to claim 1. Accordingly, claim 10 is rejected for the same reasons as claim 1.
As per claim 11,
Claim 11 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 11 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the medical data includes medical image data of a target patient, wherein the result data includes either a positive result indicating that the trained model detected a specific disease in the target patient or a negative result indicating that the trained model did not detect the specific disease in the target patient, and wherein the true/false data includes the positive result or the negative result for the specific disease determined based on a finding report of a doctor.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Tahmasebi Maraghoosh et al. (US 2020/0233979; herein referred to as Tahmasebi Maraghoosh) in view of Remiszewski et al. (US 2016/0110584; herein referred to as Remiszewski).
As per claim 1,
Tahmasebi Maraghoosh teaches a medical information processing device comprising processing circuitry:
(Paragraphs [0007] and [0008] of Tahmasebi Maraghoosh. The teaching describes that a neural network may be trained to receive segmented image data, e.g., indicative of regions of interest for potential cancer concern, and to classify one or more of those regions of interest as malignant or benign. However, the segmented image data provided by the prior layer may be useful even without being used for classification, for instance, to annotate regions of interest for potential cancer concern in a digital image. Generally, in one aspect, a method may be implemented using one or more processors and may include: providing a digital key that is associated with a particular entity, wherein the particular entity has access to a machine learning model that is trained to generate one or more outputs based on data applied across a plurality of inputs)
Tahmasebi Maraghoosh further teaches acquire a plurality of verification data sets including medical data, result data obtained by inputting the medical data to a trained model, and true/false data regarding the result data:
(Paragraphs [0055]-[0057], [0084] and [0085] of Tahmasebi Maraghoosh. The teaching describes input data which may be received/obtained/retrieved from a variety of sources 444. These sources may include, but are not limited to, image data 444-1 obtained from medical imaging devices such as X-rays, CT scans, MRIs, EKG, etc., imaging protocol data 444-2 (e.g., digital imaging and communications in medicine, or “DICOM,” picture archiving and communication systems, or “PACS,” etc.), demographic data 444-3, and medical history data 444-4 (e.g., obtained from EHRs). Before or during input stage 438, an encryption key 446 may be provided, e.g., by AI provider system 100 to one or more remote computing systems 102 (see FIG. 1). This encryption key 446 may be used by one or more users (114 in FIG. 1) to generate, from data provided by sources 444, encrypted data 448. In some embodiments, a unique private digital key 426 (which may be similar to digital key 226) may be used at block 450 to decrypt the encrypted data 448, e.g., so that the decrypted data can then be applied as input across an unencrypted version of FFNN 420 (as shown at 451). At block 704, the system may cause the digital key to be applied as input across at least a portion of a trained machine learning model to generate one or more verification outputs. At block 706, the system may compare one or more of the verification outputs to one or more known verification outputs. In various embodiments, the one or more known verification outputs may have been generated based on prior application of the digital key as input across at least the same portion of the trained machine learning model. Intuitively, if a ML model remains unaltered, then applying the same data across the same portion of the ML model at different times should yield the same output.)
Tahmasebi Maraghoosh further teaches a modality including an X-ray computed tomography device, an X-ray diagnostic device, a magnetic resonance imaging device, an ultrasound diagnostic device, or a nuclear medical diagnostic device; and a medical information processing device including an input interface configured to be operable by a user and the medical data being data generated by the modality, including CT images, X-ray images, MR images, ultrasound images, or scintigrams:
(Paragraphs [0040] and [0055] of Tahmasebi Maraghoosh. The teaching describes that at input stage 438, input data may be received/obtained/retrieved from a variety of sources 444. These sources may include, but are not limited to, image data 444-1 obtained from medical imaging devices such as X-rays, CT scans, MRIs, EKG, etc., imaging protocol data 444-2 (e.g., digital imaging and communications in medicine, or “DICOM,” picture archiving and communication systems, or “PACS,” etc.), demographic data 444-3, and medical history data 444-4 (e.g., obtained from EHRs). Other sources of input data are also contemplated herein. When a particular user 114 (e.g., a nurse) operates a client device 112 to interact with the software application, the nurse may log into the client device 112 with one or more credentials. These credentials may authenticate the nurse to utilize the software application to apply data across one or more ML models. The nurse may not be made explicitly aware that he or she will be accessing a ML model. Rather, the nurse may simply interact with a graphical user interface (“GUI”) or other input component to see some patient data that is generated by a CDS algorithm in response to various other data.)
Tahmasebi Maraghoosh further teaches identify a target verification data set suitable for evaluating a performance required for a trained model that outputs the result data in response to input of the medical data among the plurality of verification data sets on the basis of a relationship between a first trained model that outputs the result data in response to input of the medical data and the verification data sets.:
(Paragraphs [0086]-[0088] of Tahmasebi Maraghoosh. The teaching describes that the system may determine an outcome of the comparing at block 706. If the answer at block 708 is that there is not a match, then at block 710, the system may determine that one or more parameters of the trained machine learning model have been compromised. For example, the verification output generated at block 704 may not precisely match the known verification outputs. One possible cause is that one or more parameters of the ML model have been tampered with, resulting in the disparity between the verification outputs generated at block 704 and the known verification outputs. Back at block 708, if the answer is no, then at block 714, the system may determine that the trained ML model remains uncompromised. In some embodiments, no further action may be taken. In other embodiments, the successful integrity check may be logged, e.g., so that future investigators are able to determine that, at least at one point in time, the ML model was not compromised. This may help them determine when the ML model later became compromised, should that occur.)
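For clarity of the record, the integrity check quoted above reduces to a fixed-input comparison, summarized in the following sketch (the Examiner's illustration only; the function and variable names are hypothetical and do not appear in the reference):

```python
# Minimal sketch of the quoted integrity check (blocks 704-714).
# All names are hypothetical; "model" is any callable standing in for
# the trained machine learning model.

def verify_model_integrity(model, digital_key_input, known_verification_outputs):
    """Apply a fixed digital-key input across the model (block 704) and
    compare the result to previously recorded outputs (block 706)."""
    verification_outputs = model(digital_key_input)
    if verification_outputs != known_verification_outputs:
        # Block 710: a mismatch suggests one or more parameters of the
        # trained model have been altered.
        return "compromised"
    # Block 714: matching outputs indicate the model remains unaltered.
    return "uncompromised"

# Usage with a stand-in "model":
model = lambda xs: [2 * x for x in xs]
print(verify_model_integrity(model, [1, 2, 3], [2, 4, 6]))  # uncompromised
```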
Tahmasebi Maraghoosh further teaches input target medical data which is the medical data included in the target verification data set, into the first trained model and a second trained model, the second trained model being a trained model different from the first trained model and being a machine learning model trained to output the result data in response to input of the medical data; calculate a first degree of matching which is a degree of matching between target true/false data and first result data, the target true/false data being the true/false data included in the verification data sets and the first result data being the result data output by the first trained model in response to input of the target medical data; calculate a second degree of matching which is a degree of matching between target true/false data and second result data, the second result data being the result data output by the second trained model in response to input of the target medical data; and determine that the performance of the second trained model is higher than that of the first trained model when the second degree of matching is higher than the first degree of matching:
(Paragraphs [0082]-[0088] and Figure 7 of Tahmasebi Maraghoosh. The teaching describes the validation steps of machine learning models to determine that any given machine learning model is uncompromised. The determination of being uncompromised, and the level of compromise, is construed as a degree of matching between a target true/false data set and a first or second result data set. When a first machine learning model has a low degree of match, one or more compromised parameters in the first machine learning model are determined and indicated. This first model is then retrained and retested for a match. When a second machine learning model has a high degree of match, the model is determined to be uncompromised. This not only establishes that the first and second models are trained differently, but also establishes that the second machine learning model has a higher degree of match than the first.)
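Under this construction, the claimed comparative evaluation amounts to computing an agreement ratio for each trained model over the target verification data sets and comparing the two ratios, as in the following minimal sketch (the Examiner's illustration; the agreement-ratio metric and all names are assumptions rather than disclosures of the reference):

```python
def degree_of_matching(true_false_data, result_data):
    """Fraction of target verification data sets on which a model's
    result data agrees with the recorded true/false data."""
    matches = sum(1 for truth, result in zip(true_false_data, result_data)
                  if truth == result)
    return matches / len(true_false_data)

def second_model_is_better(truth, first_results, second_results):
    """Claimed determination: the second trained model's performance is
    higher when its degree of matching exceeds the first model's."""
    return (degree_of_matching(truth, second_results)
            > degree_of_matching(truth, first_results))

# Hypothetical positive/negative labels over five verification data sets:
truth          = ["pos", "neg", "pos", "pos", "neg"]
first_results  = ["pos", "pos", "neg", "pos", "neg"]  # 3/5 agreement
second_results = ["pos", "neg", "pos", "pos", "neg"]  # 5/5 agreement
print(second_model_is_better(truth, first_results, second_results))  # True
```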
Tahmasebi Maraghoosh does not explicitly teach wherein the processing circuitry is further configured to: identify the verification data set including the result data of the positive result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true positive of the trained model from the user via the input interface; identify the verification data set including the result data of the negative result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true negative of the trained model from the user via the input interface; identify the verification data set including the result data of the positive result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false positive of the trained model from the user via the input interface; and identify the verification data set including the result data of the negative result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false negative of the trained model from the user via the input interface.
However, Remiszewski teaches wherein the processing circuitry is further configured to: identify the verification data set including the result data of the positive result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true positive of the trained model from the user via the input interface; identify the verification data set including the result data of the negative result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of true negative of the trained model from the user via the input interface; identify the verification data set including the result data of the positive result and the true/false data of the negative result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false positive of the trained model from the user via the input interface; and identify the verification data set including the result data of the negative result and the true/false data of the positive result from among the plurality of verification data sets as the target verification data set in response to request to increase a detection rate of false negative of the trained model from the user via the input interface:
(Paragraphs [0019] and [0141]-[0148] and Figure 3 of Remiszewski. The teaching describes that the present invention, models, algorithms, annotations, training and/or test sets may be labeled and/or branded with the validation source to provide additional weight to validity of outcomes and claims and to direct the use of the output, for example the output confidence level in the accuracy of a classification or a collection of images. The annotation data may enhance value and confidence, as representing key opinion leaders by name or by institution association. In this way, the combination of visual/spatial representation of reference data and name association of the label source or sample source used for annotation is able to provide quantitative and objective verification of the source of predictions and class associations made. In an aspect, the system may create an annotation region for the region of interest pixels, and assign the annotation region a new class based upon the difference analysis. The method of FIG. 3 may include determining a true positive region of interest or true negative region of interest 122. For example, the system may identify pixels of the comparison image that include a true positive region of interest or a true negative region of interest. A true positive region may include, for example, a region of the comparison image where a true image indicates that a class of cancer is present in the true image (e.g., a medical professional annotated the true image with the class of cancer), and where the spectra from the prediction image indicate that a class of cancer is present in the prediction image. A true negative may include, for example, a region of the comparison image where a true image of the biological sample indicates that a class of cancer is not present in the true image (e.g., a medical professional annotated the true image to indicate a class of cancer is not present in the true image), and where the spectra from the prediction image indicates that a class of cancer is not present in the prediction image. The method may also include determining any false positive region of interest and any false negative region of interest 124. In an aspect, the system may identify pixels of the comparison image that include a false positive region of interest or a false negative region of interest. A false positive region of interest may include, for example, a region in the comparison image where the true image indicates that a class of cancer is not present in the true image and the spectra from the prediction image indicates that the class of cancer is present in the prediction image. A false negative region of interest may include, for example, a region in the comparison image where the true image indicates that a class of cancer is present in the true image and the spectra from the prediction image indicates that the class of cancer is not present in the prediction image. The method may further include performing a difference analysis between the true image and the prediction image 118.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the image-based machine learning teachings of Tahmasebi Maraghoosh the image-based machine learning teachings of Remiszewski. Paragraph [0019] of Remiszewski teaches that the machine learning methods, particularly those relating to annotation, enhance the value and confidence of the models used to classify images. One of ordinary skill in the art would have added the teaching of Remiszewski to the teaching of Tahmasebi Maraghoosh based on this incentive without yielding unexpected results.
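For completeness of the record, the disputed selection limitation, as read on the combined teaching, reduces to filtering the verification data sets by the (result data, true/false data) combination associated with the user's request, as in the following hedged sketch (the mapping keys and all names are the Examiner's shorthand, not language from either reference):

```python
# Map each user request to the (result data, true/false data) pair that
# the claim recites for identifying the target verification data set.
REQUEST_TO_PAIR = {
    "increase_true_positive_rate":  ("pos", "pos"),
    "increase_true_negative_rate":  ("neg", "neg"),
    "increase_false_positive_rate": ("pos", "neg"),
    "increase_false_negative_rate": ("neg", "pos"),
}

def identify_target_sets(verification_sets, request):
    """Select the verification data sets whose (result, truth) pair
    matches the combination associated with the user's request."""
    wanted_result, wanted_truth = REQUEST_TO_PAIR[request]
    return [v for v in verification_sets
            if v["result"] == wanted_result and v["truth"] == wanted_truth]

sets = [{"id": 1, "result": "pos", "truth": "neg"},
        {"id": 2, "result": "pos", "truth": "pos"},
        {"id": 3, "result": "neg", "truth": "pos"}]
print(identify_target_sets(sets, "increase_false_positive_rate"))
# [{'id': 1, 'result': 'pos', 'truth': 'neg'}]
```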
As per claim 2,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the processing circuitry evaluates a performance of a second trained model on the basis of a relationship between the first trained model and the target verification data set and a relationship between the second trained model and the verification data set, wherein the second trained model is a trained model different from the first trained model and is a machine learning model trained to output the result data in response to input of the medical data:
(Paragraph [0038] of Tahmasebi Maraghoosh. The teaching describes an Integrity engine 108 which may be configured to examine various aspects of ML models stored locally to AI provider system 100 (e.g., in database 104) and/or remotely, e.g., in database 116, to determine whether and/or how those ML models may have been compromised. For example, a malicious party may gain access to a ML model stored in database 116 and may alter one or more aspects of the ML model, such as altering or deleting one or more parameters or weights in various layers. Alternatively, a licensed entity may attempt to make changes to its locally stored model when it is not licensed to do so. In either case, integrity engine 108 may be configured to apply various techniques described herein, or cause these techniques to be applied at one or more remote computing systems 102, in order to verify the integrity of a ML model and/or to take appropriate remedial action when it determines that a ML model has been compromised. In some embodiments, integrity engine 108 may verify the integrity of a ML model by applying a digital key as input across the ML model to generate output, which is then verified by integrity engine 108 as described herein.)
As per claim 3,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the result data is data regarding a diagnosis result, and the performance required of the trained model includes a low false positive false detection rate of the diagnostic result:
(Paragraphs [0086]-[0088] and [0103] of Tahmasebi Maraghoosh. The teaching describes that the system may determine an outcome of the comparing at block 706. If the answer at block 708 is that there is not a match, then at block 710, the system may determine that one or more parameters of the trained machine learning model have been compromised. For example, the verification output generated at block 704 may not precisely match the known verification outputs. One possible cause is that one or more parameters of the ML model have been tampered with, resulting in the disparity between the verification outputs generated at block 704 and the known verification outputs. Back at block 708, if the answer is no, then at block 714, the system may determine that the trained ML model remains uncompromised. In some embodiments, no further action may be taken. In other embodiments, the successful integrity check may be logged, e.g., so that future investigators are able to determine that, at least at one point in time, the ML model was not compromised. This may help them determine when the ML model later became compromised, should that occur. Output of a first trained model may be used as part of a first CDS algorithm to make one diagnosis. This means that when the output and the verification data do not exactly match, there is either a positive in the output and a negative in the verification, or vice versa, establishing authenticity in the ML outputs.)
As per claim 4,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 3.
Tahmasebi Maraghoosh further teaches wherein the processing circuitry identifies the target verification data set including the verification data set in which the result data is positive and the true/false data is negative:
(Paragraphs [0086]-[0088] and [0103] of Tahmasebi Maraghoosh. The teaching describes that the system may determine an outcome of the comparing at block 706. If the answer at block 708 is that there is not a match, then at block 710, the system may determine that one or more parameters of the trained machine learning model have been compromised. For example, the verification output generated at block 704 may not precisely match the known verification outputs. One possible cause is that one or more parameters of the ML model have been tampered with, resulting in the disparity between the verification outputs generated at block 704 and the known verification outputs. Back at block 708, if the answer is no, then at block 714, the system may determine that the trained ML model remains uncompromised. In some embodiments, no further action may be taken. In other embodiments, the successful integrity check may be logged, e.g., so that future investigators are able to determine that, at least at one point in time, the ML model was not compromised. This may help them determine when the ML model later became compromised, should that occur. Output of a first trained model may be used as part of a first CDS algorithm to make one diagnosis. This means that when the output and the verification data do not exactly match, there is either a positive in the output and a negative in the verification, or vice versa, establishing authenticity in the ML outputs.)
As per claim 5,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the result data is data regarding a diagnosis result, and the performance required of the trained model includes a low false negative false detection rate of the diagnostic result:
(Paragraphs [0086]-[0088] and [0103] of Tahmasebi Maraghoosh. The teaching describes that the system may determine an outcome of the comparing at block 706. If the answer at block 708 is that there is not a match, then at block 710, the system may determine that one or more parameters of the trained machine learning model have been compromised. For example, the verification output generated at block 704 may not precisely match the known verification outputs. One possible cause is that one or more parameters of the ML model have been tampered with, resulting in the disparity between the verification outputs generated at block 704 and the known verification outputs. Back at block 708, if the answer is no, then at block 714, the system may determine that the trained ML model remains uncompromised. In some embodiments, no further action may be taken. In other embodiments, the successful integrity check may be logged, e.g., so that future investigators are able to determine that, at least at one point in time, the ML model was not compromised. This may help them determine when the ML model later became compromised, should that occur. Output of a first trained model may be used as part of a first CDS algorithm to make one diagnosis. This means that when the output and the verification data do not exactly match, there is either a positive in the output and a negative in the verification, or vice versa, establishing authenticity in the ML outputs.)
As per claim 6,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 5.
Tahmasebi Maraghoosh further teaches wherein the processing circuitry identifies the target verification data set including the verification data set in which the result data is negative and the true/false data is positive:
(Paragraphs [0086]-[0088] and [0103] of Tahmasebi Maraghoosh. The teaching describes that the system may determine an outcome of the comparing at block 706. If the answer at block 708 is that there is not a match, then at block 710, the system may determine that one or more parameters of the trained machine learning model have been compromised. For example, the verification output generated at block 704 may not precisely match the known verification outputs. One possible cause is that one or more parameters of the ML model have been tampered with, resulting in the disparity between the verification outputs generated at block 704 and the known verification outputs. Back at block 708, if the answer is no, then at block 714, the system may determine that the trained ML model remains uncompromised. In some embodiments, no further action may be taken. In other embodiments, the successful integrity check may be logged, e.g., so that future investigators are able to determine that, at least at one point in time, the ML model was not compromised. This may help them determine when the ML model later became compromised, should that occur. Output of a first trained model may be used as part of a first CDS algorithm to make one diagnosis. This means that when the output and the verification data do not exactly match, there is either a positive in the output and a negative in the verification, or vice versa, establishing authenticity in the ML outputs.)
As per claim 7,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the true/false data includes data based on a finding report:
(Paragraphs [0049], [0055]-[0057], [0084] and [0085] of Tahmasebi Maraghoosh. The teaching describes input data which may be received/obtained/retrieved from a variety of sources 444. These sources may include, but are not limited to, image data 444-1 obtained from medical imaging devices such as X-rays, CT scans, MRIs, EKG, etc., imaging protocol data 444-2 (e.g., digital imaging and communications in medicine, or “DICOM,” picture archiving and communication systems, or “PACS,” etc.), demographic data 444-3, and medical history data 444-4 (e.g., obtained from EHRs). Before or during input stage 438, an encryption key 446 may be provided, e.g., by AI provider system 100 to one or more remote computing systems 102 (see FIG. 1). This encryption key 446 may be used by one or more users (114 in FIG. 1) to generate, from data provided by sources 444, encrypted data 448. In some embodiments, a unique private digital key 426 (which may be similar to digital key 226) may be used at block 450 to decrypt the encrypted data 448, e.g., so that the decrypted data can then be applied as input across an unencrypted version of FFNN 420 (as shown at 451). At block 704, the system may cause the digital key to be applied as input across at least a portion of a trained machine learning model to generate one or more verification outputs. At block 706, the system may compare one or more of the verification outputs to one or more known verification outputs. In various embodiments, the one or more known verification outputs may have been generated based on prior application of the digital key as input across at least the same portion of the trained machine learning model. Intuitively, if a ML model remains unaltered, then applying the same data across the same portion of the ML model at different times should yield the same output. Consequently, in some embodiments, if subsequent verification output 228 is generated that does not match known previously-generated verification outputs 230, that may indicate that FFNN 220 has been compromised. This means that the known verification output is based on previous findings of the machine learning model. This is construed as a findings report.)
As per claim 8,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the medical data is medical image data:
(Paragraphs [0055]-[0057], [0084] and [0085] of Tahmasebi Maraghoosh. The teaching describes input data which may be received/obtained/retrieved from a variety of sources 444. These sources may include, but are not limited to, image data 444-1 obtained from medical imaging devices such as X-rays, CT scans, MRIs, EKG, etc., imaging protocol data 444-2 (e.g., digital imaging and communications in medicine, or “DICOM,” picture archiving and communication systems, or “PACS,” etc.), demographic data 444-3, and medical history data 444-4 (e.g., obtained from EHRs). Before or during input stage 438, an encryption key 446 may be provided, e.g., by AI provider system 100 to one or more remote computing systems 102 (see FIG. 1). This encryption key 446 may be used by one or more users (114 in FIG. 1) to generate, from data provided by sources 444, encrypted data 448. In some embodiments, a unique private digital key 426 (which may be similar to digital key 226) may be used at block 450 to decrypt the encrypted data 448, e.g., so that the decrypted data can then be applied as input across an unencrypted version of FFNN 420 (as shown at 451). At block 704, the system may cause the digital key to be applied as input across at least a portion of a trained machine learning model to generate one or more verification outputs. At block 706, the system may compare one or more of the verification outputs to one or more known verification outputs. In various embodiments, the one or more known verification outputs may have been generated based on prior application of the digital key as input across at least the same portion of the trained machine learning model. Intuitively, if a ML model remains unaltered, then applying the same data across the same portion of the ML model at different times should yield the same output.)
As per claim 9,
Claim 9 is substantially similar to claim 1. Accordingly, claim 9 is rejected for the same reasons as claim 1.
As per claim 10,
Claim 10 is substantially similar to claim 1. Accordingly, claim 10 is rejected for the same reasons as claim 1.
As per claim 11,
The combined teaching of Tahmasebi Maraghoosh and Remiszewski teaches the limitations of claim 1.
Tahmasebi Maraghoosh further teaches wherein the medical data includes medical image data of a target patient, wherein the result data includes either a positive result indicating that the trained model detected a specific disease in the target patient or a negative result indicating that the trained model did not detect the specific disease in the target patient, and wherein the true/false data includes the positive result or the negative result for the specific disease determined based on a finding report of a doctor:
(Paragraphs [0075] and [0076] of Tahmasebi Maraghoosh. The teaching describes that FFNN 620 is a convolutional neural network that receives, as input, a digital image of a patient. A user with limited permissions such as a nurse or a researcher using FFNN 620 to analyze image data in an anonymous manner may provide a digital key 626 that unlocks only those portions of the convolutional neural network (sometimes referred to as “image patches”) that do not depict a patient's face. Higher level users, such as doctors caring for the patients depicted in the input data, may provide digital keys 626 that unlock other portions of the input images, such as portions depicting the patients' faces. In some embodiments, a similar process may also be performed at the output level, where, for instance, the digital keys 626 may unlock a desired level of output. For example, a nurse, researcher, or doctor classifying an image using FFNN 620 may receive output that provides a decision support appropriate for their level of expertise. A nurse or researcher may have a global output such as an indication that the patient has suspicious lung nodules. By contrast, a doctor treating the patient might receive the location(s) and risk(s) of malignancy of the individual nodules, i.e. more granular output.)
Response to Arguments
Applicant's arguments filed February 19, 2026 have been fully considered.
Applicant’s arguments pertaining to rejections made under 35 U.S.C. 101 are not persuasive.
The Applicant argues that execution of trained machine learning models on medical data and calculation of matching values cannot practically be performed in the human mind and do not constitute rules governing personal behavior.
The Examiner respectfully disagrees. The Examiner has not characterized the abstract idea as a mental process. Accordingly, arguments against that position are irrelevant, as they argue against a position that the Examiner has not taken. Additionally, calculations of matching values are certainly rules that can govern personal behavior. A human person is more than capable of performing such a calculation in the course of their personal behavior; as a hypothetical illustration, a person who reviews ten verification data sets and tallies seven agreements between a model's results and the true/false data has computed a degree of matching of 7/10, and comparing a second model's 9/10 against that 7/10 is ordinary arithmetic. The element of the trained machine learning models is merely applying the abstract idea to a computer.
The Applicant further argues that the pending claims do not merely test two models and select the more accurate one. Rather, the system selects a data set to increase a detection rate based on defined true positive, true negative, false positive, or false negative combinations before performing the comparative evaluation.
The Examiner respectfully disagrees. The selection of a data set is merely the addition of information before the comparative evaluation and is therefore considered as part of the abstract idea.
The Applicant further argues that the pending claims integrate the claimed subject matter into a practical application. They improve the technical field of machine learning-based medical diagnostic systems by enabling controlled evaluation of updated trained models in a medical environment.
The Examiner respectfully disagrees. The Applicant has failed to identify a specific technological problem that is addressed by the solution of the pending claims. The ability to control evaluation of updated trained models does nothing beyond using trained models in their ordinary capacity. The application to the medical environment does not change this because the relevant technology is machine learning models, not their application in medicine. There is no evidence to support the assertion that machine learning models are being improved by the pending claims.
Applicant’s arguments pertaining to rejections made under 35 U.S.C. 103 are not persuasive.
The Applicant argues that Remiszewski fails to teach selecting a target verification dataset in response to a user request to increase a specified detection rate because Remiszewski merely identifies classification outputs after prediction and does not disclose or suggest structured dataset selection logic tied to detection rate optimization.
The Examiner respectfully disagrees. Looking to Figure 3 of Remiszewski, the Difference Analysis determines image ROIs based on the True Positive, True Negative, False Positive, and False Negative datasets selected by the user. These datasets provide an ROI selection for the predicted image with confidence values, thereby optimizing the accuracy of the predicted image.
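The difference analysis relied upon can be summarized as the familiar confusion-matrix categorization of regions, as in the following sketch (the Examiner's reading of Figure 3; the region representation and names are assumptions, not code from Remiszewski):

```python
def classify_region(cancer_in_true_image, cancer_in_prediction):
    """Categorize one region of the comparison image by comparing the
    annotated true image with the prediction image (Fig. 3, 118-124)."""
    if cancer_in_true_image and cancer_in_prediction:
        return "true positive"
    if not cancer_in_true_image and not cancer_in_prediction:
        return "true negative"
    if cancer_in_prediction:
        return "false positive"  # predicted, but absent from the true image
    return "false negative"      # present in the true image, but missed

# Hypothetical regions: (annotated in true image, indicated in prediction)
for region in [(True, True), (False, True), (True, False), (False, False)]:
    print(classify_region(*region))
```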
The Applicant further argues that the rationale for combining Tahmasebi Maraghoosh and Remiszewski does not provide a teaching or motivation to combine.
The Examiner respectfully disagrees. Paragraph [0019] of Remiszewski teaches that the machine learning methods, particularly those relating to annotation, enhance the value and confidence of the models used to classify images. This is a direct teaching from Remiszewski that directly incentivizes the improvement of ML models such as those of Tahmasebi Maraghoosh.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD A NEWTON whose telephone number is (313)446-6604. The examiner can normally be reached M-F 8:00AM-4:00PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER H. CHOI can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAD A NEWTON/Primary Examiner, Art Unit 3681