Prosecution Insights
Last updated: April 19, 2026
Application No. 17/802,782

TECHNIQUE FOR DETERMINING AN INDICATION OF A MEDICAL CONDITION

Non-Final OA (§101, §102, §103, §112)

Filed: Aug 26, 2022
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Deepc GmbH
OA Round: 4 (Non-Final)

Grant Probability: 36% (At Risk)
OA Rounds: 4-5
To Grant: 3y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% (allowance rate difference among resolved cases with vs. without an interview)
Avg Prosecution: 3y 7m; 49 applications currently pending
Total Applications: 221, across all art units

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 172 resolved cases.

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

Status of the Claims

The status of the claims as of the response filed 12/10/2025 is as follows: Claims 1-15, 23, and 31-32 are cancelled, and all previously given rejections for these claims are considered moot. Claims 16 and 27 are currently amended. Claims 17-22, 24-26, and 28-30 are as previously presented. Claims 33-36 are new. Claims 16-22, 24-30, and 33-36 are currently pending in the application and have been considered below.

Response to Amendment

Rejection Under 35 USC 101: The claims have been amended, but the 35 USC 101 rejections are upheld.

Rejection Under 35 USC 102/103: The amendments made to the claims introduce new limitations that are not fully addressed in the previous Office action, and thus the corresponding 35 USC 102/103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.
Response to Arguments

Rejection Under 35 USC 101

On pages 11-12 of the response filed 12/10/2025, Applicant argues that "claims 16 and 27 are directed to specific, computer-implemented techniques for improving the performance of a diagnostic system, and are not 'abstract ideas.'" Applicant summarizes claim 16 as reciting "the automatic control of program flow of a computer based on model 'suitability', which is not abstract," and claim 27 as "compar[ing] an anatomical atlas image to a number of training images to correlate the training images," which "a human could not perform reliably or objectively for a large number of training images."

Applicant's arguments are fully considered, but are not persuasive. Examiner maintains that the functions of selecting at least two pre-trained models based on a property of medical data, making determinations about the suitability of the models based on a property of the medical data, and either outputting a notification responsive to no suitable models or executing the models to perform diagnosis responsive to suitable models are not an inherently technical process and can be achieved by a human actor such as a clinician managing their personal behavior and/or interactions with others. For example, a clinician could evaluate medical data in a patient test instance (e.g. a diagnostic image) for a property (e.g. imaging modality) and select appropriate diagnostic models (e.g. those that have previously been fitted or trained using a learning algorithm) based on the property (e.g. choose two or more models that are specific to MRI data when the imaging modality is MRI). The clinician could then either (1) look at the images and determine that a reliable determination of a diagnostic output is impossible (e.g. by observing that the images are of such poor quality that no useful diagnosis can be made with the selected diagnostic models) and notify a colleague that diagnosis is impossible; or (2) determine that a reliable diagnosis can be made and use the selected models to determine the indication of a medical condition (e.g. cancerous tumor) based on evaluating the model-specific outputs. Thus, Examiner maintains that claim 16 recites an abstract idea.

Similarly, Examiner maintains that the functions of correlating parts of a medical image with parts of a reference image and comparing an image value from the medical image to information from a correlated part of the reference image (obtained by matching training images to a base image from an atlas and determining image values relating to a statistical distribution function), so that an output of a diagnostic notification may be triggered, can be achieved by a human actor such as a clinician managing their personal behavior and/or interactions with others. For example, a clinician could evaluate medical images to correlate test information with reference images from an atlas by comparing image values to statistical distributions in example cases to determine an indication of a medical condition (e.g. determining that a mass with certain darkness values in a certain quadrant of an MRI scan indicates a cancerous tumor, as indicated by known statistical distributions of darkness values in reference cases) and notify a colleague or other person of the diagnostic indication. Thus, Examiner maintains that claim 27 recites an abstract idea.
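The conditional program flow at issue for claim 16 (select models by a property of the test data, then either notify that a reliable determination is impossible or aggregate the model-specific indications) can be sketched as follows. This is an illustrative sketch only; all names (`Model`, `diagnose`, the fixed `predict` score) are hypothetical and are not drawn from Applicant's specification or the cited art.

```python
# Hypothetical sketch of the claim-16 conditional flow; not Applicant's
# implementation. Each Model carries a property of its training data
# (here, imaging modality) and yields a model-specific indication.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Model:
    modality: str  # property associated with the model's training data

    def predict(self, image) -> float:
        return 0.5  # placeholder model-specific indication


def diagnose(models, image, modality):
    # Select at least two models whose training-data property matches
    # the property of the test instance.
    selected = [m for m in models if m.modality == modality]
    if len(selected) < 2:
        # No suitable models: a reliable determination is impossible,
        # so trigger a notification instead of performing diagnosis.
        return {"reliable": False,
                "notification": "Reliable determination impossible"}
    # Suitable models found: determine model-specific indications and
    # combine them into the overall indication.
    scores = [m.predict(image) for m in selected]
    return {"reliable": True, "indication": mean(scores)}
```

Under the BRI discussion below, only one branch of this `either/or` structure need execute on any given input, which is the point the contingent-limitation analysis turns on.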
On pages 12-13 of the response, Applicant argues that claim 16 "concerns a practical application because, among other things, the alternative 'or' structure… ensures that if no suitable models are found, no diagnosis is performed, and the user is informed." Applicant further asserts that "the computer-implemented method observes computational efficiency because unsuitable models are not executed" and that "the claimed approach also improves the quality of the diagnostic indication provided to an end user because unsuitable models cannot be used, thus meaning that unreliable diagnoses are not indicated to a user."

Applicant's arguments are fully considered, but are not persuasive. Examiner notes that under the BRI explained below in para. 13, the steps for preventing use of unsuitable models are not required to be performed, and thus do not provide a practical application. Further, even if such steps were required to be performed, preventing use of unsuitable diagnostic models is not a technical improvement to a technical field, and instead reflects part of an abstract diagnostic workflow. For example, a clinician might have 10 known diagnostic models or algorithms available to them, and make a determination that none are suitable for the specific patient under review based on characteristics of the patient and/or availability of certain types of data. Because the alleged improvements to efficiency and quality are part of the abstract idea itself, they cannot provide "significantly more" than the abstract idea and thus do not confer eligibility (see MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements." See also MPEP 2106.05(a)(II): "it is important to keep in mind that an improvement in the abstract idea itself… is not an improvement in technology.").
On page 13 of the response, Applicant argues that "new dependent claim 33 defines that the indication of the medical condition is a medical diagnosis, clearly a practical application of the claimed invention." Applicant's arguments are fully considered, but are not persuasive. Outputting a medical diagnosis using selected diagnostic models is not a practical application, and instead reflects part of the abstract idea itself; for example, a clinician could select and utilize diagnostic models or algorithms to output a medical diagnosis as part of an interaction with a patient, such that this is considered part of a certain method of organizing human activity.

On page 13 of the response, Applicant argues that "claim 27 also recites a practical application because the use of a base image being 'an atlas image generated based on a further plurality of images' improves regional specificity of a large number of training images," which allows for improved diagnostic accuracy. Applicant's arguments are fully considered, but are not persuasive. Examiner maintains that a clinician would be capable of comparing values of a test image to correlated portions of a reference image with information gleaned from matching to an atlas image (as explained above), such that this subject matter is considered part of the abstract idea itself. Because the alleged improvements to diagnostic accuracy are part of the abstract idea itself, they cannot provide "significantly more" than the abstract idea and thus do not confer eligibility (see MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements." See also MPEP 2106.05(a)(II): "it is important to keep in mind that an improvement in the abstract idea itself… is not an improvement in technology.").
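The atlas-based comparison disputed for claim 27 (per-region statistics built from training images matched to a base image, then a test image value compared against the region's statistical distribution) might be sketched as below. This is a minimal illustration under stated assumptions: the region names, the use of a Gaussian summary, and the z-score threshold are all hypothetical, not Applicant's method or the cited art's.

```python
# Toy sketch of comparing a test image value against a per-region
# statistical distribution derived from training images correlated
# with a base (atlas) image. All names/thresholds are hypothetical.
from statistics import mean, stdev


def build_region_stats(training_values_by_region):
    # training_values_by_region: region -> image values taken from
    # training images at parts correlated with that part of the atlas
    return {region: (mean(vals), stdev(vals))
            for region, vals in training_values_by_region.items()}


def indication(test_value, region, stats, z_threshold=2.0):
    mu, sigma = stats[region]
    z = (test_value - mu) / sigma
    # A test image value far outside the reference distribution for the
    # correlated region triggers a diagnostic notification.
    return abs(z) > z_threshold
```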
On page 13 of the response, Applicant appears to argue that claims 16 and 27 should be found patent eligible under Step 2B due to alleged deficiencies of the prior art references. Applicant's arguments are fully considered, but are not persuasive. Examiner notes that issues of patentability over the prior art are a separate consideration from the question of eligibility under 35 USC 101; MPEP 2106.05(I) states that:

Although the courts often evaluate considerations such as the conventionality of an additional element in the eligibility analysis, the search for an inventive concept should not be confused with a novelty or non-obviousness determination. See Mayo, 566 U.S. at 91, 101 USPQ2d at 1973 (rejecting "the Government's invitation to substitute §§ 102, 103, and 112 inquiries for the better established inquiry under § 101"). As made clear by the courts, the "'novelty' of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter." Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1315, 120 USPQ2d 1353, 1358 (Fed. Cir. 2016) (quoting Diamond v. Diehr, 450 U.S. at 188–89, 209 USPQ at 9).

Accordingly, whether the claims are found to be novel and/or non-obvious over the prior art has no bearing on the analysis of patent eligibility under 35 USC 101. Further, Applicant has not provided any specific arguments with respect to any alleged deficiencies of the Step 2B analysis outlined in the previous Office action, nor any evidence that the claims contain additional elements that are not well-understood, routine, and conventional. For the reasons outlined above, the 35 USC 101 rejections are upheld for claims 16-22 and 24-30.
Rejection Under 35 USC 102/103

On pages 6-7 of the response, Applicant argues that "Lee does not disclose 'determining […] a respective model-specific indication of the medical condition' and 'determining […] the indication of the medical condition', 'if it is determined that a reliable determination of the medical condition is possible' as claimed." Applicant further asserts that in Lee models may be assigned preference levels based on precision metrics of each model, which "leads a person of ordinary skill away from determining if a model is suitable, and only then using that model to obtain a model-specific indication."

Applicant's arguments are fully considered, but are not persuasive. Examiner maintains that Lee sufficiently teaches use of selected models to determine model-specific indications of a medical condition "if it is determined that a reliable determination of the medical condition is possible." Examiner notes that there is no description in the claim as presently drafted regarding how it is determined that a reliable determination of the medical condition is possible, such that the broadest reasonable interpretation of this limitation includes determining that a reliable determination is possible in any manner. Under this BRI, Examiner maintains that the identification and selection of at least one diagnostic model suitable for diagnosing the patient based on matching categorized data as in Fig. 9 & [0055] is functionally equivalent to determining that a reliable determination of the indication of the medical condition of each identified model is possible because a suitable model has been found. Accordingly, Examiner maintains that this aspect of claim 16 is anticipated by Lee.

On pages 7-8, Applicant alleges various deficiencies of Lee and Nye in combination, as well as with respect to the "determining… that a reliable determination of the indication is impossible" limitation of claim 16.
Applicant's arguments are fully considered, but are moot because the broadest reasonable interpretation of claim 16 does not require performance of the limitation at issue, and as such the combination of Lee and Nye is not relied upon in the updated prior art rejections below. Per MPEP 2111.04(II): "The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) are not met."

In the instant case, claim 16 requires either: (1) determining, using the at least one processor, that a reliable determination of the indication is impossible, if the at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models, thus triggering output of a notification on an output device, the notification informing a user that a reliable determination of the indication is impossible; or (2) wherein, if it is determined that a reliable determination of the indication of the medical condition is possible: determining, using the at least one processor, using each of the selected models, a respective model-specific indication of the medical condition based on the medical data; determining, using the at least one processor, based on the model-specific indications, the indication of the medical condition.

Accordingly, the subject matter of option (1) is not required under the BRI of the claim if the subject matter of option (2) is performed. Examiner maintains that Lee anticipates the steps of option (2), such that any arguments directed to the subject matter of option (1) are moot.

On page 8 of the response, Applicant argues that "Lee does not find 'a degree of suitability for each model of the plurality of models'" as in claim 19. Applicant's arguments are fully considered, but are not persuasive.
Examiner notes that claim 19 does not recite finding or calculating "a degree of suitability" for each model as Applicant appears to assert. Claim 19 recites "The method of claim 16, wherein the step of selecting comprises comparing, using the at least one processor, the at least one property associated with the medical data of the test instance with at least one property associated with training data used for generating an individual model, for each individual model of the plurality of models." Examiner maintains that Lee sufficiently discloses the subject matter at issue in claim 19; Fig. 9 and [0055] show that the model selection unit searches for and automatically selects the categorized diagnostic models suitable for diagnosing a new patient based on matching (i.e. comparing) extracted patient information and/or medical image data (i.e. at least one property associated with medical data of a test instance such as a diagnostic image) with categories of data used to train each category-specific model (e.g. as in [0053]).

On pages 8-9 of the response, Applicant alleges various deficiencies of Nachev with respect to the subject matter of claim 27. Applicant's arguments are fully considered, but are moot because the new ground of rejection does not rely on Nachev for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 34 and 36 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 34 recites "wherein the model-specific indication is a score indicating whether an unseen test instance is in- or out-of-distribution compared to a distribution of the training data." However, parent claims 16 and 21 refer to a model-specific indication, while claim 34 appears to introduce another model-specific indication of a medical condition that "may" be provided by at least one model. It is therefore unclear which specific instance of "model-specific indication" is being referenced by the limitation cited above, because "the model-specific indication" could conceivably be referencing one of the respective model-specific indications from claim 16, or the optional model-specific indication newly introduced by claim 34, rendering the claim indefinite. For purposes of examination, Examiner will interpret "the model-specific indication" of claim 34 as referencing the optional model-specific indication introduced earlier in claim 34.

Claim 36 recites "wherein at least one model in the plurality of models is a density model" and "the model-specific indication of the medical condition." There is insufficient antecedent basis for these limitations because parent claim 27 makes no mention of "a plurality of models" or "a model-specific indication." For purposes of examination, Examiner will interpret each of these elements as being newly introduced by claim 36.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 16-22, 24-30, and 33-36 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 16-22, 24-27, and 33-36 are directed to methods (i.e. processes), claim 28 is directed to an apparatus (i.e. a machine), and claims 29-30 are directed to a non-transitory computer program product (i.e. a manufacture). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claim 16 recites steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people. Specifically, claim 16 recites:

A computer-implemented medical data processing method for determining an indication of a medical condition, the method comprising: selecting, using at least one processor, based on at least one property associated with medical data of a test instance, at least two models out of a plurality of models, wherein each of the plurality of models is generated by a learning algorithm and configured to provide a model-specific indication of the medical condition based on the medical data, and either: determining, using the at least one processor, that a reliable determination of the indication is impossible, if the at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models, thus triggering output of a notification on an output device, the notification informing a user that a reliable determination of the indication is impossible; or wherein, if it is determined that a reliable determination of the indication of the medical condition is possible: determining, using the at least one processor, using each of the selected models, a respective model-specific indication of the medical condition based on the medical data; determining, using the at least one processor, based on the model-specific indications, the indication of the medical condition.

But for the recitation of generic computer components like a computer, a processor, and an output device, the recited functions, when considered as a whole, describe a model selection and diagnostic process that could be performed by a human actor such as a clinician managing their personal behavior and/or interactions with others. For example, a clinician could evaluate medical data in a patient test instance (e.g. a diagnostic image) for a property (e.g. imaging modality) and select appropriate diagnostic models (e.g. those that have previously been fitted or trained using a learning algorithm) based on the property (e.g. choose two or more models that are specific to MRI data when the imaging modality is MRI). The clinician could then either (1) look at the images and determine that a reliable determination of a diagnostic output is impossible (e.g. by observing that the images are of such poor quality that no useful diagnosis can be made with the selected diagnostic models) and notify a colleague that diagnosis is impossible; or (2) determine that a reliable diagnosis can be made and use the selected models to determine the indication of a medical condition (e.g. cancerous tumor) based on evaluating the model-specific outputs. Thus, the steps recited in this claim describe instructions that a human actor could follow to manage their personal behavior and/or interactions with others, and accordingly claim 16 recites an abstract idea in the form of a certain method of organizing human activity.

Independent claim 27 also recites steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people.
Specifically, claim 27 recites:

A computer-implemented medical data processing method for determining an indication of a medical condition, the method comprising: correlating, using at least one processor, parts of a medical image comprised in medical data of a test instance with parts of a reference image; and comparing, using the at least one processor, an image value of at least one part of the medical image with information associated with a correlated part of the reference image to obtain the indication of the medical condition, wherein the information has been generated by: matching, using the at least one processor, a plurality of training images to a base image to correlate parts of each of the training images with parts of the base image, wherein the base image is an atlas image generated based on a further plurality of images; determining, using the at least one processor, image values of at least one part of each of the plurality of training images correlated with a part of the base image, wherein the part of the base image is assigned to the correlated part of the reference image using a predetermined transformation; and determining, using the at least one processor, the information based on the determined image values of the at least one part of each of the plurality of training images, wherein the information is a statistical distribution function of image values of the at least one part of the plurality of training images; and triggering output of a notification of the indication on an output device.

But for the recitation of generic computer components like a computer, a processor, and an output device, the recited functions, when considered as a whole, describe a data correlation and determination process that could be performed by a human actor such as a clinician managing their personal behavior and/or interactions with others. For example, a clinician could evaluate medical images to correlate test information with reference images from an atlas by comparing image values to statistical distributions in example cases to determine an indication of a medical condition (e.g. determining that a mass with certain darkness values in a certain quadrant of an MRI scan indicates a cancerous tumor, as indicated by known statistical distributions of darkness values in reference cases) and notify a colleague or other person of the diagnostic indication. Thus, the steps recited in this claim describe instructions that a human actor could follow to manage their personal behavior and/or interactions with others, and accordingly claim 27 recites an abstract idea in the form of a certain method of organizing human activity.

Dependent claims 17-22, 24-26, 28-30, and 33-36 inherit the limitations that recite an abstract idea from their dependence on claims 16 or 27, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. In addition, claims 17-22, 24-26, and 33-36 recite additional limitations that further describe the abstract idea identified in the independent claims. Specifically, claims 17-18 specify the type of property used as a basis of selection for the models, each of which is a type of property that a clinician would be capable of evaluating as a basis for selecting diagnostic models. Claim 19 recites that the step of selecting the models includes comparing the property with a property associated with training data for each individual model, which a clinician could accomplish by choosing models based on the types of data used to initialize/train them (e.g. selecting a model specialized for women over the age of 50 when a patient has those characteristics). Claim 20 recites that the indication is determined based on at least one attribute chosen from a result of the comparing, performances of each of the models, and a degree of explainability of each model.
A clinician could determine the medical indication based on the comparison or a performance of each model by selecting and using a model that most closely matches the desired type of input data or has the best performance metrics. Claim 21 recites that at least one of the models is optionally configured to provide an anomaly detection as the indication, which is a type of model that a human actor would be capable of selecting and using. Claim 22 describes several types of indications that may be output by the model, each of which is a type of output that a clinician would be capable of gleaning from a selected diagnostic model. Claims 24-26 recite substantially similar limitations to those of claim 27, such that they are found to recite an abstract idea under a similar analysis as provided above for claim 27. Claim 33 specifies that the indication of the medical condition is a medical diagnosis, which a clinician would be capable of gleaning from analysis of a medical image and/or other patient data. Claims 34-35 specify that outputs of a model include out-of-distribution detection metrics, which a clinician could calculate by comparing the test instance to a set of data used to fit each model and determining whether the test instance is similar to the training data or an outlier (e.g. by noting that a given diagnostic model was fitted using data from patients aged 50-70, but a current test instance relates to data for a patient who is 4 years old and is thus out-of-distribution for that model). Claim 36 recites that at least one of the models is a density model configured to compare an image value of the medical image with a correlated reference image to obtain a model-specific indication of a medical condition, which a clinician could achieve by utilizing density-related image values in their reference image comparison method to output a diagnostic indication.

However, recitation of an abstract idea is not the end of the analysis.
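The out-of-distribution check described for claims 34-35 can be illustrated with a toy density model: fit a distribution to the training data, then flag a test instance whose likelihood falls below that of the training data (e.g. the 4-year-old patient against a model fitted on ages 50-70). This is a minimal sketch assuming a univariate Gaussian density; the function names, the quantile cutoff, and the Gaussian choice are all hypothetical illustrations, not the claimed method.

```python
# Toy univariate-Gaussian OOD score: a test instance is flagged when
# its log-density falls below a low quantile of the training data's
# own log-densities. All names/choices here are hypothetical.
import math
from statistics import mean, stdev


def fit_density(train):
    return mean(train), stdev(train)


def log_density(x, mu, sigma):
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))


def is_out_of_distribution(x, train, quantile=0.05):
    mu, sigma = fit_density(train)
    train_ll = sorted(log_density(t, mu, sigma) for t in train)
    cutoff = train_ll[int(quantile * len(train_ll))]
    # Below the low-quantile cutoff => out-of-distribution.
    return log_density(x, mu, sigma) < cutoff
```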
Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be "directed to" an abstract idea.

Step 2A – Prong 2

The judicial exception is not integrated into a practical application. In particular, independent claims 16 and 27 do not include additional elements that integrate the abstract idea into a practical application. The additional elements of claims 16 and 27 include that each method is computer-implemented, use of at least one processor to achieve the various steps, and outputting data on an output device. These additional elements, when considered in the context of each claim as a whole, merely serve to automate operations that could occur as a certain method of organizing human activity (as described above), and thus amount to implementation of an abstract idea using generic computer components (see MPEP 2106.05(f)). For example, a clinician would be capable of making model selections, comparing and correlating data, and making medical determinations based on test data. Use of a computer or processor to achieve these functions then amounts to the words "apply it," such that the otherwise-abstract steps are merely digitized and/or automated using generic computing components and do not provide integration into a practical application. Similarly, a clinician would be capable of outputting a notification to a colleague or other human actor as indicated above, and use of an output device as the medium for outputting information amounts to instructions to "apply" the otherwise-abstract step of sharing data between entities in a digital environment.

The judicial exception recited in dependent claims 17-22, 24-26, 28-30, and 33-36 is also not integrated into a practical application under a similar analysis as above.
The functions of claims 17-20, 22, 24-26, and 33-36 are performed with the same additional elements introduced in the independent claims, without introducing any new additional elements of their own, and accordingly also amount to mere instructions to apply the abstract idea using these same additional elements. Claim 21 specifies that at least one of the models is generated by an unsupervised learning algorithm using unlabeled training data of healthy patients, which merely indicates that a high-level type of machine learning algorithm is utilized for at least one of the models. Such a high-level indication of machine learning merely serves to digitize and/or automate the otherwise-abstract diagnostic model, such that it also amounts to the words "apply it" with a computer and does not provide integration into a practical application. Claims 28-30 recite additional high-level computer components such as at least one processor and at least one memory, a non-transitory computer program product, and one or more computer readable recording media for implementing the steps of claim 16, which also amounts to the words "apply it" with a computer, as explained for similar high-level computer components recited in the independent claims.

Accordingly, the additional elements of claims 16-22, 24-30, and 33-36 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 16-22, 24-30, and 33-36 are directed to an abstract idea.

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer, at least one processor, and other high-level computing components like a memory and output device for performing the selecting, determining, comparing, matching, correlating, triggering, etc.
steps of the invention amount to mere instructions to apply the exception using generic computer components. As evidence of the generic nature of the above recited additional elements, Examiner notes Pg 17 L23 – Pg 18 L7 of Applicant’s specification, where the exemplary computing system is disclosed at a high level in accordance with general-purpose processors, interfaces, and other computing elements known in the art. This disclosure does not indicate that the elements of the invention are particular machines and instead provides generic examples of computer hardware, such that one of ordinary skill in the art would understand that any generic computer processor, memory, and interface could be used to implement the invention. Further, the use of unsupervised machine learning as in claim 21 does not appear to be an inventive concept, as evidenced by the high-level explanation of known unsupervised learning techniques on Pg 9 L21 – Pg 10 L10 of Applicant’s specification. Examiner also notes that it is well-understood, routine, and conventional to utilize unsupervised machine learning for the purpose of medical diagnosis and anomaly detection, as evidenced by at least para. [0116] of Nachev (US 20130304710 A1); para. [0032] of Nye et al. (US 20190150857 A1); and para. [0051] of Zhang et al. (US 20210042916 A1). Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the computer implementation, unsupervised learning model, and output device in combination is to digitize and/or automate medical model selection and comparative diagnostic determination operations that could otherwise be achieved as a certain method of organizing human activity. Thus, when considered as a whole and in combination, claims 16-22, 24-30, and 33-36 are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 16-20, 22, 28-30, and 33 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (US 20140101080 A1).

Claim 16

Lee teaches a computer-implemented medical data processing method for determining an indication of a medical condition (Lee abstract, [0112], noting computerized methods of diagnosing a lesion in a medical image, i.e. determining an indication of a medical condition), the method comprising: selecting, using at least one processor, based on at least one property associated with medical data of a test instance, at least two models out of a plurality of models, wherein each of the plurality of models is generated by a learning algorithm and configured to provide a model-specific indication of the medical condition based on the medical data (Lee Fig. 9, [0055], noting model selection unit searches for and automatically selects one or more (e.g. at least two as shown in the output of Fig. 7) categorized diagnostic models suitable for diagnosing a new patient based on extracted patient information and/or medical image data (i.e.
at least one property associated with medical data of a test instance such as a diagnostic image) of the new patient; each diagnostic model is generated via machine learning and configured to provide a model-specific indication of a lesion as noted in [0053] & [0056]), and either: determining, using the at least one processor, that a reliable determination of the indication is impossible, if the at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models, thus triggering output of a notification on an output device, the notification informing a user that a reliable determination of the indication is impossible; or wherein, if it is determined that a reliable determination of the indication of the medical condition is possible (Lee Fig. 9, [0055], noting model selection unit automatically selects one or more categorized diagnostic models suitable for diagnosing a patient based on the input dataset; the identification and selection of at least one diagnostic model suitable for diagnosing the patient based on matching categorized data is considered equivalent to determining that a reliable determination of the indication of the medical condition of each identified model is possible because a suitable model has been found): determining, using the at least one processor, using each of the selected models, a respective model-specific indication of the medical condition based on the medical data (Lee Figs. 7 & 9, [0056], noting each selected model performs a diagnosis of a lesion in a patient image); and determining, using the at least one processor, based on the model-specific indications, the indication of the medical condition (Lee Figs. 7 & 9, [0057], noting the system determines an integrated diagnosis result based on the model-specific indications). 
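For orientation, the selection-and-combination flow of claim 16 as mapped above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from Lee or from the application; every name in it (Model, determine_indication, the "ct" property string, the averaging step) is a hypothetical stand-in.

```python
# Illustrative sketch of the claimed flow: select models whose training
# categories match a property of the test instance, notify the user when no
# suitable pair of models exists, and otherwise combine the model-specific
# indications into an overall indication. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    suited_properties: set        # e.g. imaging device info, demographics
    predict: Callable             # returns a model-specific indication

def determine_indication(models, test_property, medical_data):
    # Select models trained on categories matching the test-instance property.
    selected = [m for m in models if test_property in m.suited_properties]
    if len(selected) < 2:
        # Contingent branch: a reliable determination is impossible.
        return {"notification": "reliable determination impossible"}
    # Combine the model-specific indications (here, a simple average).
    scores = [m.predict(medical_data) for m in selected]
    return {"indication": sum(scores) / len(scores)}

# Toy demonstration with two models suited to CT data.
models = [Model({"ct"}, lambda d: 0.2), Model({"ct"}, lambda d: 0.8)]
```

The `len(selected) < 2` branch corresponds to the contingent limitation; under the mapping above, Lee always finds suitable models, so that branch is never reached.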
Note: claim 16 includes the contingent limitation “determining, using the at least one processor, that a reliable determination of the indication is impossible, if the at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models, thus triggering output of a notification on an output device, the notification informing a user that a reliable determination of the indication is impossible.” Per MPEP 2111.04(II): “The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) are not met.” Accordingly, the limitation cited above need not be taught by the prior art if the condition “if the at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models” is not met. In the instant case, Lee does not appear to make a determination that at least one property associated with the medical data of the test instance does not indicate suitability of the at least two models, such that the contingent limitation is not met and no further art is required under the broadest reasonable interpretation of the claim.

Claim 17

Lee teaches the method of claim 16, and further teaches wherein the at least one property associated with the medical data comprises a feature of a medical image comprised in the medical data (Lee [0052], [0055], noting the categories of data used as a basis for training different models and selecting a suitable model for a patient include a feature of a medical image such as imaging device information).
Claim 18

Lee teaches the method of claim 16, and further teaches wherein the at least one property associated with the medical data comprises a characteristic of a patient to which the medical data relates (Lee [0052], [0055], noting the categories of data used as a basis for training different models and selecting a suitable model for a patient include a feature of patient data such as clinical information, demographics, etc.).

Claim 19

Lee teaches the method of claim 16, and further teaches wherein the step of selecting comprises comparing, using the at least one processor, the at least one property associated with the medical data of the test instance with at least one property associated with training data used for generating an individual model, for each individual model of the plurality of models (Lee Fig. 9, [0055], noting model selection unit searches for and automatically selects the categorized diagnostic models suitable for diagnosing a new patient based on matching (i.e. comparing) extracted patient information and/or medical image data (i.e. at least one property associated with medical data of a test instance such as a diagnostic image) with categories of data used to train each category-specific model (e.g. as in [0053])).

Claim 20

Lee teaches the method of claim 19, and further teaches wherein the indication is determined further, using the at least one processor, based on at least one attribute chosen from a result of the comparing, empirical performances of each of the plurality of models and a degree of explainability of each of the plurality of models (Lee [0057], noting the integrated diagnosis result is determined at least based on the results of each selected categorized model (i.e. based on a result of the comparing step of claim 19 indicating that such a model should be selected for diagnosis), and/or based on preference data for each model that may indicate high or low precision of a model (i.e.
empirical performances of each of the models)).

Claim 22

Lee teaches the method of claim 16, and further teaches wherein the model-specific indication of the medical condition and/or the indication of the medical condition comprises at least one result chosen from probabilities of an anomaly for different parts of a medical image comprised in the medical data and a numerical value describing a probability of an anomaly of the overall medical data, wherein the numerical value is optionally derived from the probabilities of the anomaly for the different parts of the medical image (Lee Fig. 7, [0057], noting the integrated diagnosis result (i.e. indication of the medical condition) includes a numerical probability of the lesion in an image being benign or malignant, i.e. an anomaly).

Claim 28

Lee teaches an apparatus comprising at least one processor and at least one memory, the at least one memory containing instructions executable by the at least one processor such that the apparatus is operable to perform the method of claim 16 (Lee [0112]-[0113]).

Claim 29

Lee teaches a non-transitory computer program product comprising program code portions for performing the method of claim 16 when the computer program product is executed on one or more processors (Lee [0112]-[0113]).

Claim 30

Lee teaches the non-transitory computer program product of claim 29, stored on one or more computer readable recording media (Lee [0112]-[0113]).

Claim 33

Lee teaches the computer-implemented medical data processing method of claim 16, and further teaches wherein the indication of the medical condition is a medical diagnosis (Lee [0056], noting the models may diagnose a lesion of a patient as malignant or benign).

Claims 27 and 36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Poole (US 20130044927 A1).
Claim 27 Poole teaches a computer-implemented medical data processing method for determining an indication of a medical condition (Poole abstract, [0015]-[0016], noting a processor-based computer system for detecting the presence of abnormalities in medical images, i.e. determining an indication of a medical condition), the method comprising: correlating, using at least one processor, parts of a medical image comprised in medical data of a test instance with parts of a reference image (Poole [0021], [0062]-[0063], noting a patient image data set is registered to a statistical atlas, i.e. parts of a medical image are correlated with parts of a reference image); and comparing, using the at least one processor, an image value of at least one part of the medical image with information associated with a correlated part of the reference image to obtain the indication of the medical condition (Poole [0021], [0064]-[0069], [0073], noting values of each voxel of the registered patient image and atlas image are compared to detect abnormalities, i.e. an image value of at least one part of the medical image is compared with information associated with a correlated part of the reference image to obtain an indication of a medical condition), wherein the information has been generated by: matching, using the at least one processor, a plurality of training images to a base image to correlate parts of each of the training images with parts of the base image, wherein the base image is an atlas image generated based on a further plurality of images (Poole [0038]-[0047], noting training images are iteratively aligned with a statistical atlas (i.e. 
base image) that has been generated from previous rounds of iterative alignment); determining, using the at least one processor, image values of at least one part of each of the plurality of training images correlated with a part of the base image (Poole [0029]-[0039], noting the method maintains a mean vector and covariance matrix for each voxel of the registered training images representing texture values of each aligned voxel in the correlated training images), wherein the part of the base image is assigned to the correlated part of the reference image using a predetermined transformation (Poole [0038]-[0039], noting the training images are registered (i.e. correlated) to the current iteration of the statistical atlas (i.e. the base image) using a predetermined transformation Tk); determining, using the at least one processor, the information based on the determined image values of the at least one part of each of the plurality of training images, wherein the information is a statistical distribution function of image values of the at least one part of the plurality of training images (Poole [0025], [0037], [0058], noting the method obtains mean and inverted covariance matrices representing statistical distributions (e.g. multivariate Gaussian models) of texture values of each aligned voxel in the correlated training images); and triggering output of a notification of the indication on an output device (Poole [0021], [0075], noting locations of detected abnormalities are displayed to a user (e.g. on the display device of [0015]) via highlighting or color-coding, considered equivalent to triggering output of a notification of the indication to an output device because the user is notified of the detected abnormalities at a display device). 
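The per-voxel statistics attributed to Poole above can be illustrated with a deliberately simplified sketch: scalar mean and deviation per voxel instead of Poole's mean vectors and inverted covariance matrices, with registration assumed to have already been performed. The function names and the z-score threshold are hypothetical, not drawn from Poole.

```python
# Simplified, illustrative sketch of atlas-based abnormality detection:
# per-voxel mean and deviation are accumulated over registered training
# images, and a registered test image is flagged wherever its value is
# improbable under the per-voxel distribution.
from statistics import mean, pstdev

def build_atlas_stats(training_images):
    # training_images: equal-length lists of voxel values, already registered.
    voxels = list(zip(*training_images))
    means = [mean(v) for v in voxels]
    stds = [pstdev(v) + 1e-8 for v in voxels]   # avoid division by zero
    return means, stds

def detect_abnormalities(test_image, means, stds, threshold=3.0):
    # Per-voxel z-score against the atlas; True marks improbable voxels.
    return [abs(x - m) / s > threshold
            for x, m, s in zip(test_image, means, stds)]

# Toy demonstration with three registered 3-voxel "images".
train = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [0.9, 1.9, 3.1]]
means, stds = build_atlas_stats(train)
mask = detect_abnormalities([1.0, 2.0, 9.0], means, stds)
```

A voxel of the registered test image is flagged wherever its value falls far outside the training distribution, which parallels, at toy scale, the abnormality detection the rejection maps to Poole.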
Claim 36

Poole teaches the computer-implemented medical data processing method of claim 27, and further teaches wherein the at least one model in the plurality of models is a density model configured to compare an image value of at least one part of the medical image comprised in or represented by the medical data with information associated with a correlated part of the reference image to obtain the model-specific indication of the medical condition (Poole [0021], [0064]-[0069], [0073], noting abnormality detection module compares values of each voxel of the registered patient image and atlas image to detect abnormalities (i.e. indications of a medical condition); the abnormality detection module is thus considered equivalent to a density model as disclosed on Pg 12 of Applicant’s specification and having the configuration recited in this claim because it performs equivalent functions).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lee as applied to claim 16 above, and further in view of Zhang et al. (US 20210042916 A1).

Claim 21

Note: claim 21 includes the optional limitation “optionally, configured to provide an anomaly detection as the model-specific indication of the medical condition.” Per MPEP 2111.04(I): “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” Accordingly, the optional limitation cited above need not be taught by the prior art under the broadest reasonable interpretation of claim 21. However, in the interest of compact prosecution, this limitation is addressed with prior art below.

Lee teaches the method of claim 16, and further teaches wherein at least one of the models comprised in the plurality of models is generated, using the at least one processor, by an (Lee [0066], [0092], noting a variety of learning algorithms may be used to generate the categorized models using training data of both malignant (i.e. diseased) and benign (i.e. healthy) patients; the models are trained to distinguish between malignant and benign lesions, which is considered equivalent to providing anomaly detection because malignant lesions are an anomalous medical indication).
In summary, Lee teaches that “nearly all types of machine learning algorithms” may be utilized to train the diagnostic models (see [0066]), but fails to explicitly disclose an unsupervised learning algorithm using unlabeled training data. However, Zhang teaches use of an unsupervised learning algorithm using unlabeled training data to train diagnostic models (Zhang [0051]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the learning methods of Lee to include unsupervised learning via unlabeled training data as in Zhang because Lee already contemplates “nearly all types of machine learning algorithms” and Zhang notes that “unsupervised learning is useful for identifying the features that are most useful for classifying raw data into separate cohorts” (see [0051]).

Claims 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Lee and Zhang as applied to claim 21 above, and further in view of Maier-Hein et al. (US 20220008157 A1).

Claim 34

Note: claim 34 includes the optional limitation “wherein the at least one model may provide a model-specific indication of a medical condition” (emphasis added). Per MPEP 2111.04(I): “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” Accordingly, the optional limitation cited above, as well as the remaining limitation of the claim further limiting the optional limitation, need not be taught by the prior art under the broadest reasonable interpretation of the claim. However, in the interest of compact prosecution, these limitations are addressed with prior art below.
Lee in view of Zhang teaches the computer-implemented medical data processing method of claim 21, and the combination further teaches wherein the at least one model may provide a model-specific indication of a medical condition (Lee Figs. 7 & 9, [0056], noting each selected model performs a diagnosis of a lesion in a patient image). However, the present combination fails to explicitly disclose wherein the model-specific indication is a score indicating whether an unseen test instance is in- or out-of-distribution compared to a distribution of the training data. Maier-Hein teaches a method for evaluating medical images with selected image analysis models that includes using a model to provide a score indicating whether an unseen test instance is in- or out-of-distribution compared to a distribution of the training data (Maier-Hein [0078]-[0080], [0355]-[0357]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the image analysis methods of the combination to include use of a model providing an out-of-distribution detection score as in Maier-Hein in order to provide information about the closeness of a new image to the set of training data used to train an analysis model so that an algorithm’s accuracy can be confirmed and any spurious results that would not be meaningful can be avoided (as suggested by Maier-Hein [0079] & [0355]-[0357]).

Claim 35

Note: claim 35 further limits the optional limitation from parent claim 21 regarding providing an anomaly detection as the model-specific indication of a medical condition. Per MPEP 2111.04(I): “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” Accordingly, all of claim 35 need not be taught by the prior art under the broadest reasonable interpretation of the claim.
However, in the interest of compact prosecution, this claim is addressed with prior art below.

Lee in view of Zhang teaches the computer-implemented medical data processing method of claim 21, but the combination fails to explicitly disclose wherein the anomaly detection corresponds to an out-of-distribution detection. However, Maier-Hein teaches a method for evaluating medical images with selected image analysis models that includes using a model to provide anomaly detection such as out-of-distribution detection (Maier-Hein [0078]-[0080], [0355]-[0357]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the image analysis methods of the combination to include use of a model providing out-of-distribution detection as in Maier-Hein in order to provide information about the closeness of a new image to the set of training data used to train an analysis model so that an algorithm’s accuracy can be confirmed and any spurious results that would not be meaningful can be avoided (as suggested by Maier-Hein [0079] & [0355]-[0357]).

Claims 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Lee as applied to claim 16 above, and further in view of Poole.

Claim 24

Lee teaches the method of claim 16, and further shows use of quantitative feature values extracted from images to obtain the model-specific indications (Lee [0092]).
However, the present combination does not teach correlation to reference images and comparison of the extracted quantitative feature values to information from the reference image, and thus fails to explicitly disclose: wherein at least one of the models comprised in the plurality of models correlates parts of a medical image comprised in the medical data with parts of a reference image, and compares an image value of at least one part of the medical image with information associated with a correlated part of the reference image to obtain the model-specific indication of the medical condition, wherein the information has been generated by: matching a plurality of training images to a base image to correlate parts of each of the training images with parts of the base image; determining image values of at least one part of each of the plurality of training images correlated with a part of the base image, wherein the part of the base image is assigned to the correlated part of the reference image using a predetermined transformation; and determining the information based on the determined image values of the at least one part of each of the plurality of training images. However, Poole teaches an analogous medical image diagnostic method in which: a diagnostic model correlates parts of a medical image with parts of a reference image and compares an image value of at least one part of the medical image with information associated with a correlated part of the reference image to obtain an indication of a medical condition (Poole [0021], [0062]-[0069], noting a patient image data set is registered (i.e. correlated) to a statistical atlas and values of each voxel (i.e. part) of the registered patient image and atlas image are compared to detect abnormalities (i.e. 
an indication of a medical condition)), wherein the information has been generated by: matching a plurality of training images to a base image to correlate parts of each of the training images with parts of the base image (Poole [0038]-[0047], noting training images are iteratively aligned with a statistical atlas (i.e. base image) that has been generated from previous rounds of iterative alignment); determining image values of at least one part of each of the plurality of training images correlated with a part of the base image (Poole [0029]-[0039], noting the method maintains a mean vector and covariance matrix for each voxel of the registered training images representing texture values of each aligned voxel in the correlated training images), wherein the part of the base image is assigned to the correlated part of the reference image using a predetermined transformation (Poole [0038]-[0039], noting the training images are registered (i.e. correlated) to the current iteration of the statistical atlas (i.e. the base image) using a predetermined transformation Tk); and determining the information based on the determined image values of the at least one part of each of the plurality of training images (Poole [0025], [0037], [0058], noting the method obtains mean and inverted covariance matrices representing statistical distributions (e.g. multivariate Gaussian models) of texture values of each aligned voxel in the correlated training images). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the diagnostic image analysis method of Lee to include a specific method based on alignment and comparison with a generated atlas as in Poole in order to utilize widely practiced diagnosis methods that enable direct comparisons to be performed between image data obtained from different subjects so that more efficient identification and highlighting of abnormal regions of interest may be facilitated (as suggested by Poole [0002] & [0004]).

Claim 25

Lee in view of Poole teaches the method of claim 24, and the combination further teaches wherein the information comprises or is a statistical distribution function of image values of the at least one part of the plurality of training images (Poole [0025], [0037], [0058], noting the atlas data includes mean and inverted covariance matrices representing statistical distributions (e.g. multivariate Gaussian models) of texture values of each aligned voxel in the correlated training images).

Claim 26

Lee in view of Poole teaches the method of claim 24, and the combination further teaches wherein the information comprises an average image value of the at least one part of all of the plurality of training images and, optionally, a mean deviation of the image values of the at least one part of all of the plurality of training images from the average image value (Poole [0025], [0037], [0058], noting the atlas data includes mean (i.e. average) and inverted covariance matrices representing texture values of each aligned voxel in the correlated training images).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nenoki et al. (US 20200279652 A1) and Baumann (US 20110275908 A1) describe systems for selecting appropriate/suitable diagnosis support algorithms based on properties of the input dataset.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAREN A HRANEK whose telephone number is (571)272-1679. The examiner can normally be reached M-F 8:00-4:00 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached on 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAREN A HRANEK/
Primary Examiner, Art Unit 3684

Prosecution Timeline

Aug 26, 2022
Application Filed
Jun 14, 2024
Non-Final Rejection — §101, §102, §103
Dec 23, 2024
Response Filed
Feb 18, 2025
Non-Final Rejection — §101, §102, §103
May 21, 2025
Response Filed
Jun 11, 2025
Final Rejection — §101, §102, §103
Dec 10, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

4-5
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
