Prosecution Insights
Last updated: April 19, 2026
Application No. 18/629,023

COMPUTER-AIDED DIAGNOSIS SYSTEM FOR PULMONARY NODULE ANALYSIS USING PCCT IMAGES

Non-Final OA: §101, §103, §112
Filed: Apr 08, 2024
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Siemens Healthineers AG
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved), +13.3% vs TC avg (above average)
Interview Lift: +18.1% (allowance rate of resolved cases with an interview vs. without)
Avg Prosecution: 3y 5m (46 applications currently pending)
Total Applications: 123 (career total, across all art units)
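The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the function name and the back-derived Tech Center baseline are illustrative assumptions, not from any USPTO data source):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Figures from this report: 58 granted out of 77 resolved cases.
rate = allowance_rate(58, 77)
print(f"Career allow rate: {rate:.1f}%")  # displayed as 75% above

# The reported +13.3% vs-TC delta implies a Tech Center average near 62%.
implied_tc_avg = rate - 13.3
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The interview lift figure is the analogous gap: allowance rate among resolved cases that had an examiner interview minus the rate among those that did not, reported here as +18.1 percentage points.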

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 77 resolved cases.
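The vs-TC deltas in the table are plain percentage-point differences. A small sketch reproducing them (the 40.0% baseline is back-derived from the report's own deltas, which all imply the same Tech Center average; treat it as an assumption, not official data):

```python
# Examiner's statute-specific rejection frequencies (% of resolved cases),
# taken from the table above.
examiner_rate = {"101": 17.8, "103": 62.8, "102": 2.4, "112": 13.8}
tc_average = 40.0  # baseline implied by every reported delta

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```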

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDSs dated 8 April 2024 and 12 August 2025 have been considered and placed in the application file.

1st Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claims 6, 7, 17 and 18 recite “at least one of.” Since “at least one of” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

2nd Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f): (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in this Office action. Such claim limitations are: “means for determining image acquisition parameters” in claim 10; “means for receiving one or more PCCT images” in claim 10; “means for performing one or more medical imaging analysis tasks” in claim 10; “means for outputting results” in claim 10; “means for acquiring a plurality of candidate PCCT images” in claim 11; “means for presenting the plurality of candidate PCCT images to a user” in claim 11; “means for receiving input” in claim 11; “means for determining the image acquisition parameters” in claim 11; “means for acquiring a plurality of candidate PCCT images” in claim 12; “means for determining the image acquisition parameters” in claim 12; and “means for determining the image acquisition parameters” in claim 13.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
§ 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 11 and 16 are rejected under 35 U.S.C. § 112(b) as being indefinite for claiming both an apparatus and a process of using the apparatus. When both an apparatus and a method are claimed in the same claim, it is unclear whether infringement occurs when the apparatus is constructed or when the apparatus is used. Therefore, the scope of the claim is indefinite. See MPEP 2173.05(p).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process using images/drawings (a concept performed in the human mind, including an observation, evaluation, judgment, opinion, or prediction) and mathematical calculations for likelihood/probability (e.g., P(A) = f / N, where P(A) is the probability of an event A occurring, f is the number of ways the event can occur (frequency), and N is the total number of possible outcomes). This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations to be considered specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally and no additional features in the claims would preclude them from being performed as such.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claims 1 and 15 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories? YES. Claim 1 is directed to a method, i.e., a process, and claim 15 is directed to a computer-readable storage medium, i.e., a machine.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The method in claim 1, for example, comprises a mental process that can be practicably performed in the human mind and is, therefore, an abstract idea. Claim 1 recites: determining image acquisition parameters… receiving one or more PCCT images… performing one or more medical imaging analysis tasks… outputting results… These limitations, as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v.
Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("‘[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

As such, a person could present image(s)/drawing(s) and analyze the images/drawings, and output results. The mere nominal recitation that the various steps are being executed by a processor (e.g., processing unit)/machine learning based model does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mental step which could be performed with a simple tool such as a pen and paper, then it falls within the “mental steps” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Thus, Claims 1-9 and 15-20 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Thus, since claims 1 and 15: (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1, 10 and 15 are not eligible subject matter under 35 U.S.C. 101. A similar analysis applies to dependent claims 2-9, 11-14 and 16-20, and the dependent claims are similarly identified as: being directed towards an abstract idea, not reciting additional elements that integrate the judicial exception into a practical application, and not reciting additional elements that amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0122073 A1 (Ozdemir et al.).

Claim 1

[Figure: Ozdemir et al. Fig. 14, showing the results of a pulmonary CT that does medical image analysis.]

Regarding Claim 1, Ozdemir et al. teach a computer-implemented method ("a system and method for detecting and/or characterizing a property of interest in a multi-dimensional space," paragraph [0009]) comprising: determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images ("the medium can be energy emitted by a (e.g.) Positron Emission Tomography (PET) scan tracer particle, or photons emitted due to optically or electrically excited molecules, as occurs in Raman spectroscopy. All forms of electromagnetic, particle and/or photonic energy can characterize the medium measured herein," paragraph [0031] and "iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]); receiving one or more PCCT images of an anatomical object of a patient acquired using the PCCT image acquisition device configured with the image acquisition parameters ("the acquired data is medical image data, including at least one of CT scan images, MRI images, or targeted contrast ultrasound images of human tissue," paragraph [0011]); performing one or more medical imaging analysis tasks analyzing the anatomical object based on the one or more PCCT images using one or more machine learning based models ("The image data can be preprocessed as appropriate to include edge information, blobs, etc.
(for example based on image analysis conducted using appropriate, commercially available machine vision tools)," paragraph [0032], where machine vision is machine learning); and outputting results of the one or more medical imaging analysis tasks ("A GUI process(or) 156 organizes and displays (or otherwise presents (e.g. for storage)) the analyzed data results in a graphical and/or textual format for a user to employ in performing a related task," paragraph [0033]).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Ozdemir et al. explicitly motivate doing so at least in paragraphs [0008], [0031] and [0068], including “Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments.” and otherwise motivating experimentation and optimization.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 10 and computer readable media claim 15, while noting that the rejection above cites to both device and method disclosures. Claims 10 and 15 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 2

Regarding claim 2, Ozdemir et al.
teach the computer-implemented method of claim 1, wherein determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: acquiring a plurality of candidate PCCT images using varying image acquisition parameters ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013], where the first set of images are the candidate images); presenting the plurality of candidate PCCT images to a user ("Bayesian technique not only improves the final detection performance, but it also provides a model confidence level to the end user (see FIGS. 13 and 14, by way of example)," paragraph [0054], where model confidence teaches a plurality of candidates with different confidences); receiving input from the user selecting one of the plurality of candidate PCCT images ("CAD systems process digital images for typical appearances and to highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the practitioner," paragraph [0004]); and determining the image acquisition parameters as parameters corresponding to the selected candidate PCCT image ("iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]).

Claim 3

Regarding claim 3, Ozdemir et al.
teach the computer-implemented method of claim 1, wherein determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: acquiring a plurality of candidate PCCT images using varying image acquisition parameters ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013], where the first set of images are the candidate images); identifying one of the plurality of candidate PCCT images as having a highest analytical accuracy for performing the one or more medical imaging analysis tasks using the one or more machine learning based models ("It should be clear that such an automated system generally improves accuracy of diagnosis, and speeds/improves both treatment decisions and outcomes," paragraph [0058], where improving accuracy includes identifying the higher accuracy); and determining the image acquisition parameters as parameters corresponding to the identified candidate PCCT image ("iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]).

Claim 4

Regarding claim 4, Ozdemir et al.
teach the computer-implemented method of claim 1, wherein determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: determining the image acquisition parameters of the PCCT image acquisition device for acquiring PCCT images optimized for performing the one or more medical imaging analysis tasks ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013], where the first set of images are the candidate images).

Claim 5

Regarding claim 5, Ozdemir et al. teach the computer-implemented method of claim 1, wherein the image acquisition parameters comprise a number of energy bands and associated energy thresholds ("Early detection of pulmonary nodules is crucial for early diagnosis of lung cancer. CADe of pulmonary nodules using low-dose computed tomography (CT)," paragraph [0006], where low dose teaches energy bands and thresholds).

Claim 6

Regarding claim 6, Ozdemir et al. teach the computer-implemented method of claim 1, wherein the image acquisition parameters comprise at least one of reconstructed image spacing, slice thickness, reconstruction kernels, or dose ("iteratively adjusting one or more image acquisition parameter(s) (e.g. camera focus, exposure time, X-ray/RADAR/SONAR/LIDAR power level, frame rate, etc.) in a manner that optimizes/enhances the confidence level associated with detection of the property of interest," paragraph [0013]).

Claim 7

Regarding claim 7, Ozdemir et al.
teach the computer-implemented method of claim 1, wherein the one or more medical imaging analysis tasks comprise at least one of detection, segmentation, size quantification, typology classification, or malignancy assessment of the anatomical object of the patient ("preprocessing for reduction of artifacts, image noise reduction, leveling (harmonization) of image quality (increased contrast) for clearing the image parameters (e.g. different exposure settings), and filtering; (b) segmentation for differentiation of different structures in the image (e.g. heart, lung, ribcage, blood vessels, possible round lesions, matching with anatomic database, and sample gray-values in volume of interest); (c) structure/ROI (Region of Interest) analysis, in which a detected region is analyzed individually for special characteristics, which can include compactness, form, size and location, reference to close-by structures/ROIs, average grey level value analysis within the ROI, and proportion of grey levels to the border of the structure inside the ROI," paragraph [0005]).

Claim 8

Regarding claim 8, Ozdemir et al. teach the computer-implemented method of claim 1, wherein the one or more machine learning based models are trained using annotated PCCT training images ("Lung Image Database Consortium image collection (LIDC-IDRI), which consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions," paragraph [0051]).

Claim 9

Regarding claim 9, Ozdemir et al. teach the computer-implemented method of claim 1, wherein the anatomical object comprises a pulmonary nodule of the patient ("Lung Image Database Consortium image collection (LIDC-IDRI), which consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions," paragraph [0051]).

Claim 10

Regarding claim 10, Ozdemir et al.
teach an apparatus ("a system and method for detecting and/or characterizing a property of interest in a multi-dimensional space," paragraph [0009]) comprising: means for determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images ("the medium can be energy emitted by a (e.g.) Positron Emission Tomography (PET) scan tracer particle, or photons emitted due to optically or electrically excited molecules, as occurs in Raman spectroscopy. All forms of electromagnetic, particle and/or photonic energy can characterize the medium measured herein," paragraph [0031] and "iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]); means for receiving one or more PCCT images of an anatomical object of a patient acquired using the PCCT image acquisition device configured with the image acquisition parameters ("the acquired data is medical image data, including at least one of CT scan images, MRI images, or targeted contrast ultrasound images of human tissue," paragraph [0011]); means for performing one or more medical imaging analysis tasks analyzing the anatomical object based on the one or more PCCT images using one or more machine learning based models ("The image data can be preprocessed as appropriate to include edge information, blobs, etc. (for example based on image analysis conducted using appropriate, commercially available machine vision tools)," paragraph [0032], where machine vision is machine learning); and means for outputting results of the one or more medical imaging analysis tasks ("A GUI process(or) 156 organizes and displays (or otherwise presents (e.g. for storage)) the analyzed data results in a graphical and/or textual format for a user to employ in performing a related task," paragraph [0033]).
Claim 11

Regarding claim 11, Ozdemir et al. teach the apparatus of claim 10, wherein the means for determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: means for acquiring a plurality of candidate PCCT images using varying image acquisition parameters ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013], where the first set of images are the candidate images); means for presenting the plurality of candidate PCCT images to a user ("Bayesian technique not only improves the final detection performance, but it also provides a model confidence level to the end user (see FIGS. 13 and 14, by way of example)," paragraph [0054], where model confidence teaches a plurality of candidates with different confidences); means for receiving input from the user selecting one of the plurality of candidate PCCT images ("Bayesian technique not only improves the final detection performance, but it also provides a model confidence level to the end user (see FIGS. 13 and 14, by way of example)," paragraph [0054], where model confidence teaches a plurality of candidates with different confidences); and means for determining the image acquisition parameters as parameters corresponding to the selected candidate PCCT image ("iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]).

Claim 12

Regarding claim 12, Ozdemir et al.
teach the apparatus of claim 10, wherein the means for determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: means for acquiring a plurality of candidate PCCT images using varying image acquisition parameters ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013] where the first set of images are the candidate images); means for identifying one of the plurality of candidate PCCT images as having a highest analytical accuracy for performing the one or more medical imaging analysis tasks using the one or more machine learning based models ("It should be clear that such an automated system generally improves accuracy of diagnosis, and speeds/improves both treatment decisions and outcomes," paragraph [0058] where improving accuracy includes identifying the higher accuracy); and means for determining the image acquisition parameters as parameters corresponding to the identified candidate PCCT image ("iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]). Claim 13 Regarding claim 13, Ozdemir et al. 
teach the apparatus of claim 10, wherein the means for determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: means for determining the image acquisition parameters of the PCCT image acquisition device for acquiring PCCT images optimized for performing the one or more medical imaging analysis tasks ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013] where the first set of images are the candidate images). Claim 14 Regarding claim 14, Ozdemir et al. teach the apparatus of claim 10, wherein the image acquisition parameters comprise a number of energy bands and associated energy thresholds ("Early detection of pulmonary nodules is crucial for early diagnosis of lung cancer. CADe of pulmonary nodules using low-dose computed tomography (CT)," paragraph [0006] where low dose teaches energy bands and thresholds). Claim 15 Regarding claim 15, Ozdemir et al. teach a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations ("a system and method for detecting and/or characterizing a property of interest in a multi-dimensional space," paragraph [0009]) comprising: determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images ("the medium can be energy emitted by a (e.g.) Positron Emission Tomography (PET) scan tracer particle, or photons emitted due to optically or electrically excited molecules, as occurs in Raman spectroscopy. All forms of electromagnetic, particle and/or photonic energy can characterize the medium measured herein," paragraph [0031] and "iteratively adjusts at least one image acquisition parameter (e.g. 
camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level" paragraph [0013]); receiving one or more PCCT images of an anatomical object of a patient acquired using the PCCT image acquisition device configured with the image acquisition parameters ("the acquired data is medical image data, including at least one of CT scan images, MRI images, or targeted contrast ultrasound images of human tissue," paragraph [0011]); performing one or more medical imaging analysis tasks analyzing the anatomical object based on the one or more PCCT images using one or more machine learning based models ("The image data can be preprocessed as appropriate to include edge information, blobs, etc. (for example based on image analysis conducted using appropriate, commercially available machine vision tools)," paragraph [0032] where machine vision is machine learning); and outputting results of the one or more medical imaging analysis tasks ("A GUI process(or) 156 organizes and displays (or otherwise presents (e.g. for storage)) the analyzed data results in a graphical and/or textual format for a user to employ in performing a related task," paragraph [0033]). Claim 16 Regarding claim 16, Ozdemir et al. 
teach the non-transitory computer-readable storage medium of claim 15, wherein determining image acquisition parameters of a PCCT (photon-counting computed tomography) image acquisition device for acquiring PCCT images comprises: acquiring a plurality of candidate PCCT images using varying image acquisition parameters ("This system and method acquires a first set of images, analyzes the first set of images to detect the property of interest and a confidence level associated with the detection," paragraph [0013] where the first set of images are the candidate images); presenting the plurality of candidate PCCT images to a user ("Bayesian technique not only improves the final detection performance, but it also provides a model confidence level to the end user (see FIGS. 13 and 14, by way of example)," paragraph [0054] where model confidence teaches a plurality of candidates with different confidences); receiving input from the user selecting one of the plurality of candidate PCCT images ("CAD systems process digital images for typical appearances and to highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the practitioner," paragraph [0004]); and determining the image acquisition parameters as parameters corresponding to the selected candidate PCCT image ("iteratively adjusts at least one image acquisition parameter (e.g. camera focus, exposure time, radar power level, frame rate, etc.) in a manner that optimizes or enhances the confidence level," paragraph [0013]). Claim 17 Regarding claim 17, Ozdemir et al. teach the non-transitory computer-readable storage medium of claim 15, wherein the image acquisition parameters comprise at least one of reconstructed image spacing, slice thickness, reconstruction kernels, or dose ("iteratively adjusting one or more image acquisition parameter(s) (e.g. camera focus, exposure time, X-ray/RADAR/SONAR/LIDAR power level, frame rate, etc.) 
in a manner that optimizes/enhances the confidence level associated with detection of the property of interest," paragraph [0013]). Claim 18 Regarding claim 18, Ozdemir et al. teach the non-transitory computer-readable storage medium of claim 15, wherein the one or more medical imaging analysis tasks comprise at least one of detection, segmentation, size quantification, typology classification, or malignancy assessment of the anatomical object of the patient ("preprocessing for reduction of artifacts, image noise reduction, leveling (harmonization) of image quality (increased contrast) for clearing the image parameters (e.g. different exposure settings), and filtering; (b) segmentation for differentiation of different structures in the image (e.g. heart, lung, ribcage, blood vessels, possible round lesions, matching with anatomic database, and sample gray-values in volume of interest); (c) structure/ROI (Region of Interest) analysis, in which a detected region is analyzed individually for special characteristics, which can include compactness, form, size and location, reference to close-by structures/ROIs, average grey level value analysis within the ROI, and proportion of grey levels to the border of the structure inside the ROI," paragraph [0005]). Claim 19 Regarding claim 19, Ozdemir et al. teach the non-transitory computer-readable storage medium of claim 15, wherein the one or more machine learning based models are trained using annotated PCCT training images ("Lung Image Database Consortium image collection (LIDC-IDRI), which consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions," paragraph [0051]). Claim 20 Regarding claim 20, Ozdemir et al. 
teach the non-transitory computer-readable storage medium of claim 15, wherein the anatomical object comprises a pulmonary nodule of the patient ("Lung Image Database Consortium image collection (LIDC-IDRI), which consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions," paragraph [0051]). References Cited The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Publication 2021/0233645 A1 to Morard et al. discloses methods and systems for determining and characterizing features of an anatomical structure from a medical image. In one example, a method comprises acquiring a plurality of medical images over time during an exam, registering the segmented anatomical structure between the plurality of medical images, segmenting an anatomical structure in one of the plurality of medical images after registering the plurality of medical images, and creating and characterizing a reference region of interest (ROI) in each of the plurality of medical images. US Patent Publication 2022/0115117 A1 to Hartkens et al. discloses configuring a medical imaging device with a set of acquisition parameters for acquiring a medical image. A method implementation of the technique comprises selecting (S102) a reference image from a plurality of reference images, each of the plurality of reference images being stored in association with a set of acquisition parameters which has been used to acquire the respective reference image, and configuring (S104) the medical imaging device with the set of acquisition parameters stored in association with the selected reference image for acquiring the medical image. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Heath E. Wells/Examiner, Art Unit 2664 Date: 23 January 2026

Prosecution Timeline

Apr 08, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
93%
With Interview (+18.1%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
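The projection figures above are mutually consistent under a simple additive model. A minimal sketch, assuming the tool divides grants by resolved cases and adds the interview lift in percentage points (the tool's actual formulas are not published), reproduces the displayed numbers:

```python
# Reproducing the dashboard's headline projections from the examiner
# statistics shown on this page. Assumption: the "+18.1% interview lift"
# is added in percentage points to the career allow rate.

granted = 58           # "58 granted / 77 resolved"
resolved = 77
interview_lift = 18.1  # "+18.1% Interview Lift" (percentage points)

career_allow_rate = 100 * granted / resolved         # ~75.3 -> shown as 75%
with_interview = career_allow_rate + interview_lift  # ~93.4 -> shown as 93%

print(f"Grant probability: {career_allow_rate:.0f}%")  # 75%
print(f"With interview:    {with_interview:.0f}%")     # 93%
```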
