Prosecution Insights
Last updated: April 19, 2026
Application No. 18/346,292

TRAINING AI SYSTEMS ON PHOTON COUNTING DATA

Final Rejection §103
Filed: Jul 03, 2023
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Siemens Healthineers AG
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86%, above average (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4% (moderate), among resolved cases with interview
Typical Timeline: 2y 7m avg prosecution; 32 applications currently pending
Career History: 284 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Compared against Tech Center average estimates; based on career data from 252 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 13, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Claims 1-2, 6-10 and 14-19 have been amended, changing the scope and contents of the claims. Claims 3-5 and 12-13 have been cancelled. Applicant’s amendment filed January 16, 2026 overcomes the following objections/rejections from the last Office Action: previous objections to the claims for minor informalities, and rejections of the claims under 35 U.S.C. § 102.

Response to Arguments

Applicant's arguments filed January 16, 2026 have been fully considered but they are not persuasive. Regarding claim 1 (and similarly claims 9, 14 and 19), applicant argues, “However, the cited portions of Taher do not teach or suggest that the domain-specific datasets comprise PCCT virtual images. The cited portions of Taher are silent with regards to PCCT virtual images (Remarks, 9).” The examiner respectfully disagrees.
Taher is directed toward “pre-training an AI model on different input images which are not included in the training data by executing self-supervised learning for the AI model; fine-tuning the pre-trained AI model to generate a pre-trained diagnosis and detection AI model; applying the pre-trained diagnosis and detection AI model to a new medical image to render a prediction as to the presence or absence of a disease within the new medical image; and outputting the prediction as a predictive medical diagnosis for a medical patient (abstract).” Specifically, Taher is relied upon for teaching that it is possible to perform pre-training on one data set and then perform additional training on the domain-specific data set (which is read as the fine-tuning process). The examiner agrees that Taher does not specifically disclose the PCCT aspect; however, PCCT is read as a domain-specific data set, of which one of ordinary skill in the art would be aware. Further, Taher notes at paragraph 0024 that their method bridges the gap between natural and medical images by pretraining ImageNet models on medical images. Thus, Taher clearly discloses the benefits of their method in the field of medical imaging, and one of ordinary skill in the art before the effective filing date of the claimed invention would be well aware that PCCT is a medical imaging domain.

Claim Objections

Claim 11 is objected to because of the following informalities: Claim 11 mirrors now-cancelled claim 3. It appears claim 11 should have been cancelled because its language now contradicts the independent claim from which it depends. Claim 11 depends on claim 9, and claim 11 claims, “training the machine learning based network based only on the one or more PCCT virtual images.” However, claim 9 already requires training based on the non-photon counting data and the PCCT virtual images.
Thus, claim 11 now conflicts with the independent claim from which it depends and should be cancelled. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action (see below).
“means for receiving one or more input medical images” in claim 9
“means for performing a medical imaging analysis task” in claim 9
“means for outputting results” in claim 9

Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-10 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over EP3695784 (hereinafter EP ‘784), and further in view of U.S. Publication No. 2023/0116897 to Hosseinzadeh Taher et al. (hereinafter Taher).
Regarding independent claim 1, EP ‘784 discloses A computer-implemented method (paragraph 0001, “The present invention relates to assessing coronary microvascular dysfunction, and in particular to a decision-support system and a method for providing coronary microvascular dysfunction assessment;” paragraph 0044-0045, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method. According to another aspect of the invention, a computer readable medium is provided having stored the program element.”) comprising: receiving one or more input medical images (Figure 2A, element “input data (1);” paragraph 0049, “Fig. 2A shows a deep learning model for segmenting the cardiac computed tomography angiography data according to an exemplary embodiment of the present disclosure.”); performing a medical imaging analysis task based on the one or more input medical images using a trained machine learning based network (Figure 2A, element 10 “deep neural net;” paragraph 0049, “Fig. 2A shows a deep learning model for segmenting the cardiac computed tomography angiography data according to an exemplary embodiment of the present disclosure.”); and outputting results of the medical imaging analysis task (Figure 2A, element 19, “output: segmentation map”), wherein the trained machine learning based network is trained (Figure 2B) by: receiving 1)PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. 
The CCTA data12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.”); generating one or more PCCT virtual images from the PCCT imaging data (paragraph 0055, “In addition, spectral CT data may be acquired using a photon-counting scanner;” the virtual image is read as stored in a digital format); training the machine learning based network for performing the medical imaging analysis task based on the one or more PCCT virtual images (Figure 2B; the output of the neural network being a segmentation map is compared to the expert annotated segmentation map; the task is read as generation of a segmentation map); and outputting the trained machine learning based network (Figure 2B exemplifies how the network is trained, and the continuous updating of the network parameters (element 24); paragraph 0058, “Fig. 2B illustrates an example of the training procedure for the deep neural network 10. To train the deep neural network 10, a reference may be provided containing one or more expert annotated segmentation maps 20. The training procedure may start with initializing the network parameters 24 randomly, then adjust them to produce label map representing the different anatomical structures within the heart as closer as possible to the actual segmentation map produced by a human expert, i.e. 
the expert annotated segmentation map 20, according to a predefined loss-function 22;” the network that generates an output as close as possible to the real output is read as the trained network). EP ‘784 fails to explicitly disclose as further recited. However, Taher discloses receiving 1) PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (paragraph 0102, “At block 1105, processing logic of such a system receives a plurality of medical images;” PCCT images are a well known type of medical images) and 2) non-photon-counting data (paragraph 0106, “At block 1125, processing logic pre-trains an AI model on different images through self-supervised learning via each of multiple different experiments.”); Training the machine learning based network for performing the medical imaging analysis task by: pre-training the machine learning based network based on the non-photon-counting data (paragraph 0054, “This is a sequential pre-training approach in which a model is first pre-trained on a massive general dataset, such as ImageNet, and then pre-trained on domain-specific datasets, resulting in domain-adapted pre-trained models;” paragraph 0040, “coarse-grained natural image datasets, such as ImageNet;” ImageNet is read as a non-photon-counting data; paragraph 0024, “Furthermore, devised and disclose herein is a practical approach to bridge the domain gap between natural and medical images by continually pre-training supervised ImageNet models on medical images”), and fine-tuning the pre-trained machine learning based network based on the one or more PCCT virtual images (paragraph 0054, “This is a sequential pre-training approach in which a model is first pre-trained on a massive general dataset, such as ImageNet, and then pre-trained on domain-specific datasets, resulting in domain-adapted pre-trained models;” domain-specific data set is read as the specific PCCT virtual images which are a type of medical image). 
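The sequential pre-train/fine-tune scheme the examiner reads onto Taher (pre-training on a general-domain set such as ImageNet, then continuing training on a domain-specific set) can be sketched as follows. This is purely an illustration with a toy one-layer logistic model and synthetic arrays standing in for ImageNet images and PCCT virtual images; none of the names or shapes come from the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(weights, images, labels, lr=0.1, epochs=50):
    """Gradient descent on a one-layer logistic model (toy stand-in for a network)."""
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(images @ weights)))  # sigmoid outputs
        weights = weights - lr * images.T @ (preds - labels) / len(labels)
    return weights

n_features = 16

# Stage 1: pre-train on a large general-domain set (stand-in for ImageNet).
general_x = rng.normal(size=(500, n_features))
general_y = (general_x[:, 0] > 0).astype(float)
pretrained = train(np.zeros(n_features), general_x, general_y)

# Stage 2: fine-tune on a small domain-specific set (stand-in for PCCT
# virtual images). Training starts from the pre-trained weights rather
# than from scratch -- this reuse is the "fine-tuning" step.
domain_x = rng.normal(size=(40, n_features))
domain_y = (domain_x[:, 0] > 0).astype(float)
finetuned = train(pretrained, domain_x, domain_y, lr=0.05, epochs=20)
```

The only point mirrored from the rejection is the two-stage flow: the second stage continues from, rather than re-initializes, the first stage's parameters.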
EP ‘784 is directed toward “The present invention relates to assessing coronary microvascular dysfunction, and in particular to a decision-support system and a method for providing coronary microvascular dysfunction assessment (paragraph 0001).” Taher is directed toward “Described herein are means for implementing systematic benchmarking analysis to improve transfer learning for medical image analysis (abstract).” It can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention that both EP ‘784 and Taher are directed toward similar endeavors of image processing in the medical imaging field. Further, one of ordinary skill in the art is well aware that training data is not always available for all types of data sets to aid in training (as further evidenced in paragraph 0026 of Taher). Thus, had there not been enough data in the medical image domain to be used for training, it would have been obvious to a person having ordinary skill in the art at the time the claimed invention was filed to incorporate the teaching of Taher to utilize transfer learning from a natural image domain to a medical image domain. Thus, learning can still be performed and a model can still be generated despite the lack of training data.

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the one or more PCCT virtual images comprise at least one of virtual monoenergetic images, virtual non-contrast images (paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps.
Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy.”), virtual iodine images, virtual pure lumen images, or ultra-high-resolution images. Regarding dependent claim 6, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the one or more PCCT virtual images comprises a plurality of PCCT virtual images (paragraph 0055, “In addition, spectral CT data may be acquired using a photon-counting scanner;” the virtual image is read as stored in a digital format; paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy;” paragraph 0014, “According to an embodiment of the invention, the CCTA data comprise conventional cardiac computed tomography angiography data, photon-counting based computed tomography angiography data, and/or spectral cardiac computed tomography angiography data with at least two energy levels.”) and training the machine learning based network for performing the medical imaging analysis task comprises: training the machine learning based network based on a multi-channel image comprising the plurality of PCCT virtual images (PCCT images as read as multi-channel images in that they obtain data from multiple energy bins in that each channel corresponds to data from each energy range; paragraph 0055, “In addition, spectral CT data may be acquired using a photon-counting scanner;”). Regarding dependent claim 7, the rejection of claim 1 is incorporated herein. 
Additionally, EP ‘784 discloses wherein the one or more PCCT virtual images comprises a plurality of PCCT virtual images (paragraph 0055, “In addition, spectral CT data may be acquired using a photon-counting scanner;” the virtual image is read as stored in a digital format; paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy;” paragraph 0014, “According to an embodiment of the invention, the CCTA data comprise conventional cardiac computed tomography angiography data, photon-counting based computed tomography angiography data, and/or spectral cardiac computed tomography angiography data with at least two energy levels.”). EP ‘784 and Taher fails to explicitly disclose as further recited. 
However, Taher discloses pre-training the machine learning based network based on the non-photon-counting data comprises pre-training the machine learning based network based on a (paragraph 0054, “This is a sequential pre-training approach in which a model is first pre-trained on a massive general dataset, such as ImageNet, and then pre-trained on domain-specific datasets, resulting in domain-adapted pre-trained models;” paragraph 0040, “coarse-grained natural image datasets, such as ImageNet;” ImageNet is read as a non-photon-counting data; paragraph 0024, “Furthermore, devised and disclose herein is a practical approach to bridge the domain gap between natural and medical images by continually pre-training supervised ImageNet models on medical images”), and fine-tuning the pre-trained machine learning based network based on the one or more PCCT virtual images comprises fine-tuning the pre-trained machine learning based network based on (paragraph 0054, “This is a sequential pre-training approach in which a model is first pre-trained on a massive general dataset, such as ImageNet, and then pre-trained on domain-specific datasets, resulting in domain-adapted pre-trained models;” domain-specific data set is read as the specific PCCT virtual images which are a type of medical image). With regard to multi-channel images, first off, a PCCT image is known to be a multi-channel image in that the PCCT images resolve into energy bins (i.e. different channels). Thus, it would have been obvious when making a virtual image of the PCCT image, to also be multi-channel. Additionally, multi-channel images of general images are well known such as RGB images. It is well known to one of ordinary skill in the art before the effective filing date that multi-channel images provide a more detailed data set. 
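The energy-bins-as-channels reading above can be illustrated with a minimal sketch (hypothetical bin edges and array shapes, not taken from either reference): a reconstruction from each photon-counting energy bin is stacked along a channel axis, exactly as RGB planes are stacked in a natural image.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64  # hypothetical image size

# One reconstructed image per energy bin (bin edges are illustrative only).
energy_bins = {
    "20-50 keV": rng.random((h, w)),
    "50-70 keV": rng.random((h, w)),
    "70-120 keV": rng.random((h, w)),
}

# Stack the per-bin images along a leading channel axis (channel-first
# layout, a common convention for CNN training inputs).
multi_channel = np.stack(list(energy_bins.values()), axis=0)

print(multi_channel.shape)  # (3, 64, 64): channels x height x width
```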
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of EP ‘784 and Taher to include multi-channel images to obtain data with as much information as possible for training.

Regarding dependent claim 8, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein training the machine learning based network for performing the medical imaging analysis task comprises: training the machine learning based network for performing a plurality of medical imaging analysis tasks (paragraph 0057, “The output data is a cardiac segmentation map 18 depicting multiple anatomical segments including myocardium segments;” outputting multiple anatomical segments is read as performing a plurality of image analysis tasks).

Regarding independent claim 9, the rejection of claim 1 applies directly. Additionally, EP ‘784 further discloses An apparatus (paragraph 0044, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method.”) comprising: means for receiving one or more input medical images (see claim 1 analysis); means for performing a medical imaging analysis task based on the one or more input medical images using a trained machine learning based network (see claim 1 analysis); and means for outputting results of the medical imaging analysis task (see claim 1 analysis), wherein the trained machine learning based network is trained (see claim 1 analysis) by: receiving 1) PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device and 2) non-photon-counting data (see claim 1 analysis); generating one or more PCCT virtual images from the PCCT imaging data (see claim 1 analysis);
training the machine learning based network for performing the medical imaging analysis task by (see claim 1 analysis); and pre-training the machine learning based network based on the non-photon-counting data (see claim 1 analysis), and fine-tuning the pre-trained machine learning based network based on the one or more PCCT virtual images (see claim 1 analysis); and outputting the trained machine learning based network (see claim 1 analysis).

Regarding dependent claim 10, the rejection of claim 9 is incorporated herein. Additionally, the rejection of claim 2 applies directly.

Regarding independent claim 14, the rejection of claim 1 applies directly. Additionally, EP ‘784 further discloses A non-transitory computer readable medium storing computer program instructions, the computer program instructions when executed by a processor cause the processor to perform operations (paragraph 0044-0045, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method.
According to another aspect of the invention, a computer readable medium is provided having stored the program element.”) comprising: receiving one or more input medical images (see claim 1 analysis); performing a medical imaging analysis task based on the one or more input medical images using a trained machine learning based network (see claim 1 analysis); and outputting results of the medical imaging analysis task (see claim 1 analysis), wherein the trained machine learning based network is trained by (see claim 1 analysis): receiving 1) PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device and 2) non-photon-counting data (see claim 1 analysis); generating one or more PCCT virtual images from the PCCT imaging data (see claim 1 analysis); training the machine learning based network for performing the medical imaging analysis task by (see claim 1 analysis); pre-training the machine learning based network based on the non-photon-counting data (see claim 1 analysis), and fine-tuning the pre-trained machine learning based network based on the one or more PCCT virtual images (see claim 1 analysis); and outputting the trained machine learning based network (see claim 1 analysis).

Regarding dependent claim 15, the rejection of claim 14 is incorporated herein. Additionally, the rejection of claim 2 applies directly.

Regarding dependent claim 16, the rejection of claim 14 is incorporated herein. Additionally, the rejection of claim 6 applies directly.

Regarding dependent claim 17, the rejection of claim 14 is incorporated herein. Additionally, the rejection of claim 7 applies directly.

Regarding dependent claim 18, the rejection of claim 14 is incorporated herein. Additionally, the rejection of claim 8 applies directly.

Regarding independent claim 19, the rejection of claim 1 applies directly.
Additionally, EP ‘784 discloses A computer-implemented (paragraph 0001, “The present invention relates to assessing coronary microvascular dysfunction, and in particular to a decision-support system and a method for providing coronary microvascular dysfunction assessment.;” paragraph 0044-0045, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method. According to another aspect of the invention, a computer readable medium is provided having stored the program element;” see claim 1 analysis) method comprising: receiving PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (see claim 1 analysis); generating one or more PCCT virtual images from the PCCT imaging data (see claim 1 analysis); training a machine learning based network for performing a medical imaging analysis task by (see claim 1 analysis); pre-training the machine learning based network based on the non-photon-counting data (see claim 1 analysis), and fine-tuning the pre-trained machine learning based network based on the one or more PCCT virtual images (see claim 1 analysis); and outputting the trained machine learning based network (see claim 1 analysis). Regarding dependent claim 20, the rejection of claim 19 is incorporated herein. 
Additionally, EP ‘784 in the combination further discloses A non-transitory computer readable medium storing computer program instructions, the computer program instructions when executed by a processor cause the processor to perform the steps of claim 19 (see claim 19 analysis; paragraph 0044-0045, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method. According to another aspect of the invention, a computer readable medium is provided having stored the program element.”).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over EP ‘784 further in view of Taher as applied to claim 9 above, and further in view of U.S. Publication No. 2018/0018757 to Suzuki (hereinafter Suzuki).

Regarding dependent claim 11, the rejection of claim 9 is incorporated herein. Additionally, EP ‘784 and Taher fail to explicitly disclose wherein training the machine learning based network for performing the medical imaging analysis task based on the one or more PCCT virtual images comprises: training the machine learning based network based only on the one or more PCCT virtual images. However, Suzuki discloses wherein training the machine learning based network for performing the medical imaging analysis task based on the one or more PCCT virtual images comprises: training the machine learning based network based only on the one or more PCCT virtual images (abstract, “The machine learning model is trained with matched pairs of projection images, namely, input lower-quality (lower-dose) projection images together with corresponding desired higher-quality (higher-dose) projection images.
Through the training, the machine learning model learns to transform lower-quality (lower-dose) projection images to higher-quality (higher-dose) projection images.;” paragraph 0119, “Projection images discussed here may be projection images taken on a medical, industrial, security, and military x-ray computed tomography (CT) system, a CT system with a photon counting detector, a CT system with a flat-panel detector, a CT system with single or multiple raw detector, a limited angle x-ray tomography system such as a tomosynthesis, a positron emission tomography (PET) system, a single photon emission computed tomography (SPECT) system, a magnetic resonance imaging (MRI) system, an ultrasound (US) imaging system, an optical coherent tomography system, or their combination.”). As noted above, EP ‘784 and Taher are directed toward similar methods of endeavor of image processing in the medical imaging field. Further, EP ‘784 is directed toward “The present invention relates to assessing coronary microvascular dysfunction, and in particular to a decision-support system and a method for providing coronary microvascular dysfunction assessment (paragraph 0001).” Suzuki is directed toward “The invention relates generally to the field of tomography and more particularly to techniques, methods, systems, and computer programs for transforming lower quality projection images into higher quality projection images in computed tomography, including but not limited to lower-dose projection images into simulated higher-dose projection images to reconstruct simulated higher-dose tomography images (paragraph 0002).” It can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention that EP ‘784, Taher and Suzuki are all directed toward similar methods of endeavor of image processing in the medical imaging field. 
Further, one of ordinary skill in the art is well aware that training data is not always available for all types of data sets to aid in training. Thus, had there not been additional data for multi-modal training, it would have been obvious to a person having ordinary skill in the art at the time the claimed invention was filed to incorporate the teaching of Suzuki in order to ensure a system can still be trained with only the limited data present.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson whose telephone number is (571)272-3956. The examiner can normally be reached Monday - Friday 8:00 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/COURTNEY JOAN NELSON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Jul 03, 2023
Application Filed
Nov 03, 2025
Non-Final Rejection — §103
Jan 16, 2026
Response Filed
Mar 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant · Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant · Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant · Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant · Granted Mar 31, 2026
Patent 12592016
MATERIAL-SPECIFIC ATTENUATION MAPS FOR COMBINED IMAGING SYSTEMS
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
96%
With Interview (+9.4%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
