Prosecution Insights
Last updated: April 19, 2026
Application No. 18/346,291

AI-BASED MEDICAL IMAGING ANALYSIS OF PHOTON COUNTING DATA

Status: Non-Final OA (§103)
Filed: Jul 03, 2023
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Siemens Healthineers AG
OA Round: 3 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86%, above average (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4% (moderate), measured over resolved cases with interview
Avg Prosecution: 2y 7m typical timeline; 32 applications currently pending
Career History: 284 total applications across all art units
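The headline figures above follow from simple arithmetic on the reported counts. A quick sanity check (treating the reported interview lift as additive on the base rate is an assumption, but it reproduces the dashboard's 96% figure):

```python
# Reported examiner counts (from the stats above).
granted, resolved = 217, 252

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 86.1%, displayed as 86%

# The dashboard reports +24.1 points vs the Tech Center average,
# which implies a TC-average allowance rate of roughly:
tc_avg = allow_rate - 0.241
print(f"Implied TC average: {tc_avg:.1%}")      # ~62.0%

# Treating the reported +9.4% interview lift as additive on the base
# rate reproduces the with-interview figure:
print(f"With interview: {allow_rate + 0.094:.0%}")  # 96%
```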

Statute-Specific Performance

§101:  5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

TC average shown for comparison is an estimate. Based on career data from 252 resolved cases.
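One detail worth noting: each statute's reported delta implies the same Tech Center baseline, which suggests the deltas are computed against a single aggregate average rather than per-statute averages. A quick check of the table's arithmetic:

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above.
stats = {"101": (5.4, -34.6), "103": (51.1, 11.1),
         "102": (20.5, -19.5), "112": (17.9, -22.1)}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # delta = examiner rate - TC avg
    print(f"§{statute}: implied TC avg = {implied_tc_avg:.1f}%")
# Every row implies the same ~40.0% baseline.
```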

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on December 14, 2025 and December 3, 2025 have been entered.

Response to Amendment

Claims 1, 8 and 15 have been amended, changing the scope and contents of the claims.

Response to Arguments

Applicant's arguments with respect to claims 1, 8 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action (see below):

“means for receiving PCCT” in claim 8
“means for generating a plurality of PCCT virtual images” in claim 8
“means for performing a plurality of medical imaging analysis” in claim 8
“means for combining results” in claim 8
“means for outputting the results” in claim 8
“means for combining the results” in claim 10
“means for combining the results” in claim 11
“means for combining the results” in claim 12

Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6-11, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over EP3695784 (hereinafter EP ‘784), and further in view of WO 2014/192935 (a machine translation obtained from Google Patents; hereinafter WO ‘935) and CN110879949B (a machine translation obtained from Google Patents; hereinafter CN ‘949).

Regarding independent claim 1, EP ‘784 discloses A computer-implemented method (paragraph 0074, “In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.”) comprising:

receiving PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. The CCTA data 12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps.
The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.”);

generating a plurality of PCCT virtual images (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. The CCTA data 12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.” Virtual image is read as stored in a digital format);

performing a plurality of medical imaging analysis sub-tasks (paragraph 0057, “One possible segmentation method that may be employed in step 120 is a deep neural network 10 as illustrated in Figs. 2A to 2C. More specifically, Fig. 2A illustrates an example of the deep neural network 10. The input data of the deep neural network 10 may comprise the CCTA data 12, the scan protocol 14 and further optional patient-related data 16, such as patient medical records and clinical data.
The output data is a cardiac segmentation map 18 depicting multiple anatomical segments including myocardium segments;” paragraph 0061, “Machine learning models may be used in step 130 to extract the perfusion data. In some embodiments, the machine learning models utilized herein may employ deep learning methods, such as a feedforward neural network or a generative adversarial deep-learning architecture.”);

combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task (paragraph 0066, “Returning to Fig. 1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28. The indicator may be a CMD score. Such a statistical model may be described as a function 34 as illustrated in Fig. 4 that maps the patient_features onto CMD score 36: f(patient_features) → CMD score, where the patient_features may include, but not limited to, the CCTA data along with the scan protocol and parameters described in step 110, the 3-D anatomical model described in step 120, the perfusion data described in step 130, and any additional patient data, such as clinical data and patient medical records. The function f(patient_features) may be used to estimate the CCTA-based CMD score by determining the patient_features as described in steps 110, 120, 130 and applying the function f(patient_features) on these patient_features.”); and

outputting the results of the medical imaging analysis task (paragraph 0073, “The output unit 250 is configured to output the indicator for a presence of CMD. As an option, the decision-support system 200 may further comprise a display (not shown) configured to display the indicator for a presence of CMD.”).

EP ‘784 fails to explicitly disclose as further recited.
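The f(patient_features) → CMD score mapping the cited passage describes is a standard supervised-learning setup: learn a function from feature vectors to a scalar score using training pairs. As a minimal illustration only (a one-feature least-squares line stands in for the deep networks and random forests the reference lists; all numbers are made up):

```python
# Illustrative sketch of fitting f(patient_features) -> score from
# training pairs. Toy data; not taken from EP '784.

def fit_line(xs, ys):
    """Least-squares fit y ≈ a*x + b over paired training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy training pairs: (a single patient feature, known score).
features = [1.0, 2.0, 3.0, 4.0]
scores = [2.1, 3.9, 6.0, 8.1]

a, b = fit_line(features, scores)

def f(x):
    """The learned f(patient_features): here, the fitted line."""
    return a * x + b

print(round(f(5.0), 2))
```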
However, WO ‘935 discloses generating a plurality of PCCT virtual images based on weighting and combining different energy bins available in the PCCT imaging data (abstract, “a combining unit (117) which selects at least two energy bins to be combined on the basis of the numbers of the X-ray photons in the respective energy bins, and combines the numbers of the X-ray photos in the selected energy bins to thereby acquire a combined output signal in a combined energy bin obtained by combining the selected energy bins; and a reconstruction unit (114) which reconstructs an image using the combined output signal;” see also claim 5; page 8, “combined energy bins by combining the numbers of X-ray photons belonging to different energy bins. At this time, the reconstruction device 114 reconstructs an image using the two combined output signals.”).

EP ‘784 is directed toward “assessing coronary microvascular dysfunction” and “Coronary computed tomography (CT) angiography may be performed to acquire the CCTA data representing a heart or coronary region of the patient. Other CCTA data, such as dual energy or photon counting data may be acquired for the CMD assessment (paragraph 0009).” WO ‘935 is directed toward “A photon-counting X-ray computed tomography device according to the present embodiment (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, both EP ‘784 and WO ‘935 are directed toward processing PCCT image data. Further, WO ‘935 allows for noise reduction (page 4). It can be easily conceived by one of ordinary skill in the art at the time of filing the claimed invention that noise in medical images can lead to inaccurate diagnoses, and thus poor patient outcomes.
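The energy-bin combination described above (selecting bins of photon counts and merging them into one combined signal) amounts to a weighted sum over bins. A purely illustrative sketch, with bin values and weights assumed for the example rather than taken from WO ‘935:

```python
# Illustrative only: a photon-counting detector reports per-pixel photon
# counts in several energy bins; a "virtual image" pixel is formed by
# weighting and summing selected bins. Values below are made up.

def combine_bins(bin_counts, weights):
    """Weighted sum over energy-bin photon counts for one pixel."""
    return sum(w * c for w, c in zip(weights, bin_counts))

# Four energy bins for one detector pixel (counts, low -> high energy):
pixel_bins = [120, 80, 40, 10]

# Emphasize higher-energy bins and drop the noisiest low-energy bin:
weights = [0.0, 0.2, 0.4, 0.4]
print(combine_bins(pixel_bins, weights))  # 36.0
```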
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of WO ‘935 in order to ensure the most optimal output image, leading to a more accurate diagnosis, and better patient outcomes.

EP ‘784 and WO ‘935 in the combination as a whole fail to explicitly disclose as further recited. However, CN ‘949 discloses wherein each of the plurality of medical imaging analysis sub-tasks is performed based on one or more different (NOTE: based on “one or more” only one image to be processed is required) images (abstract, “wherein the fusion neural network comprises a shared network layer and at least two task network layers, the at least two task network layers correspond to at least two types of image processing tasks” … “and processing the characteristics respectively based on the at least two task network layers to obtain at least two image processing results respectively corresponding to the at least two types of image processing tasks. Based on the embodiment of the application, the method and the device realize the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks.”).

With respect to PCCT virtual images, CN ‘949 is not image modality specific. As such, one of ordinary skill in the art before the effective filing date would be well aware of how to modify CN ‘949 in order to input PCCT virtual images into the system. As noted above, EP ‘784 and WO ‘935 are directed toward similar methods of endeavor of processing PCCT images for diagnosis. Further, CN ‘949 is directed toward “image processing and network generation method and device based on a fusion neural network (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, EP ‘784, WO ‘935 and CN ‘949 are directed toward image processing.
Further, one of ordinary skill in the art before the effective filing date would understand processing speed is of utmost importance when processing medical data so that treatments can be initiated as soon as possible. CN ‘949 allows for “the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks (abstract).” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of CN ‘949 to ensure a result is obtained as quickly as possible, leading to faster treatment and improved patient outcomes.

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the plurality of PCCT virtual images comprise at least one of virtual monoenergetic images, virtual non-contrast images, virtual iodine images, virtual pure lumen images, or ultra-high-resolution images (paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired by a photon-counting scanner.”).

Regarding dependent claim 3, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a statistical weighting of the results of the medical imaging analysis sub-tasks (paragraph 0066, “Returning to Fig.
1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28.” … “Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” weighting is utilized within the neural networks).

Regarding dependent claim 4, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a learned weighting of the results of the medical imaging analysis sub-tasks, the learned weighting learned based on ground truth data (paragraph 0066, “In other words, the function f(patient_features) implicitly describes the statistical relation between the input features and the output CCTA-based CMD score. Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” paragraph 0067, “To be able to use the statistical model, the model may be trained a priori offline.
For example, multiple pairs of inputs, namely patient_features, and outputs, namely the CMD score, known as training data may be used to find the function f(patient_features) using some optimization criteria.”).

Regarding dependent claim 6, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the medical imaging analysis task comprises automatic reporting of a coronary artery stenosis and the plurality of medical imaging analysis sub-tasks comprises at least one of detection of coronary artery centerlines, detection of stenoses, grading of stenoses, labelling of vessel segments, or detection of lumen and plaque (detection of lumen and plaque: paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”).

Regarding dependent claim 7, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784 further discloses wherein the medical imaging analysis task comprises at least one of automated CAD-RADS (coronary artery disease reporting and data system) scoring, detection and quantification of coronary plaque and fat (detection of fat and plaque: paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments.
The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”), computation of CT-FFR (computed tomography fractional flow reserve) (paragraph 0065, “Fig. 3B illustrates a representative short-axis view of a CCTA scan of a patient with known functionally obstructive CAD at the left anterior descending artery with invasive fractional flow reserve (FFR) less than 0.8, ”), detection of stent and quantification of in-stent restenosis, or detection of bypass graft and assessment of graft patency and the plurality of medical imaging analysis sub-tasks comprises at least one of coronary centerline tracing, lesion detection, segment labeling (paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. 
The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”), lumen and outer wall segmentation (paragraph 0010, “The cardiac anatomy segmentation unit may determine different anatomical structures within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding percoronary fat.”), or quantification of plaque components (paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”).

Regarding independent claim 8, the rejection of claim 1 applies directly. Additionally, EP ‘784 in the combination further discloses An apparatus (paragraph 0054, “In order to improve the effectiveness of CMD assessment, the following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatus related to the personalized assessment of the presence of CMD using CCTA data and a statistical model.”) comprising:

means for receiving PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure.
In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. The CCTA data 12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.”);

means for generating a plurality of PCCT virtual images (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. The CCTA data 12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.” Virtual image is read as stored in a digital format);

means for performing a plurality of medical imaging analysis sub-tasks (paragraph 0057, “One possible segmentation method that may be employed in step 120 is a deep neural network 10 as illustrated in Figs. 2A to 2C. More specifically, Fig.
2A illustrates an example of the deep neural network 10. The input data of the deep neural network 10 may comprise the CCTA data 12, the scan protocol 14 and further optional patient-related data 16, such as patient medical records and clinical data. The output data is a cardiac segmentation map 18 depicting multiple anatomical segments including myocardium segments;” paragraph 0061, “Machine learning models may be used in step 130 to extract the perfusion data. In some embodiments, the machine learning models utilized herein may employ deep learning methods, such as a feedforward neural network or a generative adversarial deep-learning architecture.”);

means for combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task (paragraph 0066, “Returning to Fig. 1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28. The indicator may be a CMD score. Such a statistical model may be described as a function 34 as illustrated in Fig. 4 that maps the patient_features onto CMD score 36: f(patient_features) → CMD score, where the patient_features may include, but not limited to, the CCTA data along with the scan protocol and parameters described in step 110, the 3-D anatomical model described in step 120, the perfusion data described in step 130, and any additional patient data, such as clinical data and patient medical records.
The function f(patient_features) may be used to estimate the CCTA-based CMD score by determining the patient_features as described in steps 110, 120, 130 and applying the function f(patient_features) on these patient_features.”); and

means for outputting the results of the medical imaging analysis task (paragraph 0073, “The output unit 250 is configured to output the indicator for a presence of CMD. As an option, the decision-support system 200 may further comprise a display (not shown) configured to display the indicator for a presence of CMD.”).

EP ‘784 fails to explicitly disclose as further recited. However, WO ‘935 discloses means for generating a plurality of PCCT virtual images based on weighting and combining different energy bins available in the PCCT imaging data (abstract, “a combining unit (117) which selects at least two energy bins to be combined on the basis of the numbers of the X-ray photons in the respective energy bins, and combines the numbers of the X-ray photos in the selected energy bins to thereby acquire a combined output signal in a combined energy bin obtained by combining the selected energy bins; and a reconstruction unit (114) which reconstructs an image using the combined output signal;” see also claim 5; page 8, “combined energy bins by combining the numbers of X-ray photons belonging to different energy bins. At this time, the reconstruction device 114 reconstructs an image using the two combined output signals.”).

EP ‘784 is directed toward “assessing coronary microvascular dysfunction” and “Coronary computed tomography (CT) angiography may be performed to acquire the CCTA data representing a heart or coronary region of the patient.
Other CCTA data, such as dual energy or photon counting data may be acquired for the CMD assessment (paragraph 0009).” WO ‘935 is directed toward “A photon-counting X-ray computed tomography device according to the present embodiment (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, both EP ‘784 and WO ‘935 are directed toward processing PCCT image data. Further, WO ‘935 allows for noise reduction (page 4). It can be easily conceived by one of ordinary skill in the art at the time of filing the claimed invention that noise in medical images can lead to inaccurate diagnoses, and thus poor patient outcomes. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of WO ‘935 in order to ensure the most optimal output image, leading to a more accurate diagnosis, and better patient outcomes.

EP ‘784 and WO ‘935 in the combination as a whole fail to explicitly disclose as further recited. However, CN ‘949 discloses wherein each of the plurality of medical imaging analysis sub-tasks is performed based on one or more different (NOTE: based on “one or more” only one image to be processed is required) images (abstract, “wherein the fusion neural network comprises a shared network layer and at least two task network layers, the at least two task network layers correspond to at least two types of image processing tasks” … “and processing the characteristics respectively based on the at least two task network layers to obtain at least two image processing results respectively corresponding to the at least two types of image processing tasks. Based on the embodiment of the application, the method and the device realize the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks.”).
With respect to PCCT virtual images, CN ‘949 is not image modality specific. As such, one of ordinary skill in the art before the effective filing date would be well aware of how to modify CN ‘949 in order to input PCCT virtual images into the system. As noted above, EP ‘784 and WO ‘935 are directed toward similar methods of endeavor of processing PCCT images for diagnosis. Further, CN ‘949 is directed toward “image processing and network generation method and device based on a fusion neural network (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, EP ‘784, WO ‘935 and CN ‘949 are directed toward image processing. Further, one of ordinary skill in the art before the effective filing date would understand processing speed is of utmost importance when processing medical data so that treatments can be initiated as soon as possible. CN ‘949 allows for “the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks (abstract).” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of CN ‘949 to ensure a result is obtained as quickly as possible, leading to faster treatment and improved patient outcomes.

Regarding dependent claim 9, the rejection of claim 8 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the plurality of PCCT virtual images comprise at least one of virtual monoenergetic images, virtual non-contrast images, virtual iodine images, virtual pure lumen images, or ultra-high-resolution images (paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps.
Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired by a photon-counting scanner.”). Regarding dependent claim 10, the rejection of claim 8 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the means for combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: means for combining the results of the medical imaging analysis sub-tasks based on a statistical weighting of the results of the medical imaging analysis sub-tasks (paragraph 0066, “Returning to Fig. 1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28.”… “ Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” weighting is utilized within the neural networks). Regarding dependent claim 11, the rejection of claim 8 is incorporated herein. 
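The statistical weighting of sub-task results cited above (EP ‘784, paragraph 0066) can be sketched, for illustration only, as a normalized weighted average; the scores and weights below are hypothetical and not taken from the reference.

```python
def combine_subtask_results(results, weights):
    """Combine per-sub-task results into a single task result using a
    statistical weighting (here, a normalized weighted average)."""
    total = sum(weights)
    normalized = [w / total for w in weights]   # weights now sum to 1
    return sum(w * r for w, r in zip(normalized, results))

# Hypothetical scores from three analysis sub-tasks, with weights
# reflecting each sub-task's assumed reliability.
combined = combine_subtask_results([0.8, 0.6, 0.9], weights=[2.0, 1.0, 1.0])
# combined is approximately 0.775
```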
Additionally, EP ‘784 in the combination further discloses wherein the means for combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: means for combining the results of the medical imaging analysis sub-tasks based on a learned weighting of the results of the medical imaging analysis sub-tasks, the learned weighting learned based on ground truth data (paragraph 0066, “ In other words, the function f(patient_features) implicitly describes the statistical relation between the input features and the output CCTA-based CMD score. Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” paragraph 0067, “To be able to use the statistical model, the model may be trained a priori offline. For example, multiple pairs of inputs, namely patient_features, and outputs, namely the CMD score, known as training data may be used to find the function f(patient_features) using some optimization criteria. ”). Regarding dependent claim 13, the rejection of claim 8 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the medical imaging analysis task comprises automatic reporting of a coronary artery stenosis and the plurality of medical imaging analysis sub-tasks comprises at least one of detection of coronary artery centerlines, detection of stenoses, grading of stenoses, labelling of vessel segments, or detection of lumen and plaque (detection of lumen and plaque: paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. 
The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”). Regarding dependent claim 14, the rejection of claim 8 is incorporated herein. Additionally, EP ‘784 further discloses wherein the medical imaging analysis task comprises at least one of automated CAD-RADS (coronary artery disease reporting and data system) scoring, detection and quantification of coronary plaque and fat (detection of fat and plaque: paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”), computation of CT-FFR (computed tomography fractional flow reserve) (paragraph 0065, “Fig. 3B illustrates a representative short-axis view of a CCTA scan of a patient with known functionally obstructive CAD at the left anterior descending artery with invasive fractional flow reserve (FFR) less than 0.8, ”), detection of stent and quantification of in-stent restenosis, or detection of bypass graft and assessment of graft patency and the plurality of medical imaging analysis sub-tasks comprises at least one of coronary centerline tracing, lesion detection, segment labeling (paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. 
The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”), lumen and outer wall segmentation (paragraph 0010, “The cardiac anatomy segmentation unit may determine different anatomical structures within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding percoronary fat.”), or quantification of plaque components (paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”). Regarding independent claim 15, the rejection of claim 1 applies directly. Additionally, EP ‘784 in the combination further discloses A non-transitory computer readable medium storing computer program instructions, the computer program instructions when executed by a processor cause the processor to perform operations (paragraphs 0044-0045, “According to another aspect of the invention, a computer program element is provided for controlling an apparatus according to one of the embodiments described above and in the following, which, when being executed by a processing unit, is adapted to perform the inventive method. 
According to another aspect of the invention, a computer readable medium is provided having stored the program element;” paragraph 0074-0075, “In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system. The computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. ”) comprising: receiving PCCT (photon counting computed tomography) imaging data acquired from a PCCT imaging device (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. The CCTA data12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.”); generating a plurality of PCCT virtual images (paragraph 0055, “Fig. 1 illustrates a flow diagram of a method 100 for providing CMD assessment according to some embodiments of the present disclosure. In step 110, as illustrated in Fig. 2A, CCTA data 12 of a patient and a corresponding scan protocol 14 are provided. 
The CCTA data12 may include at least either conventional CCTA data or spectral CCTA data, including but not limited to photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. The spectral CCTA data may comprise at least two energy levels that allow spectral analysis. The CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired using a photon-counting scanner.” Virtual image is read as stored in a digital format); performing a plurality of medical imaging analysis sub-tasks (paragraph 0057, “One possible segmentation method that may be employed in step 120 is a deep neural network 10 as illustrated in Figs. 2A to 2C. More specifically, Fig. 2A illustrates an example of the deep neural network 10. The input data of the deep neural network 10 may comprise the CCTA data 12, the scan protocol 14 and further optional patient-related data 16, such as patient medical records and clinical data. The output data is a cardiac segmentation map 18 depicting multiple anatomical segments including myocardium segments;” paragraph 0061, “Machine learning models may be used in step 130 to extract the perfusion data. In some embodiments, the machine learning models utilized herein may employ deep learning methods, such as a feedforward neural network or a generative adversarial deep-learning architecture.”); combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task (paragraph 0066, “Returning to Fig. 1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28. 
The indicator may be a CMD score. Such a statistical model may be described as a function 34 as illustrated in Fig. 4 that maps the patient_features onto CMD score 36:f(patient_features) → CMD score where the patient_features may include, but not limited to, the CCTA data along with the scan protocol and parameters described in step 110, the 3-D anatomical model described in step 120, the perfusion data described in step 130, and any additional patient data, such as clinical data and patient medical records. The function f(patient_features) may be used to estimate the CCTA-based CMD score by determining the patient_features as described in steps 110, 120, 130 and applying the function f(patient_features) on these patient_features.”); and outputting the results of the medical imaging analysis task (paragraph 0073, “The output unit 250 is configured to output the indicator for a presence of CMD. As an option, the decision-support system 200 may further comprise a display (not shown) configured to display the indicator for a presence of CMD.”). EP ‘784 fails to explicitly disclose as further recited. However, WO ‘935 discloses generating a plurality of PCCT virtual images based on weighting and combining different energy bins available in the PCCT imaging data (abstract, “a combining unit (117) which selects at least two energy bins to be combined on the basis of the numbers of the X-ray photons in the respective energy bins, and combines the numbers of the X-ray photos in the selected energy bins to thereby acquire a combined output signal in a combined energy bin obtained by combining the selected energy bins; and a reconstruction unit (114) which reconstructs an image using the combined output signal;” see also claim 5; page 8, “ combined energy bins by combining the numbers of X-ray photons belonging to different energy bins. At this time, the reconstruction device 114 reconstructs an image using the two combined output signals.”) . 
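As a sketch only, the energy-bin combining that WO ‘935 is cited for (selecting bins on the basis of photon counts and summing the selected bins into a combined output signal) can be illustrated as follows; the particular selection rule shown (merge neighboring bins until each combined bin carries enough photons) is an assumed illustration, not the reference’s claimed criterion.

```python
def combine_energy_bins(bin_counts, min_photons):
    """Merge energy bins whose photon counts are too low, summing the
    merged counts into a combined output signal per resulting bin.
    Illustrative rule: accumulate neighboring bins until the combined
    count reaches min_photons."""
    combined, pending = [], 0
    for count in bin_counts:
        if pending or count < min_photons:
            pending += count
            if pending >= min_photons:
                combined.append(pending)
                pending = 0
        else:
            combined.append(count)      # bin already has enough photons
    if pending:                         # trailing low-count remainder
        combined.append(pending)
    return combined

# Hypothetical photon counts in five energy bins; the two low-count
# middle bins are merged, and the total photon count is preserved.
signal = combine_energy_bins([120, 30, 40, 150, 90], min_photons=60)
# signal == [120, 70, 150, 90]
```

An image would then be reconstructed from the combined output signals, which is where the noise-reduction benefit arises: each combined bin has better photon statistics than the bins it replaced.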
EP ‘784 is directed toward “assessing coronary microvascular dysfunction” and “Coronary computed tomography (CT) angiography may be performed to acquire the CCTA data representing a heart or coronary region of the patient. Other CCTA data, such as dual energy or photon counting data may be acquired for the CMD assessment (paragraph 0009).” WO ‘935 is directed toward “A photon-counting X-ray computed tomography device according to the present embodiment (abstract).” As would be readily apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, both EP ‘784 and WO ‘935 are directed toward processing PCCT image data. Further, WO ‘935 allows for noise reduction (page 4). One of ordinary skill in the art before the effective filing date would readily appreciate that noise in medical images can lead to inaccurate diagnoses, and thus poor patient outcomes. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of WO ‘935 in order to ensure an optimal output image, leading to a more accurate diagnosis and better patient outcomes. EP ‘784 and WO ‘935 in the combination as a whole fail to explicitly disclose the limitations as further recited. However, CN ‘949 discloses wherein each of the plurality of medical imaging analysis sub-tasks is performed based on one or more different (NOTE: based on “one or more,” only one image to be processed is required) images (abstract, “wherein the fusion neural network comprises a shared network layer and at least two task network layers, the at least two task network layers correspond to at least two types of image processing tasks”… “and processing the characteristics respectively based on the at least two task network layers to obtain at least two image processing results respectively corresponding to the at least two types of image processing tasks. 
Based on the embodiment of the application, the method and the device realize the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks.”). With respect to PCCT virtual images, CN ‘949 is not specific to any image modality. As such, one of ordinary skill in the art before the effective filing date would be well aware of how to modify CN ‘949 in order to input PCCT virtual images into the system. As noted above, EP ‘784 and WO ‘935 are directed toward the same field of endeavor of processing PCCT images for diagnosis. Further, CN ‘949 is directed toward “image processing and network generation method and device based on a fusion neural network (abstract).” As would be readily apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, EP ‘784, WO ‘935 and CN ‘949 are directed toward image processing. Further, one of ordinary skill in the art before the effective filing date would understand that processing speed is of utmost importance when processing medical data so that treatments can be initiated as soon as possible. CN ‘949 allows for “the simultaneous processing of at least two types of image processing tasks and improve the processing speed of the image processing tasks (abstract).” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of CN ‘949 to ensure a result is obtained as quickly as possible, leading to faster treatment and improved patient outcomes. Regarding dependent claim 16, the rejection of claim 15 is incorporated herein. 
Additionally, EP ‘784 in the combination further discloses wherein the plurality of PCCT virtual images comprise at least one of virtual monoenergetic images, virtual non-contrast images, virtual iodine images, virtual pure lumen images, or ultra-high-resolution images (paragraph 0015, “The CCTA data may include, but not limited to, photoelectric and scatter data, mono-energetic images, iodine and water maps, virtual non-contrast images, and z-effective maps. Spectral CCTA images of the heart reconstructed from CT projection data may be acquired with dual-layer detector system that separate the X-ray flux at the detector into two levels of energy. In addition, spectral CT data may be acquired by a photon-counting scanner.”). Regarding dependent claim 17, the rejection of claim 15 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a statistical weighting of the results of the medical imaging analysis sub-tasks (paragraph 0066, “Returning to Fig. 1, in step 140, a statistical model is used to determine an indicator for a presence of CMD based on the received CCTA data 12, the corresponding scan protocol 14, and patient-related data including the generated cardiac segmentation map 18 and the extracted perfusion data 28.”… “ Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” weighting is utilized within the neural networks). Regarding dependent claim 18, the rejection of claim 15 is incorporated herein. 
Additionally, EP ‘784 in the combination further discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a learned weighting of the results of the medical imaging analysis sub-tasks, the learned weighting learned based on ground truth data (paragraph 0066, “ In other words, the function f(patient_features) implicitly describes the statistical relation between the input features and the output CCTA-based CMD score. Various possible machine learning models may be employed herein to find the function f(patient_features) using e.g. a supervised learning methodology. Examples of the machine learning models to describe and find the function f(patient_features) may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines;” paragraph 0067, “To be able to use the statistical model, the model may be trained a priori offline. For example, multiple pairs of inputs, namely patient_features, and outputs, namely the CMD score, known as training data may be used to find the function f(patient_features) using some optimization criteria. ”). Regarding dependent claim 20, the rejection of claim 15 is incorporated herein. Additionally, EP ‘784 in the combination further discloses wherein the medical imaging analysis task comprises automatic reporting of a coronary artery stenosis and the plurality of medical imaging analysis sub-tasks comprises at least one of detection of coronary artery centerlines, detection of stenoses, grading of stenoses, labelling of vessel segments, or detection of lumen and plaque (detection of lumen and plaque: paragraph 0056, “In step 120, the received CCTA data are segmented to generate a cardiac segmentation map depicting multiple anatomical segments including myocardium segments. 
The cardiac segmentation map may depict multiple anatomical segments within the heart, including the myocardium, the left and right ventricles, the left and right atriums, the ascending and descending aorta, the epicardial fat, the different coronary arteries lumen, wall, plaque, and the surrounding pericoronary fat”). Claim(s) 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over EP ‘784 further in view of WO ‘935 and CN ‘949 as applied to claims 1, 8 and 15 respectively above, and further in view of Arritt, Robert P., and Roy M. Turner. "Context-sensitive weights for a neural network." International and Interdisciplinary Conference on Modeling and Using Context. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. (hereinafter Arritt). Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, EP ‘784, WO ‘935 and CN ‘949 in the combination as a whole fail to explicitly disclose wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks. However, Arritt discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks (abstract, “This paper presents a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights. Instead of having a very large network that itself must take context into account, our approach uses one or more small networks whose weights are associated with symbolic representations of contexts an agent may encounter. 
When the context-management system determines what the current context is, it sets the networks’ weights appropriately for the context.”). As noted above, EP ‘784, WO ‘935 and CN ‘949 are directed toward the same field of endeavor of processing PCCT images. Further, EP ‘784 is directed toward “The coronary microvascular dysfunction prediction unit is configured to use a statistical model to determine an indicator for a presence of coronary microvascular dysfunction based on the received coronary computed tomography angiography data, the corresponding scan protocol, and patient-related data including the generated cardiac segmentation map and the extracted perfusion data. The output unit is configured to output the indicator for a presence of coronary microvascular dysfunction (abstract)” and “The statistical model may be a machine learning model used to find the function e.g. using the supervised-learning methodology. Examples of the machine learning models may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines (paragraph 0007).” Arritt is directed toward “a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights (abstract).” As would be readily apparent to one of ordinary skill in the art, EP ‘784, WO ‘935, CN ‘949 and Arritt are directed toward the same field of endeavor of neural network use. Further, EP ‘784 allows for optimization of the model at paragraph 0067, “Optimization can be performed using standard techniques including but not limited to the stochastic gradient decent algorithm, among others.” As noted in Arritt, the use of context-sensitive weights allows for smaller networks (abstract), which reduces system cost (larger networks require more powerful computers). 
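For illustration only, Arritt’s technique of keeping a small network and letting a context-management system swap in the weight set for the current context can be sketched as follows; the contexts, weights, and inputs below are hypothetical, loosely echoing the reference’s underwater-agent example.

```python
# Weight sets associated with symbolic contexts; a context-management
# system selects among these rather than training one large network
# to account for every context internally.
CONTEXT_WEIGHTS = {
    "harbor":     [0.9, 0.1],
    "open_ocean": [0.3, 0.7],
}

def evaluate(inputs, context):
    """Set the (single-layer) network's weights for the current context,
    then apply them to the inputs."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(w * x for w, x in zip(weights, inputs))

# The same inputs are interpreted differently depending on context.
readings = [5.0, 1.0]                             # e.g. depth plus one other feature
in_harbor = evaluate(readings, "harbor")          # approximately 4.6
in_open_ocean = evaluate(readings, "open_ocean")  # approximately 2.2
```

The point of the design is that the network itself stays small; all context handling lives in the table lookup, so the same input vector yields different outputs in different contexts.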
Further, Arritt notes at page 2 how a system lacking context can output inaccurate results: “A problem arises though, in real-world situations where linguistic values are highly context-dependent. For example, an underwater agent may convert a depth of 5 meters into TOO DEEP while in a harbor, but later, when it finds itself in the open ocean, 5 meters may be classified as NOMINAL.” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Arritt to allow an optimized system to output more accurate results. Regarding dependent claim 12, the rejection of claim 8 is incorporated herein. Additionally, EP ‘784, WO ‘935 and CN ‘949 in the combination as a whole fail to explicitly disclose wherein the means for combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: means for combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks. However, Arritt discloses wherein the means for combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: means for combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks (abstract, “This paper presents a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights. Instead of having a very large network that itself must take context into account, our approach uses one or more small networks whose weights are associated with symbolic representations of contexts an agent may encounter. When the context-management system determines what the current context is, it sets the networks’ weights appropriately for the context.”). 
As noted above, EP ‘784, WO ‘935 and CN ‘949 are directed toward the same field of endeavor of processing PCCT images. Further, EP ‘784 is directed toward “The coronary microvascular dysfunction prediction unit is configured to use a statistical model to determine an indicator for a presence of coronary microvascular dysfunction based on the received coronary computed tomography angiography data, the corresponding scan protocol, and patient-related data including the generated cardiac segmentation map and the extracted perfusion data. The output unit is configured to output the indicator for a presence of coronary microvascular dysfunction (abstract)” and “The statistical model may be a machine learning model used to find the function e.g. using the supervised-learning methodology. Examples of the machine learning models may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines (paragraph 0007).” Arritt is directed toward “a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights (abstract).” As would be readily apparent to one of ordinary skill in the art, EP ‘784, WO ‘935, CN ‘949 and Arritt are directed toward the same field of endeavor of neural network use. Further, EP ‘784 allows for optimization of the model at paragraph 0067, “Optimization can be performed using standard techniques including but not limited to the stochastic gradient decent algorithm, among others.” As noted in Arritt, the use of context-sensitive weights allows for smaller networks (abstract), which reduces system cost (larger networks require more powerful computers). Further, Arritt notes at page 2 how a system lacking context can output inaccurate results: “A problem arises though, in real-world situations where linguistic values are highly context-dependent. 
For example, an underwater agent may convert a depth of 5 meters into TOO DEEP while in a harbor, but later, when it finds itself in the open ocean, 5 meters may be classified as NOMINAL.” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Arritt to allow an optimized system to output more accurate results. Regarding dependent claim 19, the rejection of claim 15 is incorporated herein. Additionally, EP ‘784, WO ‘935 and CN ‘949 in the combination as a whole fail to explicitly disclose wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks. However, Arritt discloses wherein combining results of the medical imaging analysis sub-tasks to generate results of a medical imaging analysis task comprises: combining the results of the medical imaging analysis sub-tasks based on a context based weighting of the results of the medical imaging analysis sub-tasks (abstract, “This paper presents a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights. Instead of having a very large network that itself must take context into account, our approach uses one or more small networks whose weights are associated with symbolic representations of contexts an agent may encounter. When the context-management system determines what the current context is, it sets the networks’ weights appropriately for the context.”). As noted above, EP ‘784, WO ‘935 and CN ‘949 are directed toward the same field of endeavor of processing PCCT images. 
Further, EP ‘784 is directed toward “The coronary microvascular dysfunction prediction unit is configured to use a statistical model to determine an indicator for a presence of coronary microvascular dysfunction based on the received coronary computed tomography angiography data, the corresponding scan protocol, and patient-related data including the generated cardiac segmentation map and the extracted perfusion data. The output unit is configured to output the indicator for a presence of coronary microvascular dysfunction (abstract)” and “The statistical model may be a machine learning model used to find the function e.g. using the supervised-learning methodology. Examples of the machine learning models may include, but not limited to, deep neural networks, regression forests, random forests, and/or support vector machines (paragraph 0007).” Arritt is directed toward “a technique for making neural networks context-sensitive by using a symbolic context-management system to manage their weights (abstract).” As would be readily apparent to one of ordinary skill in the art, EP ‘784, WO ‘935, CN ‘949 and Arritt are directed toward the same field of endeavor of neural network use. Further, EP ‘784 allows for optimization of the model at paragraph 0067, “Optimization can be performed using standard techniques including but not limited to the stochastic gradient decent algorithm, among others.” As noted in Arritt, the use of context-sensitive weights allows for smaller networks (abstract), which reduces system cost (larger networks require more powerful computers). Further, Arritt notes at page 2 how a system lacking context can output inaccurate results: “A problem arises though, in real-world situations where linguistic values are highly context-dependent. 
For example, an underwater agent may convert a depth of 5 meters into TOO DEEP while in a harbor, but later, when it finds itself in the open ocean, 5 meters may be classified as NOMINAL.” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Arritt to allow an optimized system to output more accurate results.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. Publication No. 2020/0074629 to Zur discloses, “The system applies preprocessing tools to clean the received images and then applies in parallel a plurality of detectors both conventional detectors and models of supervised machine learning-based detectors (abstract).”

U.S. Patent No. 11,556,784 to Luo et al. discloses, “Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs (abstract).”

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson, whose telephone number is (571) 272-3956. The examiner can normally be reached Monday - Friday 8:00 - 4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.

To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COURTNEY JOAN NELSON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Jul 03, 2023
Application Filed
Jun 17, 2025
Non-Final Rejection — §103
Oct 07, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103
Dec 03, 2025
Response after Non-Final Action
Dec 14, 2025
Request for Continued Examination
Jan 13, 2026
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant Granted Mar 31, 2026
Patent 12592016
MATERIAL-SPECIFIC ATTENUATION MAPS FOR COMBINED IMAGING SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
96%
With Interview (+9.4%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
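The panel's headline figures follow from the stated counts; the sketch below reproduces them. Treating the +9.4-point interview lift as a simple additive adjustment to the career allow rate is an assumption about how the dashboard arrives at the 96% figure.

```python
# Reproduce the dashboard's headline numbers from the stated counts:
# 217 granted out of 252 resolved cases, +9.4 pt lift with an interview.
granted, resolved = 217, 252

career_allow_rate = granted / resolved        # ~0.861, shown as 86%
with_interview = career_allow_rate + 0.094    # assumed additive lift

print(round(career_allow_rate * 100))  # grant probability, percent
print(round(with_interview * 100))     # with-interview probability, percent
```

These round to the 86% and 96% shown in the panel, consistent with "Grant probability derived from career allow rate."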
