Prosecution Insights
Last updated: April 19, 2026
Application No. 18/776,651

DETECTION OF ARTIFACTS IN SYNTHETIC IMAGES

Office Action: Non-Final (Round 1), rejections under §101 and §103
Filed: Jul 18, 2024
Examiner: VU, KHOA
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: BAYER AKTIENGESELLSCHAFT

Outlook: Favorable
Grant probability: 68% (84% with an examiner interview)
Expected OA rounds: 1-2
Expected time to grant: 3 years 1 month
Examiner Intelligence

Career allowance rate: 68% (234 granted of 345 resolved), +5.8% vs the Tech Center average
Interview lift: +15.8% allowance rate among resolved cases with an interview (a strong lift)
Typical timeline: 3 years 1 month average prosecution; 27 applications currently pending
Career history: 372 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 345 resolved cases.
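Note: assuming the simple metric definitions implied by the labels above (the dashboard's exact methodology is not disclosed here), the headline figures can be reproduced with a few lines of arithmetic. The snippet below is illustrative only; every variable name is ours, not the dashboard's.

    # Illustrative arithmetic; the metric definitions are assumptions,
    # not taken from the dashboard's (undisclosed) methodology.
    granted, resolved = 234, 345
    allow_rate = granted / resolved          # 0.678 -> the "68%" career allowance rate
    tc_avg = allow_rate - 0.058              # "+5.8% vs TC avg" implies a ~62% TC average

    base_grant, with_interview = 0.68, 0.84  # predicted grant probabilities
    lift = with_interview - base_grant       # 0.16, consistent with the +15.8% career lift
    print(f"allow rate {allow_rate:.1%}, implied TC avg {tc_avg:.1%}, lift {lift:+.0%}")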

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 16 recites a computer-readable storage medium. The broadest reasonable interpretation of a claim drawn to a computer-readable storage medium (also called a machine-readable medium, among other variations) typically covers both forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer-readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter.

The USPTO recognizes that applicants may have claims directed to computer-readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. 101 as covering both non-statutory and statutory subject matter. A claim drawn to such a computer-readable storage medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se.

Applicant's specification in paragraph [0002] recites "The term 'computer-readable storage medium' may include a computer program, but is not limited to, memory, portable or fixed storage devices, optical storage devices, wireless channels, and various other mediums capable of storing, containing or carrying instruction(s) and/or data." Since Applicant's disclosure does not limit the definition of "a machine-readable medium", it could be a signal. As an additional note, a non-transitory computer-readable medium having executable programming instructions stored thereon is considered statutory, as non-transitory computer-readable media exclude transitory data signals.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Nett et al. (U.S. 2024/0296527 A1) in view of Hawkins-Daarud et al. (U.S. 2022/0148731 A1).
Regarding Claim 1, Nett discloses a computer-implemented method (Nett, [0004]: "a method for creating synthetic computed tomography (CT) images for training") comprising:

providing a trained machine-learning model (MLMt) (Nett, [0024]: "Once the model is trained, the model may be a neural network model, such as a convolutional neural network (CNN), the model could be a different type of artificial intelligence (AI), machine learning (ML)". Nett teaches providing a trained machine-learning model such as a convolutional neural network (CNN));

wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements (Nett, [0002]: "a convolutional neural network (CNN) may be trained to reduce an amount of the noise in the CT images. The CNN may be trained on image pairs including a first, noisy input image, and a second, noise-free target (ground truth) image" and [0005]: "The trained model may then be used to reduce noise in real CT images. The trained model may more accurately distinguish noise from the anatomical features of the synthetic CT images than real images". Nett teaches that the machine-learning model has been trained on the basis of training data: a CNN is trained on image pairs, each including a first, noisy input image (the reference region in the first state) and a second, noise-free target (ground truth) image (the reference region in the second state), wherein each image comprises a plurality of image elements, e.g., noise and anatomical features of the synthetic CT images);

wherein the at least one input reference image (RI1(xi)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object (Nett, [0005]: "the reference images may be images acquired using magnetic resonance (MR) imaging, multi-energy CT imaging, positron emission tomography (PET) imaging, single photon computed tomography (SPECT) imaging". Nett teaches an input reference image acquired using computed tomography (e.g., multi-energy CT or SPECT) or magnetic resonance (MR) imaging);

wherein the target reference image (RI2(yi)) is a computed tomography or magnetic resonance image of the reference region of the reference object (Nett, [0107]: "receive a plurality of synthetic computed tomography (CT) images, each synthetic CT image generated by performing a tissue segmentation of a magnetic resonance (MR) scan of a subject, for each synthetic CT image of the plurality of synthetic CT images, create a respective plurality of image pairs, each image pair including a synthetic CT image as a target, ground truth image". Nett teaches a target reference image that is a computed tomography or magnetic resonance image, i.e., a synthetic CT image generated from an MR scan);

wherein the machine-learning model (MLMt) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI1(xi)) a synthetic reference image (RI2*(y^i)) (Nett, [0004]: "performing a tissue segmentation of reference images of an anatomical region of a subject to determine a set of different tissue types of the reference images; and generating synthetic CT images of the reference images by assigning CT image values of the synthetic CT images based on the different tissue types. The model may be trained on the synthetic CT images, the model is a neural network model". Nett teaches that the machine-learning model (a neural network model) has been trained to generate, for each reference object, a synthetic reference image on the basis of the reference image);

wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) respectively corresponds to an image element of the target reference image (RI2(yi)) (Nett, [0042]: "FIG. 2B, the input image and the target image may be synthetic CT images generated from higher resolution reference images, the input image is a version of the target image with an amount of noise added to it. During training, noise reduction neural network 202 may learn to distinguish the added noise from anatomical features of the target image". Nett teaches that the image includes image elements (the added noise) corresponding to image elements of the target reference image (the anatomical features of the target image));

wherein the machine-learning model (MLMt) has been trained to predict for each image element of the synthetic reference image (RI2*(y^i)) a color value (y^i) (Nett, [0104]: "generating synthetic CT images based on using machine learning models, the synthetic CT images may be created based on a tissue segmentation process that may be manual, automated or partially manual and partially automated, and that does not rely on a fully trained prediction model"; [0106]: "generating synthetic CT images of the reference images by assigning CT image values of the synthetic CT images based on the different tissue types"; and [0059]: "to generate a first set of synthetic CT data. In various embodiments, the image values may be voxel intensity values between 0.0 and 1.0, where an image value of 1.0 corresponds to a brightest (e.g., white) voxel, and an image value of 0.0 corresponds to a darkest (e.g., black) voxel". Nett teaches that the machine-learning model has been trained to predict, for each image element (voxel) of the synthetic reference image, a color value (e.g., brightest white = 1.0, darkest = 0.0));

wherein the training comprises minimization of a loss function (L) (Nett, [0069]: "The noise reduction neural network may be configured to iteratively adjust one or more of the plurality of weights of the noise reduction neural network in order to minimize a loss function, based on an assessment of differences between the input image and the target image comprised by each image pair of the training image pairs". Nett teaches that the training includes minimization of a loss function);

receiving at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) represents the examination region of the examination object in the first state (Nett, [0081]: "FIG. 6A shows a first MR image 600 of a brain of a subject acquired with a first pulse sequence, first anatomical structure 602 may comprise bone tissue, and second anatomical structure 604 may comprise soft brain tissue" and [0102]: "FIG. 8A shows an input image 800, where input image 800 is a CT image acquired from a subject during an examination. Specifically, input image 800 is a CT image of a brain of the subject, which includes noise". Nett teaches receiving an input image of an examination region of an examination object (e.g., bone tissue and soft brain tissue, Fig. 6A) that represents the examination region in the first state (a CT image of the subject's brain, which includes noise));

wherein the at least one input image (I1(xi)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object (Nett, [0107]: "receive a plurality of synthetic computed tomography (CT) images, each synthetic CT image generated by performing a tissue segmentation of a magnetic resonance (MR) scan of a subject". Nett teaches an input image comprising a computed tomography (CT) or magnetic resonance image of the examination region);

feeding the at least one input image (I1(xi)) to the trained machine-learning model (MLMt) (Nett, [0104]: "approaches to generating synthetic CT images based on using machine learning models, the synthetic CT images described herein may be created based on a tissue segmentation process". Nett teaches feeding the input to the trained machine-learning model);

receiving a synthetic image (I2*(y^i)) from the trained machine-learning model, wherein the synthetic image (I2*(y^i)) represents the examination region of the examination object in the second state (Nett, [0102]: "FIG. 8A shows an input image 800, where input image 800 is a CT image acquired from a subject during an examination. Specifically, input image 800 is a CT image of a brain of the subject, which includes noise. FIG. 8B shows a first output image 802. First output image 802 is outputted by a first trained noise reduction neural network based on input image 800 (e.g., where input image 800 is inputted into the first trained noise reduction neural network to generate output image 802)". Nett teaches inputting the noisy brain image (image 800) into a first trained noise reduction neural network, which generates a noise-reduced output image 802 representing the second state).

However, Nett does not explicitly teach: an uncertainty value (σ^(xi)) for the predicted color value (y^i); wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; receiving an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value.

Hawkins-Daarud teaches an uncertainty value (σ^(xi)) for the predicted color value (y^i) (Hawkins-Daarud, [0007]: "A trained machine learning model is accessed with a computer system, to generate genetic prediction data, and corresponding predictive uncertainty data from medical image data" and [0014], FIG. 5: "The sample predictions with lowest uncertainty (p<0.5) (blue curve, n=72) achieved the highest performance (AUC=0.86) compared to the entire grouped cohort irrespective of uncertainty (AUC=0.83) (black curve, n=95)". Hawkins-Daarud teaches an uncertainty value (lowest uncertainty, p<0.5) for the predicted value (blue curve, n=72));

and wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters (Hawkins-Daarud, [0086]: "FIG. 2, the machine learning model can be trained by optimizing model parameters based on minimizing a loss function, the loss function may be a mean squared error loss function"; [0118]: "The KGL model to generate a predictive distribution of the TCD for each biopsy sample of the particular patient. The predictive means of all the biopsy samples were compared with the true TCDs to compute the mean absolute prediction error ('MAPE')"; [0120]: "FIG. 8 shows the predictive TCD maps from two patients as examples. Colors represent the predictive means of the TCD from 0% (darkest blue) to 100% (darkest red)"; and [0014], FIG. 5, quoted above. Hawkins-Daarud teaches a loss function that includes the predicted color value (from 0%, darkest blue, to 100%, darkest red) for image elements of the target reference image, and the predicted uncertainty value (σ^(xi)) as a parameter (lowest uncertainty, p<0.5, blue curve, n=72));

receiving an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)) (Hawkins-Daarud, [0014], FIG. 5, quoted above. Hawkins-Daarud teaches receiving an uncertainty value (σ^(xi)) as a parameter (lowest uncertainty, p<0.5, blue curve, n=72));

determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value (Hawkins-Daarud, [0006]: "While predictive accuracy remains the most important measure of model performance, usually through probabilistic approaches—to enhance the credibility of model outputs and to facilitate subsequent decision-making. Past studies have focused on accuracy (e.g., sensitivity/specificity) and covariance (e.g., standard error, 95% confidence intervals) in group analyses". Hawkins-Daarud teaches determining a confidence value (95%) on the basis of the received uncertainty value (standard error) and outputting the confidence value as part of the credibility of the model outputs).

Nett and Hawkins-Daarud are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Nett to include an uncertainty value (σ^(xi)) for the predicted color value, as taught by Hawkins-Daarud, because Hawkins-Daarud provides an uncertainty value (lowest uncertainty, p<0.5) for the predicted value (blue curve, n=72) (Hawkins-Daarud, [0007], [0014]). Doing so may help quantify these uncertainties, which in turn helps to understand the conditions of optimal model performance (Hawkins-Daarud, [0006]).

Regarding Claim 6, the method according to claim 1: Nett does not explicitly teach wherein the uncertainty value (σ^(xi)) of the predicted color value (y^i) of each image element of the synthetic image (I2*(y^i)), or a value derived therefrom, is set as the confidence value of the image element. However, Hawkins-Daarud teaches the uncertainty value (σ^(xi)) of the predicted color value (y^i) of each image element of the synthetic image (I2*(y^i)) (Hawkins-Daarud, [0014], FIG. 5: "The sample predictions with lowest uncertainty (p<0.5) (blue curve, n=72) achieved the highest performance (AUC=0.86) compared to the entire grouped cohort irrespective of uncertainty (AUC=0.83) (black curve, n=95)". Hawkins-Daarud teaches an uncertainty value (lowest uncertainty, p<0.5) for the predicted value (blue curve, n=72)). Nett and Hawkins-Daarud are combinable; see the rationale in claim 1.

Regarding Claim 15, the combination of Nett and Hawkins-Daarud discloses a computer system (Nett, [0028]: "an image processing system") comprising: a receiving unit (Nett, [0043]: "The noise generator 214 may receive the synthetic CT images". Nett teaches a receiving unit (a noise generator)); a control and calculation unit (Nett, [0004]: "computed tomography (CT) images"); and an output unit (Nett, [0064]: "the noise generator may output a version of the synthetic CT image". Nett teaches an output unit (a noise generator)); wherein the control and calculation unit is configured to: provide a trained machine-learning model (MLMt) (Nett, [0024]: "a convolutional neural network (CNN), the model could be a different type of artificial intelligence (AI), machine learning (ML)". Nett teaches a trained machine-learning model (a convolutional neural network (CNN))); wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements, wherein the at least one input reference image (RI1(xi)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object, wherein the target reference image (RI2(yi)) is a computed tomography or magnetic resonance image of the reference region of the reference object, wherein the machine-learning model (MLMt) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI1(xi)) a synthetic reference image (RI2*(y^i)), wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) respectively corresponds to an image element of the target reference image (RI2(yi)), wherein the machine-learning model (MLMt) has been trained to predict for each image element of the synthetic reference image (RI2*(y^i)) a color value (y^i) and an uncertainty value (σ^(xi)) for the predicted color value (y^i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; cause the receiving unit to receive at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) represents an examination region of an examination object in the first state, wherein the at least one input image (I1(xi)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object; feed the at least one input image (I1(xi)) to a trained machine-learning model (MLMt); receive from the trained machine-learning model (MLMt) a synthetic image (I2*(y^i)), wherein the synthetic image (I2*(y^i)) represents the examination region of the examination object in the second state; receive from the trained machine-learning model (MLMt) an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)); determine at least one confidence value on the basis of the received uncertainty values; and cause the output unit to output the at least one confidence value, store it in a data memory and transmit it to a separate computer system. Claim 15 is substantially similar to claim 1 and is rejected based on similar analyses.

Regarding Claim 16, Nett discloses a computer-readable storage medium (Nett, [0033]: "Non-transitory memory") comprising a computer program which, when loaded into a working memory of a computer system, causes the computer system to execute: providing a trained machine-learning model (MLMt); wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements, wherein the at least one input reference image (RI1(xi)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object, wherein the target reference image (RI2(yi)) is a computed tomography or magnetic resonance image of the reference region of the reference object, wherein the machine-learning model (MLMt) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI1(xi)) a synthetic reference image (RI2*(y^i)), wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) respectively corresponds to an image element of the target reference image (RI2(yi)), wherein the machine-learning model (MLMt) has been trained to predict for each image element of the synthetic reference image (RI2*(y^i)) a color value (y^i) and an uncertainty value (σ^(xi)) for the predicted color value (y^i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; receiving at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) represents the examination region of the examination object in the first state, wherein the at least one input image (I1(xi)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object; feeding the at least one input image (I1(xi)) to the trained machine-learning model (MLMt); receiving a synthetic image (I2*(y^i)) from the trained machine-learning model, wherein the synthetic image (I2*(y^i)) represents the examination region of the examination object in the second state; receiving an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value. Claim 16 is substantially similar to claim 1 and is rejected based on similar analyses.

Claims 2, 3, 4, 5, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nett et al. (U.S. 2024/0296527 A1) in view of Hawkins-Daarud et al. (U.S. 2022/0148731 A1) and further in view of Yoshiara et al. (U.S. 2010/0094133 A1).
Regarding Claim 2, the combination of Nett and Hawkins-Daarud discloses the method according to claim 1 and a contrast agent (Nett, [0036]: "CT imaging device configured to image a subject such as a patient, and contrast agents present within the body". Nett teaches a CT imaging device configured to image a patient, including contrast agents present within the body). However, the combination of Nett and Hawkins-Daarud does not explicitly teach wherein the first state is a state before or after administration of a contrast agent and the second state is a state after administration of the contrast agent.

Yoshiara teaches that the first state is a state before or after administration of a contrast agent and the second state is a state after administration of the contrast agent (Yoshiara, [0058]: "When the subject P is imaged with ultrasound waves after administration of an ultrasound contrast agent thereto and plural frames of cross-sectional image data are generated by the image generator 24" and [0067]: "FIG. 2 shows an image on which the color coding process has been executed, the reference time as time t=0, the color coding part 40C converts the hue of pixels where the arrival time is included in a first time interval (t=0 or less) into red, converts the hue of pixels where the arrival time is included in a second time interval (t=0 to t1) into green. Consequently, a region A in which the region of interest is set (for example, the kidney) is displayed with a reference hue (red), and other regions (such as the liver) are displayed with hues corresponding to the relative times of arrival of the ultrasound contrast agent. For example, in the cross-sectional image 200, a region B different from the region A is displayed in green". Yoshiara teaches a first state at time t=0 after administration of a contrast agent (region A, the kidney, displayed in red, Fig. 2) and a second state at time t=0 to t1 after administration of the contrast agent (region B, the liver, different from region A and displayed in green)).

Nett, Hawkins-Daarud and Yoshiara are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Nett and Hawkins-Daarud to incorporate the states of contrast-agent administration taught by Yoshiara, because Yoshiara provides a first state at time t=0 after administration of a contrast agent (region A, the kidney, in red, Fig. 2) and a second state at time t=0 to t1 after administration of the contrast agent (region B, the liver, in green) (Yoshiara, Fig. 2, [0058], [0067]). Doing so may allow the operator to easily grasp the differences in relative inflow times of the ultrasound contrast agent in plural regions (Yoshiara, [0018]).

Regarding Claim 3, the method according to claim 1: the combination of Nett and Hawkins-Daarud does not explicitly teach wherein the first state and the second state each indicate an amount of a contrast agent which has been administered to the reference object and the examination object. However, Yoshiara teaches this limitation (Yoshiara, [0019]: "a predetermined site of a subject to which a contrast agent has been administered; a contrast agent inflow detector configured to detect the inflow of said contrast agent into said each region based on the signal intensity in said each region" and [0067]: "FIG. 2 shows an image on which the color coding process has been executed, the color coding part 40C converts the hue of pixels where the arrival time is included in a first time interval (t=0 or less) into red, in a second time interval (t=0 to t1) into green. Consequently, a region A in which the region of interest is set (for example, the kidney) is displayed with a reference hue (red), a region B different from the region A is displayed in green". Yoshiara teaches that the first state (t=0) and the second state (t=0 to t1) each indicate an amount of a contrast agent administered to the reference object and the examination object, e.g., region A (the kidney) is red and region B (the liver) is green, Fig. 2). Nett, Hawkins-Daarud and Yoshiara are combinable; see the rationale in claim 2.

Regarding Claim 4, the method according to claim 1: the combination of Nett and Hawkins-Daarud does not explicitly teach wherein the at least one input image (I1(xi)) comprises a first computed tomography or magnetic resonance image and a second computed tomography or magnetic resonance image, wherein the first computed tomography or magnetic resonance image represents the examination region of the examination object without a contrast agent or after administration of a first amount of the contrast agent and the second computed tomography or magnetic resonance image represents the examination region of the examination object after administration of a second amount of the contrast agent, and wherein the synthetic image (I2*(y^i)) is a synthetic computed tomography or magnetic resonance image, wherein the synthetic image (I2*(y^i)) represents the examination region of the examination object after administration of a third amount of the contrast agent, wherein the second amount is different from the first amount and the third amount is different from the first amount and the second amount.

However, Yoshiara teaches these limitations (Yoshiara, [0004]: "a system for an ultrasound diagnosis is an X-ray Computed Tomography (CT) apparatus, and a Magnetic Resonance Imaging (MRI) apparatus"; [0035]: "An ultrasound imaging apparatus 10 includes an ultrasound probe 12, an apparatus main body 11, an input device 13"; [0058]: "When the subject P is imaged with ultrasound waves after administration of an ultrasound contrast agent thereto and plural frames of cross-sectional image data are generated by the image generator 24"; and [0067]: "FIG. 2 shows an image on which the color coding process has been executed, the color coding part 40C converts the hue of pixels where the arrival time is included in a first time interval (t=0 or less) into red, converts the hue of pixels where the arrival time is included in a second time interval (t=0 to t1) into green, and converts the hue of pixels where the arrival time is included in a third time interval (t=t1 to t2) into blue. Consequently, a region A in which the region of interest is set (for example, the kidney) is displayed with a reference hue (red), and other regions (such as the liver) are displayed with hues corresponding to the relative times of arrival of the ultrasound contrast agent, in the cross-sectional image 200, a region B different from the region A is displayed in green, and a region C is displayed in blue". Yoshiara teaches a first computed tomography or magnetic resonance image representing the examination region of the examination object (the kidney, region A, Fig. 2) after administration of a first amount of the contrast agent (pixels converted to red), a second computed tomography or magnetic resonance image representing the examination region (the liver, region B) after administration of a second amount of the contrast agent (pixels converted to green), and a synthetic image representing the examination region after administration of a third amount of the contrast agent (region C, pixels converted to blue), wherein the third amount (blue) is different from the first amount (red) and the second amount (green)). Nett, Hawkins-Daarud and Yoshiara are combinable; see the rationale in claim 2.
Regarding Claim 5, the method according to claim 1: the combination of Nett and Hawkins-Daarud does not explicitly teach wherein the at least one input image (I1(xi)) comprises a first computed tomography or magnetic resonance image and a second computed tomography or magnetic resonance image, wherein the first computed tomography or magnetic resonance image represents the examination region of the examination object in a first period of time before or after administration of a contrast agent and the second computed tomography or magnetic resonance image represents the examination region of the examination object in a second period of time after administration of the contrast agent, and wherein the synthetic image (I2*(y^i)) is a synthetic computed tomography or magnetic resonance image, wherein the synthetic image (I2*(y^i)) represents the examination region of the examination object in a third period of time after administration of the contrast agent, wherein the second period of time follows the first period of time, and the third period of time follows the second period of time.

However, Yoshiara teaches these limitations (Yoshiara, [0004]: "a system for an ultrasound diagnosis is an X-ray Computed Tomography (CT) apparatus, and a Magnetic Resonance Imaging (MRI) apparatus" and [0067]: "FIG. 2 shows an image on which the color coding process has been executed, the color coding part 40C converts the hue of pixels where the arrival time is included in a first time interval (t=0 or less) into red, converts the hue of pixels where the arrival time is included in a second time interval (t=0 to t1) into green, and converts the hue of pixels where the arrival time is included in a third time interval (t=t1 to t2) into blue. Consequently, a region A in which the region of interest is set (for example, the kidney) is displayed with a reference hue (red), and other regions (such as the liver) are displayed with hues corresponding to the relative times of arrival of the ultrasound contrast agent, in the cross-sectional image 200, a region B different from the region A is displayed in green, and a region C is displayed in blue". Yoshiara teaches a first computed tomography or magnetic resonance image representing the examination region in a first period of time (the kidney, region A, first time interval t=0, Fig. 2) after administration of a contrast agent, a second computed tomography or magnetic resonance image representing the examination region in a second period of time (the liver, region B, second time interval t=0 to t1) after administration of the contrast agent, and a synthetic image (I2*(y^i)) representing the examination region in a third period of time (region C, third time interval t=t1 to t2) after administration of the contrast agent, wherein the second period of time (t=0 to t1) follows the first period of time (t=0 or less) and the third period of time (t=t1 to t2) follows the second period of time). Nett, Hawkins-Daarud and Yoshiara are combinable; see the rationale in claim 2.

Regarding Claim 17, the combination of Nett, Hawkins-Daarud and Yoshiara discloses a contrast agent (Nett, [0036]: "contrast agents") for use in a radiological examination method (Nett, [0101]: "method 500 includes the noise-reduced image may be displayed on the display screen in real time during an examination of the subject"), the method comprising: providing a trained machine-learning model (MLMt); wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements, wherein the at least one input reference image (RI1(xi)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object in the first state, wherein the target reference image (RI2(yi)) is a computed tomography or magnetic resonance image of the reference region of the reference object in the second state, and wherein the machine-learning model (MLMt) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI1(xi)) a synthetic reference image (RI2*(y^i)), wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) respectively corresponds to an image element of the target reference image (RI2(yi)), wherein the machine-learning model (MLMt) has been trained to predict for each image element of the synthetic reference image (RI2*(y^i)) a color value (y^i) and an uncertainty value (σ^(xi)) for the predicted color value (y^i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; receiving at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object in the first state; feeding the at least one input image (I1(xi)) to the trained machine-learning model (MLMt); receiving a synthetic image (I2*(y^i)) from the trained machine-learning model, wherein the synthetic image (I2*(y^i)) comprises a synthetic radiological image representing the examination region of the examination object in the second state; receiving an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)); determining at least one confidence value on the basis of the received uncertainty values; and outputting the at least one confidence value.

However, the combination of Nett and Hawkins-Daarud does not explicitly teach wherein the first state represents the reference region of the reference object in a first period of time before or after the administration of the contrast agent and the second state represents the reference region of the reference object in a second period of time after the administration of the contrast agent, and/or the first state represents the reference region of the reference object before or after the administration of a first amount of the contrast agent and the second state represents the reference region of the reference object after the administration of a second amount of the contrast agent. Yoshiara teaches these limitations (Yoshiara, [0004]: "a system for an ultrasound diagnosis is an X-ray Computed Tomography (CT) apparatus, and a Magnetic Resonance Imaging (MRI) apparatus" and [0067], quoted above in claim 5. Yoshiara teaches a first computed tomography or magnetic resonance image representing the examination region in a first period of time (the kidney, region A, first time interval t=0, Fig. 2) after administration of a contrast agent and a second computed tomography or magnetic resonance image representing the examination region in a second period of time (the liver, region B, second time interval t=0 to t1, Fig. 2) after administration of the contrast agent). Nett, Hawkins-Daarud and Yoshiara are combinable; see the rationale in claim 2. Claim 17 is substantially similar to claim 1 and is rejected based on similar analyses.

Regarding Claim 20, the combination of Nett, Hawkins-Daarud and Yoshiara discloses a kit (Nett, [0109]: "one object, e.g., a material". Nett teaches a kit (a material)) comprising a contrast agent (Nett, [0036]: "contrast agents") and a computer program which can be loaded into a working memory of a computer system (Nett, [0030]: "a processor 104 configured to execute machine readable instructions stored in non-transitory memory 106"), wherein the computer program causes the computer system to: provide a trained machine-learning model (MLMt), wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements, wherein the at least one input reference image (RI1(xi)) comprises at least one computed tomography or magnetic resonance image of the reference region of the reference object in the first state, wherein the target reference image (RI2(yi)) is a computed tomography or magnetic resonance image of the reference region of the reference object in the second state, and wherein the machine-learning model (MLMt) is configured and has been trained to generate for each reference object on the basis of the at least one input reference image (RI1(xi)) a synthetic reference image (RI2*(y^i)), wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) respectively corresponds to an image element of the target reference image (RI2(yi)), wherein the machine-learning model (MLMt) has been trained to predict for each image element of the synthetic reference image (RI2*(y^i)) a color value (y^i) and an uncertainty value (σ^(xi)) for the predicted color value (y^i), and wherein the training comprises minimization of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; receive at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) comprises at least one computed tomography or magnetic resonance image of the examination region of the examination object in the first state; feed the at least one input image (I1(xi)) to the trained machine-learning model (MLMt); receive a synthetic image (I2*(y^i)) from the trained machine-learning model, wherein the synthetic image (I2*(y^i)) comprises a synthetic radiological image representing the examination region of the examination object in the second state; receive an uncertainty value (σ^(xi)) for each image element of the synthetic image (I2*(y^i)); determine at least one confidence value on the basis of the received uncertainty values; and output the at least one confidence value. Claim 20 is substantially similar to claim 17 and is rejected based on similar analyses.

Allowable Subject Matter

Dependent claims 7-14, 18 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter.

Regarding independent claims 1, 15, 16, 17 and 20, the closest prior art references the examiner found, Nett et al. (U.S. 2024/0296527 A1) in view of Hawkins-Daarud et al. (U.S. 2022/0148731 A1) and further in view of Yoshiara et al. (U.S. 2010/0094133 A1), have been made of record as teaching: providing a trained machine-learning model (MLMt) (Nett, [0024]); wherein the trained machine-learning model (MLMt) has been trained on the basis of training data (TD), wherein the training data (TD) comprise for each reference object of a plurality of reference objects (i) at least one input reference image (RI1(xi)) of a reference region of the reference object in a first state and (ii) a target reference image (RI2(yi)) of the reference region of the reference object in a second state, wherein the at least one input reference image (RI1(xi)) and the target reference image (RI2(yi)) each comprise a plurality of image elements (Nett, [0002], [0005]); wherein the training comprises minimization of a loss function (L) (Nett, [0069]); receiving at least one input image (I1(xi)) of an examination region of an examination object, wherein the at least one input image (I1(xi)) represents the examination region of the examination object in the first state (Nett, [0081], [0105]); an uncertainty value (σ^(xi)) for the predicted color value (y^i) (Hawkins-Daarud, [0007]); wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation of the predicted color value (y^i) from a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters (Hawkins-Daarud, [0086], [0118]); and wherein the at least one input image (I1(xi)) comprises a first computed tomography or magnetic resonance image and a second computed tomography or magnetic resonance image, wherein the first computed tomography or magnetic resonance image represents the examination region of the examination object without a contrast agent or after administration of a first amount of the contrast agent and the second computed tomography or magnetic resonance image represents the examination region of the examination object after administration of a second amount of the contrast agent (Yoshiara, [0004], [0058]), as recited in claims 1, 15, 16, 17 and 20.
However, the art of record does not teach or suggest the claims taken as a whole, and in particular the following limitations:

"generating a confidence representation, wherein the confidence representation comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region, wherein each image element of the plurality of image elements respectively corresponds to an image element of the synthetic image (I2*(y^i)), wherein each image element of the plurality of image elements has a color value, wherein the color value correlates with the respective uncertainty value (σ^(xi)) of the predicted color value (y^i) of the corresponding image element of the synthetic image (I2*(y^i)); and outputting the confidence representation superimposed on the synthetic image (I2*(y^i))", as recited in claim 7.

"wherein the at least one confidence value is a confidence value for the entire synthetic image (I2*(y^i)), wherein the confidence value is a mean or a maximum value or a minimum value that is formed on the basis of all uncertainty values (σ^(xi)) of all image elements of the synthetic image (I2*(y^i))", as recited in claim 8.

"determining a confidence value for one or more sub-regions of the synthetic image (I2*(y^i)); and outputting the confidence value for the one or more sub-regions of the synthetic image (I2*(y^i))", as recited in claim 9.

"wherein different methods for calculating the confidence value are used for different sub-regions of the examination region of the examination object", as recited in claim 10.

"wherein the trained machine-learning model (MLMt) is configured and has been trained to increase, in the event of an increase in the deviation of the predicted color value (y^i) of the synthetic reference image (RI2*(y^i)) from the color value (yi) of the corresponding image element of the target reference image (RI2(yi)), the uncertainty value (σ^(xi)) of the predicted color value (y^i) in order to minimize the loss function", as recited in claim 11.

"wherein an increase in the deviation of the predicted color value (y^i) of the synthetic reference image (RI2*(y^i)) from the color value (yi) of the corresponding image element of the target reference image (RI2(yi)) leads to an increase in a loss calculated by means of the loss function (L), wherein an increase in the uncertainty value (σ^(xi)) leads to a decrease in the loss calculated by means of the loss function (L)", as recited in claim 12.

"wherein the loss function (L) comprises the following equation (1):

L = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{1}{2\hat{\sigma}(x_i)^2} \left( y_i - \hat{y}_i \right)^2 + \frac{1}{2} \log \hat{\sigma}(x_i)^2 \right] \qquad (1)

wherein N is the number of image elements of the synthetic reference image (RI2*(y^i)), y^i is the predicted color value of the image element i of the synthetic reference image (RI2*(y^i)), σ^(xi) is the uncertainty value of the predicted color value of the image element i, and yi is the color value of the corresponding image element of the target reference image (RI2(yi))", as recited in claim 13.
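Note: equation (1) is the standard per-pixel Gaussian negative log-likelihood used for heteroscedastic uncertainty estimation (the form popularized by Kendall and Gal). The sketch below is an illustrative reading of claims 8 and 13 only, not the applicant's implementation; all function and variable names are hypothetical, and sigma2 stands for the predicted variance σ^(xi)².

    import numpy as np

    def gaussian_nll_loss(y_true, y_pred, sigma2):
        """Equation (1): per-pixel Gaussian negative log-likelihood.

        y_true : color values of the target reference image (y_i)
        y_pred : predicted color values of the synthetic image (y^_i)
        sigma2 : predicted uncertainty values sigma^(x_i)^2, one per pixel
        """
        n = y_true.size
        return (1.0 / n) * np.sum(
            0.5 / sigma2 * (y_true - y_pred) ** 2 + 0.5 * np.log(sigma2)
        )

    def confidence_value(sigma2, mode="mean"):
        """Claim-8 style: one confidence value for the whole synthetic image,
        formed as a mean, maximum or minimum over all uncertainty values."""
        agg = {"mean": np.mean, "max": np.max, "min": np.min}[mode]
        return agg(sigma2)

    # Toy example: a 2x2 "image".
    y_true = np.array([[0.2, 0.4], [0.6, 0.8]])
    y_pred = np.array([[0.25, 0.35], [0.65, 0.75]])
    sigma2 = np.array([[0.01, 0.02], [0.01, 0.05]])
    print(gaussian_nll_loss(y_true, y_pred, sigma2))
    print(confidence_value(sigma2, "mean"))

Note also how this loss realizes claims 11 and 12: a larger squared deviation raises the loss, while raising σ² discounts that deviation at the cost of the log-variance penalty, so the model learns to report high uncertainty exactly where its color predictions are poor.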
"wherein the training of a machine-learning model (MLM) comprises: receiving the training data (TD); wherein the training data (TD) comprise for each reference object of the plurality of reference objects (i) the at least one input reference image (RI1(xi)) of the reference region of the reference object in the first state and (ii) the target reference image (RI2(yi)) of the reference region of the reference object in the second state, wherein the second state is different from the first state, wherein the at least one input reference image (RI1(xi)) comprises a plurality of image elements, wherein each image element of the at least one input reference image (RI1(xi)) represents a sub-region of the reference region, wherein each image element of the at least one input reference image (RI1(xi)) is characterized by a color value (xi), and wherein the target reference image (RI2(yi)) comprises a plurality of image elements, wherein each image element of the target reference image (RI2(yi)) represents a sub-region of the reference region, wherein each image element of the target reference image (RI2(yi)) is characterized by a color value (yi); providing the machine-learning model (MLM); wherein the machine-learning model (MLM) is configured to generate, on the basis of the at least one input reference image (RI1(xi)) of the reference region of a reference object and model parameters (MP), a synthetic reference image (RI2*(y^i)) of the reference region of the reference object, wherein the synthetic reference image (RI2*(y^i)) comprises a plurality of image elements, wherein each image element of the synthetic reference image (RI2*(y^i)) corresponds to an image element of the target reference image (RI2(yi)), wherein each image element of the synthetic reference image (RI2*(y^i)) is assigned a predicted color value (y^i), wherein the machine-learning model (MLM) is configured to predict for each predicted color value (y^i) an uncertainty value (σ^(xi)); training the machine-learning model (MLM), wherein the training for each reference object of the plurality of reference objects comprises: inputting the at least one input reference image (RI1(xi)) into the machine-learning model (MLM); receiving the synthetic reference image (RI2*(y^i)) from the machine-learning model (MLM); receiving an uncertainty value (σ^(xi)) for each predicted color value (y^i) of the synthetic reference image (RI2*(y^i)); calculating a loss by means of a loss function (L), wherein the loss function (L) comprises (i) the predicted color value (y^i) or a deviation between the predicted color value (y^i) and a color value (yi) of the corresponding image element of the target reference image (RI2(yi)) and (ii) the predicted uncertainty value (σ^(xi)) as parameters; and reducing the loss by modification of model parameters (MP); and outputting and storing the trained machine-learning model (MLMt) or transmitting the trained machine-learning model (MLMt) to a separate computer system; and using the trained machine-learning model (MLMt) to predict a synthetic image and to generate at least one confidence value for a synthetic image", as recited in claim 14.
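Note: the claim 14 training procedure quoted above (input the reference image, receive the synthetic image and per-element uncertainties, compute the loss, and reduce it by modifying the model parameters (MP)) maps onto an ordinary gradient-descent loop. The PyTorch sketch below is a minimal illustration under assumed shapes and a hypothetical two-headed network; it is not the applicant's code, and the log-variance parameterization is our own choice for numerical stability.

    import torch
    import torch.nn as nn

    class TwoHeadedCNN(nn.Module):
        """Hypothetical model: predicts a color value y^_i and a log-variance
        log sigma^(x_i)^2 for every image element of the synthetic image."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            )
            self.color_head = nn.Conv2d(16, 1, 1)    # predicted color values y^_i
            self.logvar_head = nn.Conv2d(16, 1, 1)   # log sigma^2 (stable form)

        def forward(self, x):
            h = self.backbone(x)
            return self.color_head(h), self.logvar_head(h)

    def loss_fn(y_pred, log_var, y_target):
        # Equation (1), with sigma^2 parameterized as exp(log_var).
        return torch.mean(0.5 * torch.exp(-log_var) * (y_target - y_pred) ** 2
                          + 0.5 * log_var)

    model = TwoHeadedCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy training pair standing in for (RI1(xi), RI2(yi)).
    ri1 = torch.rand(1, 1, 64, 64)   # input reference image, first state
    ri2 = torch.rand(1, 1, 64, 64)   # target reference image, second state

    for step in range(10):           # reducing the loss by modifying parameters (MP)
        opt.zero_grad()
        y_pred, log_var = model(ri1)
        loss = loss_fn(y_pred, log_var, ri2)
        loss.backward()
        opt.step()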
“wherein the contrast agent comprises one or more of the following compounds: gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid; gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid; gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate; dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-); tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate; gadolinium 2,2',2''-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; gadolinium 2,2',2''-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium (2S,2'S,2''S)-2,2',2''-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate); gadolinium 2,2',2''-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate; gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate; gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate; gadolinium(III) 2,2',2''-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; a Gd3+ complex of a compound of formula (I) [structural formula omitted], wherein: Ar is a group selected from [structural formulas omitted], wherein # is a linkage to X; X is a group selected from: CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#; wherein * is a linkage to Ar and # is a linkage to an acetic acid residue; R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R5 is a hydrogen atom; and R6 is a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof; a Gd3+ complex of a compound of formula (II) [structural formula omitted], wherein: Ar is a group selected from [structural formulas omitted], wherein # is a linkage to X; X is a group selected from: CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#; wherein * is a linkage to Ar and # is a linkage to an acetic acid residue; R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from: C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 are independently a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof”, as recited in claim 18.
“wherein the contrast agent comprises one or more of the following compounds: gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid; gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid; gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate; dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-); tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate; gadolinium 2,2',2''-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; gadolinium 2,2',2''-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium (2S,2'S,2''S)-2,2',2''-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate); gadolinium 2,2',2''-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate; gadolinium 2,2',2''-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate; gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate; gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate; gadolinium(III) 2,2',2''-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate; a Gd3+ complex of a compound of formula (I) [structural formula omitted], wherein: Ar is a group selected from [structural formulas omitted], wherein # is a linkage to X; X is a group selected from: CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#; wherein * is a linkage to Ar and # is a linkage to an acetic acid residue; R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R5 is a hydrogen atom; and R6 is a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof; a Gd3+ complex of a compound of formula (II) [structural formula omitted], wherein: Ar is a group selected from [structural formulas omitted], wherein # is a linkage to X; X is a group selected from: CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#; wherein * is a linkage to Ar and # is a linkage to an acetic acid residue; R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from: C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-; R9 and R10 are independently a hydrogen atom; or a stereoisomer, a tautomer, a hydrate, a solvate or a salt thereof, or a mixture thereof”, as recited in claim 19.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance”.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Cashman et al. (U.S. 2023/0281945 A1) and Regensburger et al. (U.S. 2024/0104740 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU, whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHOA VU/
Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Jul 18, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598266
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12597087
HIGH-PERFORMANCE AND LOW-LATENCY IMPLEMENTATION OF A WAVELET-BASED IMAGE COMPRESSION SCHEME
2y 5m to grant · Granted Apr 07, 2026
Patent 12578941
TECHNIQUE FOR INTER-PROCEDURAL MEMORY ADDRESS SPACE OPTIMIZATION IN GPU COMPUTING COMPILER
2y 5m to grant · Granted Mar 17, 2026
Patent 12567181
SYSTEMS AND METHODS FOR REAL-TIME PROCESSING OF MEDICAL IMAGING DATA UTILIZING AN EXTERNAL PROCESSING DEVICE
2y 5m to grant · Granted Mar 03, 2026
Patent 12548431
CONTEXTUALIZED AUGMENTED REALITY DISPLAY SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
84%
With Interview (+15.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
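The page does not disclose how the interview-adjusted figure is computed, but a simple additive lift over the examiner's career allow rate reproduces the displayed 84% (an assumption for illustration, not a documented methodology):

```python
base_grant_probability = 0.68  # examiner's career allow rate
interview_lift = 0.158         # lift observed in resolved cases with an interview
print(f"{base_grant_probability + interview_lift:.0%}")  # -> 84%
```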
