Prosecution Insights
Last updated: April 19, 2026
Application No. 18/732,938

DETECTION OF ARTIFACTS IN SYNTHETIC MEDICAL IMAGES

Status: Non-Final Office Action (§101, §103), OA Round 1
Filed: Jun 04, 2024
Examiner: LE, MICHAEL
Art Unit: 2614 (Tech Center 2600, Communications)
Assignee: BAYER AKTIENGESELLSCHAFT

Grant Probability: 66% (Favorable); 88% with examiner interview
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 66% (568 granted / 864 resolved), +3.7% vs Tech Center average (grants above average)
Interview Lift: +22.1% higher allowance among resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 61 applications currently pending
Career History: 925 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Deltas are measured against the Tech Center average estimate; based on career data from 864 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

2. The information disclosure statements (IDS) submitted on the following dates are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner: 09/24/2024.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claimed machine-readable medium includes signal media within its scope. Applicant is advised that this rejection may be overcome by amending the claims to recite that the computer-readable storage medium is non-transitory.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

5. Claims 1-2, 4-13 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ravishankar et al. ("Ravishankar") [US-2023/0135351-A1] in view of Li et al. ("Li") [US-2015/0023578-A1].

Regarding claim 1, Ravishankar discloses a computer-implemented method (Ravishankar- ¶0011, at least discloses systems and methods for quantifying uncertainty of segmentation masks produced by machine learning models) comprising: receiving at least one image (I1, I2) of an examination region of an examination object, wherein the at least one image (I1, I2) comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region (Ravishankar- Fig. 1 and ¶0012, at least disclose a medical imaging workflow includes measurement of an anatomical region of interest [image of an examination region of an examination object] based on positions of one or more calipers, wherein the positions of the calipers may be determined based on a segmentation mask [image elements] of the anatomical region of interest […] an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest [one image of an examination region of an examination object]; Figs. 1-2 and ¶0023, at least disclose Method 200 begins at operation 202, wherein the image processing system receives an image comprising a region of interest […] at least a portion of image processing system 100 is disposed at a device (e.g., edge device, server, etc.) communicably coupled to a medical imaging system via wired and/or wireless connections.); generating a plurality of different modifications (M11, M12, M13, M21, M22, M23) of the at least one received image (I1, I2) (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0024, at least discloses At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images [generating a plurality of different modifications (M11, M12, M13, M21, M22, M23) of the at least one received image] […] at operation 204, the image processing system determines a set of augmentations, applies the augmentations to the image received at operation 202 to generate an augmented image, and repeats this process N times to produce N distinctly augmented versions of the image received at operation 202, where N is a positive integer greater than 2 […] the number of augmented images, N, produced at operation 202 is between 4 and 50. As described in more detail with reference to FIG. 3, augmentations applied at operation 204 may include one or more of a translation, rotation, zoom adjustment, simulated scanner depth adjustment, a simulated scanner gain adjustment, or augmentations specific [modifications] to the modality of the image received at operation 202); generating a plurality of synthetic images (S1, S2, S3) of the examination region of the examination object on the basis of the modifications (M11, M12, M13, M21, M22, M23) by means of a generative model (GM) (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model [means of a generative model (GM)], and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [a plurality of synthetic images] corresponding to the plurality of augmented images), wherein each synthetic image (S1, S2, S3) comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region (Ravishankar- Fig. 4 and ¶0031, at least disclose lighter regions correspond to sub-regions [image elements] of input image 402 classified as belonging to the region of interest (e.g., sub-regions with segmentation labels indicating the sub-region is a region of interest). By showing the mean segmentation mask produced from the plurality of segmentation masks, a more accurate estimate of the position and extent of the region of interest may be produced, and conveyed to a user), wherein each image element is assigned at least one color value (Ravishankar- Fig. 4 and ¶0030-0031, at least disclose uncertainty map 408 indicates regions of greater uncertainty/variability of segmentation labels for the anatomical region of interest in lighter colored regions, with darker colored regions indicating less segmentation uncertainty/variability. In particular, uncertainty map 408 shows the boundary 414 of the anatomical region of interest as a ring of lighter color, indicating greater segmentation label variability […] lighter regions correspond to sub-regions [image elements] of input image 402 classified as belonging to the region of interest (e.g., sub-regions with segmentation labels indicating the sub-region is a region of interest)); determining a measure of dispersion of the color values of corresponding image elements of the generated synthetic images (S1, S2, S3) (Ravishankar- ¶0012, at least discloses The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map, showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks; ¶0028, at least discloses Portions of the region of interest which show little or no variation amongst the plurality of segmentation masks are considered as more certain than portions of the region of interest which show greater variation. In other words, if a particular pixel of an input image is consistently classified as belonging to a region of interest (or not belonging to the region of interest), invariant of the augmentations applied to the input image, the pixel classification may be considered more certain (less uncertain) than a pixel for which the classification is highly augmentation dependent. Therefore, at operation 210, the image processing system may assess, for each corresponding point/pixel/voxel of the plurality of segmentation masks, a degree of segmentation label variation. In some embodiments, the variation may be determined as one or more of the range, standard deviation, or variance of the vector of values (segmentation labels) across each of the plurality of segmentation masks. As variation is assessed on a pixel-by-pixel basis (or voxel-by-voxel), the determination of uncertainty at operation 210 may be described as determining the pixel-wise variation of the segmentation labels across each of the plurality of segmentation masks); determining at least one confidence value on the basis of the determined measure of dispersion (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty [determining at least one confidence value] associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence value], showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty [determining at least one confidence value] of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks); and outputting the at least one confidence value or an item of information based on the at least one confidence value (Ravishankar- Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty [confidence value] determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown. Uncertainty map 408 shows the uncertainty associated with the segmented anatomical region of interest).
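Read as an algorithm, the claim 1 mapping is classic test-time-augmentation (TTA) uncertainty estimation: augment the input N times, run the model on each copy, and measure per-pixel dispersion of the outputs. Below is a minimal sketch of that pipeline; segment is a stand-in for any trained model, and the intensity-only augmentations (cf. the cited "simulated scanner gain adjustment") are illustrative assumptions, not Ravishankar's actual implementation.

import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Illustrative intensity augmentations: a random gain plus additive noise.
    # Intensity-only changes leave pixels spatially aligned across copies.
    gain = rng.uniform(0.9, 1.1)
    return image * gain + rng.normal(0.0, 0.01, size=image.shape)

def tta_uncertainty(image: np.ndarray, segment, n: int = 10, seed: int = 0):
    # Run the model on n distinctly augmented copies of the input (cf. operation
    # 204), then measure per-pixel dispersion of the outputs (cf. operation 210).
    rng = np.random.default_rng(seed)
    masks = np.stack([segment(augment(image, rng)) for _ in range(n)])  # (n, H, W)
    return masks.mean(axis=0), masks.std(axis=0)  # mean mask, uncertainty map

Because the sketch keeps the augmentations aligned, the per-pixel standard deviation can be read directly as an uncertainty map of the kind shown in Ravishankar's Fig. 4.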
Ravishankar does not explicitly disclose, but Li discloses mutually corresponding image elements represent the same sub-region of the examination region (Li- ¶0029, at least discloses the sub-regions are arranged in the following way: dividing the region to be analyzed into a plurality of adjacent sub-regions which are mutually overlapped, wherein the overlapped or the non-overlapped regions of all the adjacent sub-regions jointly and completely cover the target region; ¶0046, at least discloses An investigation point is arranged in the region to be analyzed, and a circular or oval sub-region around the point is a cell or "cell filled with pixels". The sub-regions are mutually overlapped, and the distribution characteristic of pixel values or voxel values in each sub-region is analyzed to deduce a fixed or non-fixed threshold value. Each pixel or voxel in the region around each investigation point is marked according to the threshold value, to obtain the tissue region of interest, and the border thereof is the border of the tissue of interest). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ravishankar to incorporate the teachings of Li, and apply the mutually overlapped sub-regions to Ravishankar's teachings for determining a measure of dispersion of the color values of corresponding image elements of the generated synthetic images, wherein mutually corresponding image elements represent the same sub-region of the examination region. Doing so would improve and enhance the accuracy of physiological parameters which are related to the volume of a cardiac chamber, ejection fraction, myocardial volume and mass, and the like, and further assist in timely achieving a correct diagnosis in the clinical treatment process.

Regarding claim 2, Ravishankar in view of Li discloses the method according to claim 1, and further discloses comprising: receiving a first image (I1) and a second image (I2) of the examination region of the examination object (Ravishankar- Fig. 4 and ¶0023-0030, at least disclose the segmentation mask 404, 406 and 408); generating a first modification (M11) of the first image (I1), a second modification (M12) of the first image (I1), a first modification (M21) of the second image (I2) and a second modification (M22) of the second image (I2) (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0024, at least discloses At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images [generating a plurality of different modifications] […] at operation 204, the image processing system determines a set of augmentations, applies the augmentations to the image received at operation 202 to generate an augmented image, and repeats this process N times to produce N distinctly augmented versions of the image received at operation 202, where N is a positive integer greater than 2 […] the number of augmented images, N, produced at operation 202 is between 4 and 50. As described in more detail with reference to FIG. 3, augmentations applied at operation 204 may include one or more of a translation, rotation, zoom adjustment, simulated scanner depth adjustment, a simulated scanner gain adjustment, or augmentations specific [modifications] to the modality of the image received at operation 202); generating a first synthetic image (S1) on the basis of the first modification (M11) of the first image (I1) and the first modification (M21) of the second image (I2) by means of the generative model (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model [means of a generative model (GM)], and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [a plurality of synthetic images] corresponding to the plurality of augmented images); generating a second synthetic image (S2) on the basis of the second modification (M12) of the first image (I1) and the second modification (M22) of the second image (I2) by means of the generative model (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model [means of a generative model (GM)], and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [a plurality of synthetic images] corresponding to the plurality of augmented images); determining a respective measure of dispersion of the color values of corresponding image elements of the generated synthetic images for each tuple of corresponding image elements (Ravishankar- ¶0012, at least discloses The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map, showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks; ¶0028, at least discloses Portions of the region of interest which show little or no variation amongst the plurality of segmentation masks are considered as more certain than portions of the region of interest which show greater variation. In other words, if a particular pixel of an input image is consistently classified as belonging to a region of interest (or not belonging to the region of interest), invariant of the augmentations applied to the input image, the pixel classification may be considered more certain (less uncertain) than a pixel for which the classification is highly augmentation dependent. Therefore, at operation 210, the image processing system may assess, for each corresponding point/pixel/voxel of the plurality of segmentation masks, a degree of segmentation label variation. In some embodiments, the variation may be determined as one or more of the range, standard deviation, or variance of the vector of values (segmentation labels) across each of the plurality of segmentation masks.
As variation is assessed on a pixel-by-pixel basis (or voxel-by-voxel), the determination of uncertainty at operation 210 may be described as determining the pixel-wise variation of the segmentation labels across each of the plurality of segmentation masks); determining a respective confidence value for each tuple of corresponding image elements of the generated synthetic images on the basis of the respective measure of dispersion of the tuple (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty [determining a respective confidence value] associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence value], showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty [determining a respective confidence value] of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks); and outputting the confidence values or an item of information based on the confidence values (Ravishankar- Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty [confidence value] determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown. Uncertainty map 408 shows the uncertainty associated with the segmented anatomical region of interest).

Regarding claim 4, Ravishankar in view of Li discloses the method according to claim 1, and discloses the method further comprising: generating a synthetic image (SI) of the examination region of the examination object on the basis of the at least one received image (I1, I2) (Ravishankar- Fig. 1 and ¶0012, at least disclose a medical imaging workflow includes measurement of an anatomical region of interest [image of an examination region of an examination object] based on positions of one or more calipers, wherein the positions of the calipers may be determined based on a segmentation mask [image elements] of the anatomical region of interest […] an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest [examination region of an examination object] […] Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [generating a synthetic image] corresponding to the plurality of augmented images).

Regarding claim 5, Ravishankar in view of Li discloses the method according to claim 1, and further discloses wherein the measure of dispersion is, or is derived from, at least one of a range, a standard deviation, a variance, a sum of squared deviations, a coefficient of variation, a mean absolute deviation, a quantile range, an interquantile range, a mean absolute deviation from a median, a median absolute deviation and a geometric standard deviation of the color values of corresponding image elements (Ravishankar- ¶0028, at least discloses the variation may be determined as one or more of the range, standard deviation, or variance of the vector of values (segmentation labels) across each of the plurality of segmentation masks).

Regarding claim 6, Ravishankar in view of Li discloses the method according to claim 1, and further discloses wherein each modification (M11, M12, M13, M21, M22, M23) is generated by image augmentation of the at least one received image (I1, I2) (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0024, at least discloses At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images […] at operation 204, the image processing system determines a set of augmentations, applies the augmentations to the image received at operation 202 to generate an augmented image […] augmentations applied at operation 204 may include one or more of a translation, rotation, zoom adjustment, simulated scanner depth adjustment, a simulated scanner gain adjustment, or augmentations specific [modification] to the modality of the image received at operation 202).
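The dispersion statistics recited in claim 5 are all standard reductions over the stack of synthetic images. A rough sketch follows; the function name and the particular selection of measures are illustrative assumptions, not the application's implementation.

import numpy as np

def dispersion(stack: np.ndarray, measure: str = "std") -> np.ndarray:
    # stack has shape (N, H, W): N synthetic images, compared element-wise.
    if measure == "range":
        return stack.max(axis=0) - stack.min(axis=0)
    if measure == "std":
        return stack.std(axis=0)
    if measure == "variance":
        return stack.var(axis=0)
    if measure == "iqr":  # interquantile range, here the 25th-75th percentiles
        q75, q25 = np.percentile(stack, [75, 25], axis=0)
        return q75 - q25
    if measure == "mad":  # median absolute deviation from the median
        return np.median(np.abs(stack - np.median(stack, axis=0)), axis=0)
    raise ValueError(f"unknown measure: {measure}")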
Regarding claim 7, Ravishankar in view of Li discloses the method according to claim 6, wherein the image augmentation comprises at least one of reflection, rotation, translation, scaling, homothety, shearing, distortion, addition of noise, variation of color values, setting of color values to zero or some other value or to a random value within defined limits, row-by-row shifting of image elements by a defined absolute value or by a random absolute value within defined limits, column-by-column shifting of image elements by a defined absolute value or by a random absolute value within defined limits, reduction or increase of color values by a defined absolute value or by a random absolute value within defined limits, changing of the sharpness or contrast of an image, and partial blending of two or more images of the at least one received image (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0024, at least discloses At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images […] at operation 204, the image processing system determines a set of augmentations, applies the augmentations to the image received at operation 202 to generate an augmented image […] augmentations applied at operation 204 may include one or more of a translation, rotation, zoom adjustment, simulated scanner depth adjustment, a simulated scanner gain adjustment, or augmentations specific to the modality of the image received at operation 202).

Regarding claim 8, Ravishankar in view of Li discloses the method according to claim 1, and discloses the method further comprising: generating a combined synthetic image (S) on the basis of the synthetic images (S1, S2, S3) (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [synthetic images] corresponding to the plurality of augmented images), wherein the generation of the combined synthetic image (S) comprises: for each tuple of corresponding image elements of the synthetic images (S1, S2, S3) (Ravishankar- Fig. 1 and ¶0012, at least disclose a medical imaging workflow includes measurement of an anatomical region of interest based on positions of one or more calipers, wherein the positions of the calipers may be determined based on a segmentation mask [image elements] of the anatomical region of interest): determining an average color value by averaging of the color values of the corresponding image elements and setting the average color value as the color value of the corresponding image element of the combined synthetic image (S) (Ravishankar- ¶0040, at least discloses determining a mean segmentation mask based on the plurality of segmentation masks by taking pixel-wise averages across spatially corresponding pixels from the plurality of segmentation masks).
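Claim 8's combination step tracks Ravishankar's cited ¶0040 almost verbatim: the combined image is the pixel-wise mean across spatially corresponding elements. A one-function sketch, with the name combine assumed for illustration:

import numpy as np

def combine(synthetic_images: list[np.ndarray]) -> np.ndarray:
    # Pixel-wise average across corresponding image elements; each element of
    # the combined image S gets the mean color value of its tuple.
    return np.stack(synthetic_images).mean(axis=0)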
Regarding claim 9, Ravishankar in view of Li discloses the method according to claim 8, and discloses the method further comprising: outputting the combined synthetic image (S) and transmitting the combined synthetic image (S) to a separate computer system; or outputting a synthetic image (SI) generated on the basis of the at least one received image (I1, I2) and transmitting the synthetic image (SI) generated on the basis of the at least one received image (I1, I2) to a separate computer system (Ravishankar- ¶0012-0013, at least discloses producing a plurality of segmentation masks [synthetic images] corresponding to the plurality of augmented images […] at least a portion of image processing system 100 is disposed at a separate device (e.g., a workstation) which can receive images/maps from a medical imaging system or from a storage device which stores the images/data generated by the medical imaging system; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown).

Regarding claim 10, Ravishankar in view of Li discloses the method according to claim 9, and discloses the method further comprising: generating a confidence representation (SR), wherein the confidence representation (SR) comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region, wherein each image element has a color value, wherein the color value correlates with the respective confidence value of each tuple of corresponding image elements of the synthetic images (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence representation], showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks; Fig. 4 and ¶0030-0031, at least disclose uncertainty map 408 indicates regions of greater uncertainty/variability of segmentation labels for the anatomical region of interest in lighter colored regions, with darker colored regions indicating less segmentation uncertainty/variability. In particular, uncertainty map 408 shows the boundary 414 of the anatomical region of interest as a ring of lighter color, indicating greater segmentation label variability […] lighter regions correspond to sub-regions [image elements] of input image 402 classified as belonging to the region of interest (e.g., sub-regions with segmentation labels indicating the sub-region is a region of interest)); and outputting the confidence representation (SR), in a superimposition with the combined synthetic image (S) or with the synthetic image (SI) generated on the basis of the at least one received image (I1, I2), and transmitting the confidence representation (SR) to a separate computer system (Ravishankar- Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty [confidence] determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map [outputting the confidence representation] produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown. Uncertainty map 408 shows the uncertainty [outputting the confidence representation] associated with the segmented anatomical region of interest).

Regarding claim 11, Ravishankar in view of Li discloses the method according to claim 8, and discloses the method further comprising: determining a confidence value for one or more sub-regions of the combined synthetic image (S) or for the entire combined synthetic image (S) (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty [determining a confidence value] associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence value], showing the spatial distribution of segmentation label variation in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty [determining at least one confidence value] of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks); and outputting the confidence value or an item of information based on the confidence value (Ravishankar- Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty [confidence value] determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown. Uncertainty map 408 shows the uncertainty associated with the segmented anatomical region of interest).

Regarding claim 12, Ravishankar in view of Li discloses the method according to claim 1, and further discloses wherein the examination object is a human or an animal (Ravishankar- ¶0012, at least discloses an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest […] the uncertainty maps produced herein may be used to automatically identify a least certain caliper placement, wherein the least certain caliper may be visually indicated and displayed to a user, along with a prompt for adjustment or confirmation of the caliper placement, as illustrated by the graphical user interface 500 shown in FIG. 5; Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Li- ¶0002, at least discloses The anatomical images mainly describe human body's morphological information including X-ray transmission imaging, CT, MRI, US and so on; ¶0044, at least disclose the digitized image is formed by diffusing a contrast agent in chamber and gaps of a human body tissue). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ravishankar to incorporate the teachings of Li, and apply the human body's morphological information to Ravishankar's teachings in order that the examination object is a human or an animal. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.

Regarding claim 13, Ravishankar in view of Li discloses the method according to claim 9, wherein the at least one received image (I1, I2) is at least one medical image (Ravishankar- ¶0012, at least discloses an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest), and each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic medical image (Ravishankar- ¶0012-0013, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images. Non-limiting examples of augmentations which may be applied alone, or in combination, to medical images are shown in FIG. 3 […] the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks corresponding to the plurality of augmented images).

Regarding claim 16, Ravishankar discloses a computer system (Ravishankar- Fig. 1 and ¶0012, at least disclose an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest) comprising: a receiving unit (Ravishankar- Fig. 1 and ¶0012, at least disclose an image processing system, such as image processing system 100 shown in FIG. 1 may receive a medical image including a region of interest); a control and calculation unit (Ravishankar- Fig. 1 and ¶0014, at least disclose Image processing system 100 includes a processor 104 configured to execute machine readable instructions stored in non-transitory memory 106. Processor 104 may be single core or multi-core, and the programs executed thereon may be configured for parallel or distributed processing); and an output unit (Ravishankar- Fig. 1 and ¶0013, at least disclose Image processing system 100 may comprise a user input device 132, and display device 134; ¶0020, at least discloses Display device 134 may include one or more display devices utilizing virtually any type of technology. In some embodiments, display device 134 may comprise a computer monitor, and may display augmented and or un-augmented ultrasound images); wherein the control and calculation unit (as discussed above) is configured to: cause the receiving unit to receive at least one image (I1, I2) of an examination region of an examination object, wherein the at least one image (I1, I2) comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region (see Claim 1 rejection for detailed analysis); generate a plurality of different modifications (M11, M12, M13, M21, M22, M23) of the received image (I1, I2) (see Claim 1 rejection for detailed analysis); generate a plurality of synthetic images (S1, S2, S3) of the examination region of the examination object on the basis of the modifications (M11, M12, M13, M21, M22, M23) by means of a generative model (GM), wherein each synthetic image (S1, S2, S3) comprises a plurality of image elements, wherein each image element of the plurality of image elements represents a sub-region of the examination region, wherein each image element is assigned at least one color value (see Claim 1 rejection for detailed analysis); determine at least one confidence value on the basis of the color values of corresponding image elements of the modifications (M11, M12, M13, M21, M22, M23) (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty [determining at least one confidence value] associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence value], showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty [determining at least one confidence value] of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks); and cause the output unit (as discussed above) to output the at least one confidence value or an item of information based on the at least one confidence value (see Claim 1 rejection for detailed analysis). Ravishankar does not explicitly disclose mutually corresponding image elements of the modifications (M11, M12, M13, M21, M22, M23), wherein the mutually corresponding image elements represent the same sub-region of the examination region. However, Li discloses mutually corresponding image elements of the modifications, wherein the mutually corresponding image elements represent the same sub-region of the examination region (Li- ¶0029, at least discloses the sub-regions are arranged in the following way: dividing the region to be analyzed into a plurality of adjacent sub-regions which are mutually overlapped, wherein the overlapped or the non-overlapped regions of all the adjacent sub-regions jointly and completely cover the target region; ¶0033, at least discloses the sub-regions are set as spheres, and the average voxel gray level or voxel gradient in each sphere is compared with the threshold parameter and marked [image elements of the modifications]; ¶0046, at least discloses An investigation point is arranged in the region to be analyzed, and a circular or oval sub-region around the point is a cell or "cell filled with pixels". The sub-regions are mutually overlapped, and the distribution characteristic of pixel values or voxel values in each sub-region is analyzed to deduce a fixed or non-fixed threshold value [mutually corresponding image elements of the modifications]. Each pixel or voxel in the region around each investigation point is marked according to the threshold value, to obtain the tissue region of interest, and the border thereof is the border of the tissue of interest). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ravishankar to incorporate the teachings of Li, and apply the mutually overlapped sub-regions to Ravishankar's teachings to determine at least one confidence value on the basis of the color values of mutually corresponding image elements of the modifications, wherein the mutually corresponding image elements represent the same sub-region of the examination region. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.
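To picture the claim 10 confidence representation (a color-valued map superimposed on the combined or generated synthetic image), one can alpha-blend an uncertainty channel over a grayscale base. This hypothetical rendering is only an illustration; the red colormap, the blending weight, and the assumption that confidence is scaled to [0, 1] come from the sketch, not from the claims or the cited art.

import numpy as np

def confidence_overlay(combined: np.ndarray, confidence: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    # Normalize the combined image to [0, 1] grayscale, then blend a red channel
    # that is strongest where confidence is lowest (most uncertain regions).
    # `confidence` is assumed already scaled to [0, 1].
    gray = (combined - combined.min()) / (np.ptp(combined) + 1e-8)
    rgb = np.stack([gray, gray, gray], axis=-1)
    rgb[..., 0] = (1 - alpha) * rgb[..., 0] + alpha * (1.0 - confidence)
    return rgb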
Regarding claim 17, Ravishankar in view of Li discloses a computer-readable storage medium comprising a computer program which, when loaded into a working memory of a computer system (Ravishankar- Fig. 1 and ¶0014, at least disclose Image processing system 100 includes a processor 104 configured to execute machine readable instructions stored in non-transitory memory 106 […] Non-transitory memory 106 may store machine learning module 108, augmentation module 110, and image data 112. Machine learning module 108 may include one or more deep learning networks, comprising a plurality of weights and biases, activation functions, loss functions, gradient descent algorithms, and instructions for implementing the one or more machine learning models to process an input image. For example, machine learning module 108 may store instructions for pre-processing an image, and mapping the pre-processed image to a segmentation mask of a region of interest (ROI)), causes the computer system to execute the method of claim 1.

6. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ravishankar in view of Li, further in view of Dippel et al. ("Dippel") (WO-2022106302-A1; cited paragraphs refer to the English version of Dippel, US-2024/0005650-A1).

Regarding claim 3, Ravishankar in view of Li discloses the method according to claim 1, and further discloses the method comprising: generating a respective synthetic image (S1, S2, S3) on the basis of a respective modification (M11, M12, M13, M21, M22, M23) of each of the images (I1, I2) (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images [the modifications] may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks [synthetic image] corresponding to the plurality of augmented images; Dippel- Fig. 1 and ¶0091, at least disclose The second set of augmented images consists of images (2-1), (2-2), (2-3), and (2-4). The second set of augmented images is generated by applying one or more modification techniques to each of the spatially augmented images (1-1), (1-2), (1-3), and (1-4). Image (2-1) is generated from image (1-1), image (2-2) is generated from image (1-2), image (2-3) is generated from image (1-3), and image (2-4) is generated from image (1-4).); determining a respective measure of dispersion of the color values of corresponding image elements for each tuple of corresponding image elements of the generated synthetic images (Ravishankar- ¶0012, at least discloses The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map, showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks; ¶0028, at least discloses Portions of the region of interest which show little or no variation amongst the plurality of segmentation masks are considered as more certain than portions of the region of interest which show greater variation. In other words, if a particular pixel of an input image is consistently classified as belonging to a region of interest (or not belonging to the region of interest), invariant of the augmentations applied to the input image, the pixel classification may be considered more certain (less uncertain) than a pixel for which the classification is highly augmentation dependent. Therefore, at operation 210, the image processing system may assess, for each corresponding point/pixel/voxel of the plurality of segmentation masks, a degree of segmentation label variation. In some embodiments, the variation may be determined as one or more of the range, standard deviation, or variance of the vector of values (segmentation labels) across each of the plurality of segmentation masks. As variation is assessed on a pixel-by-pixel basis (or voxel-by-voxel), the determination of uncertainty at operation 210 may be described as determining the pixel-wise variation of the segmentation labels across each of the plurality of segmentation masks); determining a respective confidence value for each tuple of corresponding image elements of the generated synthetic images on the basis of the measure of dispersion of the tuple (Ravishankar- ¶0012, at least discloses quantifying uncertainty of segmentation masks produced by machine learning models, and using said uncertainty to streamline medical imaging workflows […] The image processing system may execute a method, such as method 200 shown in FIG. 2 to segment the region of interest, and quantify the uncertainty [determining a respective confidence value] associated with location and extent of the segmented region of interest […] The variation amongst the plurality of segmentation masks may be used to produce an uncertainty map [confidence value], showing the spatial distribution of segmentation label variation [measure of dispersion] in the plurality of segmentation masks. The segmentation label variation may correlate to an aleatoric uncertainty of the un-augmented input image […] an uncertainty map is shown by uncertainty map 408 in FIG. 4; Fig. 2 and ¶0022, at least disclose a flowchart of a method 200 for determining the uncertainty [determining a respective confidence value] of a segmentation mask produced by a deep learning model; Fig. 2 and ¶0028, at least disclose At operation 210, the image processing system determines the uncertainty associated with the location and extent of the segmented region of interest in the plurality of segmentation masks); and outputting the confidence values or an item of information based on the confidence values (Ravishankar- Fig. 4 and ¶0023, at least disclose Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty [confidence value] determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray; Figs. 2, 4, 5 and ¶0030, at least disclose At operation 214, the image processing system displays the uncertainty map produced at operation 210 via a display device. Turning briefly to FIG. 5, one example of a graphical user interface 500 which may be used to display the uncertainty map produced at operation 214, is shown. By displaying the uncertainty map, a user may be better informed of the consistency/invariability of the segmentation mask determined and, may be enabled to make a more informed decision regarding whether to accept or reject information derived therefrom. Turning briefly to FIG. 4, another example of an uncertainty map 408 is shown. Uncertainty map 408 shows the uncertainty associated with the segmented anatomical region of interest).
The prior art does not explicitly disclose, but Dippel discloses generating a respective synthetic image (S1, S2, S3) on the basis of a respective modification (M11, M12, M13, M21, M22, M23) of each of the m images (I1, I2) (Dippel- Fig. 1 and ¶0091, at least disclose The second set of augmented images consists of images (2-1), (2-2), (2-3), and (2-4). The second set of augmented images is generated by applying one or more modification techniques to each of the spatially augmented images (1-1), (1-2), (1-3), and (1-4). Image (2-1) is generated from image (1-1), image (2-2) is generated from image (1-2), image (2-3) is generated from image (1-3), and image (2-4) is generated from image (1-4)); receiving a number m of images (I1, I2), wherein m is a positive integer (Dippel- Fig. 1 and ¶0090, at least disclose a plurality of images X, in this example two images, image (0-1) and image (0-2). In a first step (110) a first set of augmented images is generated from the images (0-1) and (0-2)); generating a number p of modifications (M11, M12, M13, M21, M22, M23) of each of the m images (I1, I2), where p is an integer greater than one (Dippel- Fig. 1 and ¶0090, at least disclose The first set of augmented images consists of images (1-1), (1-2), (1-3), and (1-4). Images (1-1) and (1-2) are modified versions of image (0-1), whereas images (1-3) and (1-4) are modified versions of image (0-2)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ravishankar/Li to incorporate the teachings of Dippel, and apply the number m of images and the number p of modifications to Ravishankar/Li's teachings for receiving a number m of images (I1, I2), wherein m is a positive integer; generating a number p of modifications (M11, M12, M13, M21, M22, M23) of each of the m images (I1, I2), where p is an integer greater than one; and generating a respective synthetic image (S1, S2, S3) on the basis of a respective modification (M11, M12, M13, M21, M22, M23) of each of the m images (I1, I2). Doing so would provide new mechanisms for reducing the burden of annotating medical images.

7. Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ravishankar in view of Li, further in view of Zaharchuk et al. ("Zaharchuk") (US-2019/0108634-A1).

Regarding claim 14, Ravishankar in view of Li discloses the method according to claim 9, and further discloses wherein the at least one received image (I1, I2) comprises a first radiological image and a second radiological image (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0023-0024, at least disclose an image which may be received at operation 202 is shown by medical image 402 in FIG. 4. Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray […] At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images; ¶0037, at least discloses by automatically highlighting caliper placements with greater than a threshold uncertainty, increased efficiency in medical image analysis may be enabled, as a radiologist or technician may be enabled to easily select and adjust calipers positions of greater than a threshold uncertainty), and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic radiological image (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks corresponding to the plurality of augmented images). The prior art does not explicitly disclose, but Zaharchuk discloses wherein the first radiological image represents the examination region of the examination object without a contrast agent or after administration of a first amount of the contrast agent and the second radiological image represents the examination region of the examination object after administration of a second amount of the contrast agent (Zaharchuk- ¶0007, at least discloses the techniques of the present invention are able to predict a synthesized full-dose contrast agent image from a low-dose contrast agent image [second amount of the contrast agent] and a pre-dose image [first amount of the contrast agent]. The low dose may be any fraction of the full dose, but is preferably 1/10 or less of the full dose; ¶0010, at least discloses the invention provides a method for training a diagnostic imaging device to perform medical diagnostic imaging with reduced contrast agent dose. The method includes a) performing diagnostic imaging of a set of subjects to produce a set of images comprising, for each subject of the set of subjects, i) a full-contrast image acquired with a full contrast agent dose administered to the subject, ii) a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than the full contrast agent dose, and iii) a zero-contrast image acquired with no contrast agent dose administered to the subject; Fig. 2 and ¶0025, at least disclose the workflow of the protocol and procedure for acquisition of images used for training. After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired.
An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired), and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) represents the examination region of the examination object after administration of a third amount of the contrast agent, wherein the second amount is different from the first amount and the third amount is different from the first amount and the second amount (Zaharchuk- ¶0007, at least discloses the techniques of the present invention are able to predict a synthesized full-dose contrast agent image [third amount of the contrast agent] from a low-dose contrast agent image [second amount of the contrast agent] and a pre-dose image [first amount of the contrast agent]. The low dose may be any fraction of the full dose, but is preferably 1/10 or less of the full dose; ¶0010-0011, at least discloses the invention provides a method for training a diagnostic imaging device to perform medical diagnostic imaging with reduced contrast agent dose. The method includes a) performing diagnostic imaging of a set of subjects to produce a set of images comprising, for each subject of the set of subjects, i) a full-contrast image acquired with a full contrast agent dose administered to the subject, ii) a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than the full contrast agent dose, and iii) a zero-contrast image acquired with no contrast agent dose administered to the subject […] the invention provides a method for medical diagnostic imaging with reduced contrast agent dose. The method includes a) performing diagnostic imaging of a subject to produce a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than a full contrast agent dose, and a zero-contrast image acquired with no contrast agent dose administered to the subject; b) pre-processing the low-contrast image and zero-contrast image to co-register and normalize the images to adjust for acquisition and scaling differences; and c) applying the low-contrast image and the zero-contrast image as input to a deep learning network (DLN) to generate as output of the DLN a synthesized full-dose contrast agent image of the subject […] The low contrast agent dose is preferably less than 10% of a full contrast agent dose. The diagnostic imaging may be angiography, fluoroscopy, computed tomography (CT), ultrasound, or magnetic resonance imaging; Fig. 2 and ¶0025, at least disclose the workflow of the protocol and procedure for acquisition of images used for training. After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired. An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired.). 
It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Ravishankar/Li to incorporate the teachings of Zaharchuk, and apply the contrast agent doses into Ravishankar/Li’s teachings in order wherein the at least one received image (Il, 12) comprises a first radiological image and a second radiological image, wherein the first radiological image represents the examination region of the examination object without a contrast agent or after administration of a first amount of the contrast agent and the second radiological image represents the examination region of the examination object after administration of a second amount of the contrast agent, and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic radiological image, wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) represents the examination region of the examination object after administration of a third amount of the contrast agent, wherein the second amount is different from the first amount and the third amount is different from the first amount and the second amount. Doing so would benefit to be able to reduce the dose of contrast agents generally in diagnostic imaging techniques, without sacrificing the image enhancement benefits that the contrast agents provide. Regarding claim 15, Ravishankar in view of Li, discloses the method according to claim 9, and further discloses wherein the at least one received image (I1, I2) comprises a first radiological image and a second radiological image (Ravishankar- ¶0012, at least discloses method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0023-0024, at least disclose an image which may be received at operation 202 is shown by medical image 402 in FIG. 4 . Image 402 is an ultrasound image of kidney, however it will be appreciated that the current disclosure may be applied to uncertainty determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MM, PET, CT, ultrasound, and x-ray […] At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images; ¶0037, at least discloses by automatically highlighting caliper placements with greater than a threshold uncertainty, increased efficiency in medical image analysis may be enabled, as a radiologist or technician may be enabled to easily select and adjust calipers positions of greater than a threshold uncertainty), and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic radiological image (Ravishankar- ¶0012, at least discloses Each of the plurality of augmented images may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks corresponding to the plurality of augmented images),. 
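As a purely illustrative aside, the zero-dose/low-dose/synthesized-full-dose arrangement the examiner reads out of Zaharchuk can be sketched as a small network in Python (PyTorch). The tiny convolutional stack, the residual connection on the low-dose input, and all names below are assumptions for illustration, not Zaharchuk's actual architecture.

import torch
import torch.nn as nn

class ContrastSynthesisNet(nn.Module):
    """Illustrative stand-in: takes a zero-contrast image (first amount:
    none) and a low-contrast image (second amount, e.g. 10% dose) and
    outputs a synthesized full-contrast image (third amount)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, zero_dose, low_dose):
        x = torch.cat([zero_dose, low_dose], dim=1)  # (N, 2, H, W)
        # Predict the missing enhancement on top of the low-dose image;
        # Zaharchuk mentions residual learning in a convolutional
        # network, though this exact form is an assumption.
        return low_dose + self.body(x)

net = ContrastSynthesisNet()
zero = torch.randn(1, 1, 128, 128)  # pre-contrast (zero-dose) image
low = torch.randn(1, 1, 128, 128)   # low-dose (e.g. 10%) image
synthetic_full = net(zero, low)     # synthesized full-dose image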
Regarding claim 15, Ravishankar in view of Li discloses the method according to claim 9, and further discloses wherein the at least one received image (I1, I2) comprises a first radiological image and a second radiological image (Ravishankar- ¶0012, at least discloses: method 200 includes augmenting the medical image by applying one, or a combination of, augmentations to produce a plurality of augmented images; ¶0023-0024, at least disclose: an image which may be received at operation 202 is shown by medical image 402 in FIG. 4. Image 402 is an ultrasound image of a kidney; however, it will be appreciated that the current disclosure may be applied to uncertainty determination in medical or non-medical images, and in medical images from a range of imaging modalities, including but not limited to MRI, PET, CT, ultrasound, and x-ray […] At operation 204, the image processing system applies a plurality of augmentations to the image to produce a plurality of augmented images; ¶0037, at least discloses: by automatically highlighting caliper placements with greater than a threshold uncertainty, increased efficiency in medical image analysis may be enabled, as a radiologist or technician may be enabled to easily select and adjust caliper positions of greater than a threshold uncertainty), and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic radiological image (Ravishankar- ¶0012, at least discloses: Each of the plurality of augmented images may be fed to a trained machine learning model, and the machine learning model may map each augmented image to a corresponding segmentation mask, thereby producing a plurality of segmentation masks corresponding to the plurality of augmented images).

The prior art does not explicitly disclose, but Zaharchuk discloses wherein the first radiological image represents the examination region of the examination object in a first period of time before or after administration of a contrast agent and the second radiological image represents the examination region of the examination object in a second period of time after administration of the contrast agent (Zaharchuk- ¶0007, at least discloses: the techniques of the present invention are able to predict a synthesized full-dose contrast agent image from a low-dose contrast agent image [second radiological image] and a pre-dose image [first period of time]; ¶0010, at least discloses: The method includes a) performing diagnostic imaging of a set of subjects to produce a set of images comprising, for each subject of the set of subjects, i) a full-contrast image acquired with a full contrast agent dose administered to the subject, ii) a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than the full contrast agent dose, and iii) a zero-contrast image acquired with no contrast agent dose administered to the subject; b) pre-processing the set of images to co-register and normalize the set of images to adjust for acquisition and scaling differences between different scans; and c) training a deep learning network (DLN) with the pre-processed set of images by applying zero-contrast images from the set of images and low-contrast images from the set of images as input to the DLN and using a cost function to compare the output of the DLN with full-contrast images from the set of images to train parameters of the DLN using backpropagation; Fig. 1 and ¶0024, at least disclose: A deep learning network is trained using multi-contrast images 100, 102, 104 acquired from scans of a multitude of subjects with a wide range of clinical indications. The images are pre-processed to perform image co-registration 106, to produce multi-contrast images 108, and data augmentation 110 to produce normalized multi-contrast image patches 112; Fig. 2 and ¶0025, at least disclose: the workflow of the protocol and procedure for acquisition of images used for training. After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired), and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) represents the examination region of the examination object in a third period of time after administration of the contrast agent, wherein the second period of time follows the first period of time and the third period of time follows the second period of time (Zaharchuk- ¶0007, at least discloses: the techniques of the present invention are able to predict a synthesized full-dose contrast agent image from a low-dose contrast agent image [second radiological image] and a pre-dose image [first period of time]; Fig. 1 and ¶0024, at least disclose: A deep learning network is trained using multi-contrast images 100, 102, 104 acquired from scans of a multitude of subjects with a wide range of clinical indications. The images are pre-processed to perform image co-registration 106, to produce multi-contrast images 108, and data augmentation 110 to produce normalized multi-contrast image patches 112 […] Reference images 104 [synthetic image] are also processed to perform co-registration and normalization 118. These pre-processed images are then used to train a deep learning network 114, which is preferably implemented using residual learning in a convolutional neural network. The input to the deep learning network is a zero-contrast dose image 100 and low-contrast dose image 102, while the output of the network is a synthesized prediction of a full-contrast dose image 116. During training, a reference full contrast image 104 is compared with the synthesized image 116 using a loss function to train the network using error backpropagation; Fig. 2 and ¶0025, at least disclose: the workflow of the protocol and procedure for acquisition of images used for training. After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired. An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ravishankar/Li to incorporate the teachings of Zaharchuk and to apply the co-registration and normalization in Ravishankar/Li's method, such that the at least one received image (I1, I2) comprises a first radiological image and a second radiological image, wherein the first radiological image represents the examination region of the examination object in a first period of time before or after administration of a contrast agent and the second radiological image represents the examination region of the examination object in a second period of time after administration of the contrast agent, wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) is a synthetic radiological image, and wherein each synthetic image (S1, S2, S3, SI) or the combined synthetic image (S) represents the examination region of the examination object in a third period of time after administration of the contrast agent, wherein the second period of time follows the first period of time and the third period of time follows the second period of time. Doing so would make it possible to reduce the dose of contrast agents generally in diagnostic imaging techniques without sacrificing the image enhancement benefits that the contrast agents provide.
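Continuing the illustrative network sketch above, the training procedure the examiner cites (compare the synthesized output against an acquired full-dose reference and update the network by backpropagation) might look like the single training step below. The L1 loss and Adam optimizer are assumptions, and net, zero, and low are reused from the previous sketch.

import torch

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()  # loss choice is an assumption

full_reference = torch.randn(1, 1, 128, 128)  # acquired 100%-dose image
prediction = net(zero, low)                   # synthesized full-dose
loss = loss_fn(prediction, full_reference)    # compare with reference
optimizer.zero_grad()
loss.backward()                               # error backpropagation
optimizer.step()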
Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are recited in the attached PTO-892 form.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE, whose telephone number is (571) 272-5330. The examiner can normally be reached 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL LE/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Jun 04, 2024
Application Filed
Dec 26, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant · Granted Mar 17, 2026
Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 17, 2026
Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant · Granted Mar 17, 2026
Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
88%
With Interview (+22.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
