Office Action Predictor
Last updated: April 16, 2026
Application No. 18/616,369

MEDICAL IMAGE ENHANCEMENT USING AN ARTIFICIAL INTELLIGENCE MODEL WITH EDITABLE OUTPUT IMAGE APPEARANCE CONTROL

Non-Final OA (§102, §103)
Filed: Mar 26, 2024
Examiner: MARTELLO, EDWARD
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 73% (543 granted of 747 resolved; +10.7% vs TC avg), above average
Interview Lift: +19.4% for resolved cases with an interview (strong)
Avg Prosecution: 3y 0m typical timeline; 12 applications currently pending
Career History: 759 total applications across all art units
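
A quick back-of-the-envelope check of the figures above, assuming the allowance rate is simply granted divided by resolved and the interview lift is the difference between allowance rates for cases with and without an interview (the tool's exact methodology is not stated, so this interpretation, and the illustrative with/without rates below, are assumptions):

```python
# Sketch only: plausible arithmetic behind the examiner metrics shown above.
granted, resolved = 543, 747
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~72.7%, displayed as 73%

# Hypothetical illustrative rates chosen so the difference matches +19.4%.
rate_with_interview = 0.850
rate_without_interview = 0.656
print(f"Interview lift: {rate_with_interview - rate_without_interview:+.1%}")
```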

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)
Tech Center average figures are estimates. Based on career data from 747 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5 and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Litwiller et al. (U.S. Patent 11,257,191 B2, hereafter ‘191).

Claims 1, 5 and 9-11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Litwiller et al. (U.S. Patent 11,257,191 B2, hereafter ‘191). The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
Regarding claim 1, Litwiller teaches a system (‘191; Abstract), comprising: a memory that stores computer-executable components (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206); and a processor that executes the computer-executable components stored in the memory (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206 - computer-executable components stored in the memory), wherein the computer-executable components comprise: an execution component (‘191; fig. 1, element 31) that generates a transformed version of a medical image via execution of an artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) on the medical image (‘191; fig. 2, element 310, blurred input image – medical image), wherein the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) comprises: a neural network (‘191; fig. 2, element 324, Deep Neural Network) that predicts values of parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of a transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function – with additional detail provided in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning) based on processing the medical image (‘191; fig. 2, element 310, blurred input image – medical image) or a down sampled version of the medical image via the neural network (‘191; fig. 2, element 324, Deep Neural Network); and a transformation module (‘191; fig. 
1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) that generates the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function – with additional detail provided in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning) using the values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) and applies the transformation function to the medical image (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function – with additional detail provided in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning), resulting in generation of the transformed version of the medical image (‘191; fig. 2, element 320, deblurred image); and a rendering component (‘191; fig. 1, element 31) that renders (‘191; column 20, lines 36-37, At operation 712, a deblurred medical image is generated (renders) using the second output from the deep neural network) the transformed version of the medical image on an electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device) via a graphical user interface (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206). 
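
As an illustrative aside (not part of the Office Action, the application, or the Litwiller reference), the following Python sketch shows the kind of arrangement recited in claim 1: a predictor standing in for the neural network estimates parameter values from a down-sampled copy of the image, and a transformation module builds the transformation function from those values and applies it to the medical image. The gain/offset transform and all names are hypothetical.

```python
# Illustrative sketch only: a toy version of the claim-1 arrangement in which a
# "neural network" predicts parameters of a transformation function from a
# down-sampled medical image, and a transformation module builds and applies
# that function to the full-resolution image.
import numpy as np

def predict_parameters(image: np.ndarray) -> dict:
    """Stand-in for the trained network: maps a down-sampled image to
    parameter values of a simple gain/offset pixel-intensity transform."""
    small = image[::4, ::4]                      # crude down-sampling
    return {"gain": 1.0 / (small.std() + 1e-6),  # toy contrast parameter
            "offset": -small.mean()}             # toy brightness parameter

def build_transformation(params: dict):
    """Transformation module: turn predicted parameter values into a
    concrete pixel-intensity transformation function."""
    def transform(pixels: np.ndarray) -> np.ndarray:
        return params["gain"] * (pixels + params["offset"])
    return transform

def enhance(medical_image: np.ndarray) -> np.ndarray:
    """Execution component: predict parameters, build the function,
    and apply it to produce the transformed version of the image."""
    params = predict_parameters(medical_image)
    transform = build_transformation(params)
    return transform(medical_image)

if __name__ == "__main__":
    img = np.random.rand(256, 256).astype(np.float32)  # placeholder "medical image"
    out = enhance(img)
    print(out.shape, float(out.mean()), float(out.std()))
```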
Regarding claim 5, Litwiller teaches the system of claim 1 and further teaches wherein the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) comprises mapping information defining a mapping between input pixel intensities of respective pixels of the medical image and output pixel intensities for corresponding pixels of the transformed version(‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning), wherein the transformation module (‘191; fig. 1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) generates the mapping information in accordance with the values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) and predefined relationships between the parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks), and wherein the transformation module (‘191; fig. 1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) adjusts the input pixel intensities in accordance with the mapping information (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning), resulting in the generation of the transformed version of the medical image (‘191; fig. 2, element 320, deblurred image). 
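
Again as an illustrative aside (hypothetical names, not drawn from either reference), the "mapping information" of claim 5 between input and output pixel intensities can be pictured as a look-up table derived from a parameter value through a predefined relationship, here a simple gamma curve, and then used to adjust each input pixel intensity.

```python
# Illustrative sketch only: mapping information built from a parameter value via
# a predefined (gamma) relationship, then applied to adjust pixel intensities.
import numpy as np

def build_mapping(gamma: float, levels: int = 256) -> np.ndarray:
    """Mapping information: an output intensity for each possible input
    intensity, derived from the parameter value via a gamma relationship."""
    inputs = np.linspace(0.0, 1.0, levels)
    return np.power(inputs, gamma)

def apply_mapping(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Adjust each input pixel intensity according to the mapping information."""
    idx = np.clip((image * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

img = np.random.rand(128, 128)   # placeholder medical image with intensities in [0, 1]
lut = build_mapping(gamma=0.8)   # parameter value, e.g. as predicted by the network
transformed = apply_mapping(img, lut)
print(transformed.min(), transformed.max())
```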
Regarding claim 9, Litwiller teaches the system of claim 1 and further teaches wherein the computer-executable components further comprise: a training component (‘191; fig. 1, element 212, training module 212) that trains the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) based on a training dataset (‘191; column 5, lines 55-57, In some embodiments, medical image data 214 may include a plurality of training data pairs comprising pairs of blurred and sharp medical images), wherein the training dataset (‘191; column 5, lines 55-57, In some embodiments, medical image data 214 may include a plurality of training data pairs comprising pairs of blurred and sharp medical images) includes training medical images (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image – training medical images) and ground-truth transformed versions of the training medical images (‘191; column 13, lines 47-52, CNN architecture 400 may be trained by calculating a difference between a predicted deblurred medical image, and a ground truth deblurred medical image, wherein the ground truth deblurred medical image may comprise a medical image without blurring artifacts.). Regarding claim 10, Litwiller teaches the system of claim 9 and further teaches wherein the training component (‘191; fig. 1, element 212, training module 212) trains the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) using a training process (‘191; column 5, lines 31-36, In some embodiments, training module 212 includes instructions for implementing one or more gradient descent algorithms, applying one or more loss functions, and/or training routines, for use in adjusting parameters of one or more deep neural networks of deep neural network module 208 and/or acquisition parameter transforms of acquisition parameter transform module 210) that comprises, for each training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images): predicting (‘191; column 18, lines 35-47; …the deep neural network and the one or more acquisition parameter transforms may be trained in alternating phases, wherein during a first phase, parameters of the deep neural network are held fixed, while parameters of the one or more acquisition parameter transforms is adjusted based on the training data. During a second phase, parameters of the one or more acquisition parameter transforms may be held fixed while the parameters of the deep neural network are adjusted based on the training data. Alternation of training phases may continue until a threshold accuracy of prediction is met, or until the parameters of the one or more acquisition parameter transforms and the deep neural network have converged), via a neural network (‘191; fig. 
2, element 324, Deep Neural Network) of the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), via the neural network (‘191; fig. 2, element 324, Deep Neural Network), training values of the parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of the transformation function based on processing the training medical image or a down sampled version of the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images) via the neural network (‘191; fig. 2, element 324, Deep Neural Network); generating, via the transformation module (‘191; fig. 1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) on the medical image (‘191; fig. 2, element 310, Blurred Image), a tailored version of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) for the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images) using the training values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks); applying, via the transformation module (‘191; fig. 
1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), the tailored version of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) to the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images), resulting in generation of a training transformed version of the training medical image (‘191; column 13, lines 47-52, CNN architecture 400 may be trained by calculating a difference between a predicted deblurred medical image, and a ground truth deblurred medical image, wherein the ground truth deblurred medical image may comprise a medical image without blurring artifacts.); and tuning, by the system (‘191; fig. 1; element 31, Image Processing System), network parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of the neural network based on a measure of loss (‘191; column 5, lines 31-36, In some embodiments, training module 212 includes instructions for implementing one or more gradient descent algorithms, applying one or more loss functions, and/or training routines, for use in adjusting parameters of one or more deep neural networks of deep neural network module 208 and/or acquisition parameter transforms of acquisition parameter transform module 210) between the training transformed version and a corresponding ground-truth transformed version of the training medical image (‘191; column 13, lines 47-52, CNN architecture 400 may be trained by calculating a difference between a predicted deblurred medical image, and a ground truth deblurred medical image, wherein the ground truth deblurred medical image may comprise a medical image without blurring artifacts.). Regarding claim 11, Litwiller teaches the system of claim 1 and further teaches wherein the transformation function comprises a combination of two or more different transformation functions (‘191; column 5, lines 55-58, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks to receive blurred medical images and map the blurred medical image(s) to output, wherein a deblurred medical image corresponding to the blurred medical image may be produced from the output). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-4, 6-8 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Litwiller et al. (U.S. Patent 11,257,191 B2, hereafter ‘191) as applied to claims 1, 5 and 9-11 above, and in view of Liu et al. (U.S. Patent Application Publication 2019/0156526 A1, hereafter ‘526).
Regarding claim 2, Litwiller teaches the system of claim 1 and further teaches wherein the rendering component renders the updated transformed version of the medical image on the electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device) via the graphical user interface (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206) but does not teach wherein the graphical user interface comprises an editing tool that facilitates receiving user input indicating an adjustment to one or more of the values that control an appearance of the transformed version, wherein in response to reception of the user input, the transformation module updates the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function, and applies the updated version of the transformation function to the medical image, resulting in generation of an updated transformed version of the medical image. Liu, working in the same field of endeavor, however, teaches the graphical user interface comprises an editing tool (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions.) 
that facilitates receiving user input indicating an adjustment to one or more of the values that control an appearance of the transformed version (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)), wherein in response to reception of the user input, the transformation module updates the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)), and applies the updated version of the transformation function to the medical image (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)), resulting in generation of an updated transformed version of the medical image (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display) for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. 
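
Before the combination rationale that follows, one more illustrative aside (hypothetical names, not drawn from Liu or Litwiller): the interaction Liu is cited for can be pictured as a small update loop in which each user adjustment rebuilds the transformation function and re-applies it to the image, producing an updated transformed version for display.

```python
# Illustrative sketch only: the interactive update loop described for claim 2.
# A user adjustment to a parameter value causes the transformation function to
# be rebuilt and re-applied, yielding an updated transformed version.
import numpy as np

def build_transform(gamma: float):
    """Rebuild the transformation function for the current parameter value."""
    return lambda img: np.power(np.clip(img, 0.0, 1.0), gamma)

image = np.random.rand(64, 64)        # placeholder medical image in [0, 1]
gamma = 1.0                           # initial value, e.g. predicted by the model

for user_adjustment in (-0.2, +0.1):  # stand-ins for slider input events
    gamma += user_adjustment                # adjustment received via the editing tool
    transform = build_transform(gamma)      # updated version of the function
    updated_image = transform(image)        # updated transformed version
    print(f"gamma={gamma:.2f}, mean intensity={updated_image.mean():.3f}")
```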
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for providing a graphical user interface that comprises an editing tool that facilitates receiving user input indicating an adjustment to one or more of the values that control an appearance of the transformed version, wherein in response to reception of the user input, the transformation module updates the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function, and applies the updated version of the transformation function to the medical image, resulting in generation of an updated transformed version of the medical image as taught by Liu with the methods for deblurring (enhancing) digital medical images using deep neural network technology as taught by Litwiller for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. Regarding claim 3, Litwiller and Liu teach the system of claim 2 and further teach wherein the editing tool comprises interactive parameter control information (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time. If at least one texture parameter of the texture model is edited, the display effect of a corresponding texture ball 750 may also be changed accordingly, so as to facilitate the user to evaluate the effect of editing….In some embodiments, the texture ball 750 may be directly displayed on the main interface of the user interface 700. In some embodiments, the texture ball 750 may be displayed on a secondary interface of the user interface 700, and the secondary interface may be displayed in front of the user by performing an operation on at least one interface element of the user interface 700 by, for example, opening a menu item through the mouse or long pressing an image region through the touch screen, or the like), and wherein the editing tool facilitates receiving the user input in association with adjusting the interactive parameter control information via the graphical user interface (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. 
The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions). Regarding claim 4, Litwiller and Liu teach the system of claim 2 and further teach wherein the editing tool comprises an interactive graphical representation of the transformation function (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time. If at least one texture parameter of the texture model is edited, the display effect of a corresponding texture ball 750 may also be changed accordingly, so as to facilitate the user to evaluate the effect of editing….In some embodiments, the texture ball 750 may be directly displayed on the main interface of the user interface 700. In some embodiments, the texture ball 750 may be displayed on a secondary interface of the user interface 700, and the secondary interface may be displayed in front of the user by performing an operation on at least one interface element of the user interface 700 by, for example, opening a menu item through the mouse or long pressing an image region through the touch screen, or the like), and wherein the editing tool facilitates receiving the user input via the interactive graphical representation (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions). Regarding claim 6, Litwiller teaches the system of claim 5 and does not teach wherein the mapping information corresponds to a graphical look-up curve. 
Liu, working in the same field of endeavor, however, teaches wherein the mapping information corresponds to a graphical look-up curve (‘526; ¶ 0164, The user may obtain the texture model by fitting the at least one color parameter and its corresponding grayscale parameter. The fitting approach may be linear fitting, curve fitting, segmentation fitting, or the like. The algorithms for implementing fitting may be least squares, the Gaussian algorithm, the Ransac algorithm, the Levenberg-Marquardt algorithm, the trust-region-reflective algorithm, etc.) for the benefit of providing an adaptable customization capability to the transformation function. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for creating mapping information that corresponds to a graphical look-up curve as taught by Liu with the methods for deblurring (enhancing) digital medical images using deep neural network technology as taught by Litwiller for the benefit of providing an adaptable customization capability to the transformation function. Regarding claim 7, Litwiller and Liu teach the system of claim 5 and further teach wherein the values comprise per pixel values of the parameters for each pixel of the respective pixels (‘191; column 13, lines 1-23, image 402a is input and mapped to a first set of features. In some embodiments, blurred medical image 402a, which may comprise one or more layers corresponding to one or more features of the image (such as each intensity value of a multi-color image) may further comprise one or more concatenated acquisition parameter layers, produced by one or more acquisition parameter transforms. in some embodiments, acquisition parameter layers concatenated with blurred medical image 402a may indicate an expected/anticipated type, or intensity of blurring artifact at each pixel position of blurred medical image 402a. Blurred medical image 402a may comprise a two-dimensional (2D) or three-dimensional (3D) image/map of a patient anatomical region. In some embodiments, the input data from blurred medical image 402a is pre-processed (e.g., normalized) before being processed by the neural network. Output layer 456a may comprise an output layer of neurons, wherein each neuron may correspond to a pixel of a predicted deblurred medical image 456b (or residual), wherein output of each neuron may correspond to the predicted pixel intensity in specified location within the output deblurred medical image 456b.). Regarding claim 8, Litwiller teaches the system of claim 1 and does not explicitly teach wherein the neural network comprises a combination of a convolutional neural network encoder and regression layers and excludes a decoder neural network, wherein the transformation function comprises a pixel intensity transformation function, and wherein the transformed version comprises a pixel intensity transformed version of the medical image without artifacts as a result of the neural network excluding the decoder neural network. Liu, working in the same field of endeavor, however, teaches wherein the neural network comprises a combination of a convolutional neural network encoder and regression layers and excludes a decoder neural network (‘526; ¶ 0163; In some embodiments, the texture model may be generated by an approach of machine learning. The approach of machine learning may be in the form of supervised learning, unsupervised learning, semi-supervised learning or enhanced learning. 
For example, in the process of generating a texture model according to the supervised learning, a function may be learned from one or more given scan images. The function may be a possible texture model corresponding to the one or more scan image. The machine learning algorithm may include an artificial neural network, a decision tree, Gaussian process regression, a linear discriminant analysis, a nearest neighbor method, a radial basis function kernel, a support vector machine, etc.), wherein the transformation function comprises a pixel intensity transformation function (‘526; ¶ 0252; It is noted that, the present application is described by way of example with reference to an adjustment of color. However, it is understood that the principle of the present application may be applied to adjust other properties or parameters of an image/pixel/voxel, such as grayscale, brightness, contrast, saturation, hue, transparency, refractive index, reflectivity, shininess, ambient light, diffuse light, specular effect, or the like, or a combination thereof. A texture model or a set of texture models may be applied for generating the corresponding parameter(s) (or be referred to as output parameter(s)). The obtained output parameters may then be used to generate the output image) and wherein the transformed version comprises a pixel intensity transformed version of the medical image without artifacts as a result of the neural network excluding the decoder neural network (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)) for the benefit providing a medical image without artifacts as a result of the neural network excluding the decoder neural network. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for providing the neural network comprises a combination of a convolutional neural network encoder and regression layers and excludes a decoder neural network, wherein the transformation function comprises a pixel intensity transformation function, and wherein the transformed version comprises a pixel intensity transformed version of the medical image without artifacts as a result of the neural network excluding the decoder neural network as taught by Liu with the methods for deblurring (enhancing) digital medical images using deep neural network technology as taught by Litwiller for the benefit of providing a medical image without artifacts as a result of the neural network excluding the decoder neural network. Regarding claim 12, Litwiller teaches a method (‘191; Abstract), comprising: generating, by a system (‘191; fig. 1; element 31, Image Processing System) operatively coupled to a processor (‘191; fig. 1; element 204, Processor), a transformed version of a medical image (‘191; fig. 
2, element 320, deblurred image) via execution of an artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) on the medical image (‘191; fig. 2, element 310, Blurred Image), wherein the artificial intelligence model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) comprises a transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function – with additional detail provided in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning); and rendering (‘191; column 20, lines 36-37, At operation 712, a deblurred medical image is generated (rendered) using the second output from the deep neural network), by the system (‘191; fig. 1; element 31, Image Processing System), the transformed version of the medical image on an electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device) via a graphical user interface (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206), and does not teach wherein the artificial intelligence transformation model comprises an editable output image appearance control functionality that enables a user to control and edit transformation operations applied to the medical image via the artificial intelligence model as controlled by the transformation function in association with viewing results of the transformation operations in real-time, the results comprising one or more updated versions of the transformed version. Liu, working in the same field of endeavor, however, teaches wherein the artificial intelligence transformation model (‘526; ¶ 0063, The image processing system 120 may use one or more algorithms to process the data or images. For example, the one or more algorithms may include Fourier transform, a fitting algorithm, a filtered backprojection, an iterative algorithm, histogram expansion calculation, image data function optimization, a level set algorithm, an image segmentation algorithm, a neural network algorithm, or the like, or a combination thereof.) 
comprises an editable output image appearance control functionality (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.) that enables a user to control and edit transformation operations applied to the medical image (‘526; ¶ 0203; The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions.) via the artificial intelligence model (‘526; ¶ 0063, The image processing system 120 may use one or more algorithms to process the data or images. For example, the one or more algorithms may include Fourier transform, a fitting algorithm, a filtered backprojection, an iterative algorithm, histogram expansion calculation, image data function optimization, a level set algorithm, an image segmentation algorithm, a neural network algorithm, or the like, or a combination thereof.) as controlled by the transformation function (‘526; ¶ 0063, The image processing system 120 may use one or more algorithms to process the data or images. For example, the one or more algorithms may include Fourier transform, a fitting algorithm, a filtered backprojection, an iterative algorithm, histogram expansion calculation, image data function optimization, a level set algorithm, an image segmentation algorithm, a neural network algorithm, or the like, or a combination thereof.) 
comprises an editable output image appearance control functionality (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.) in association with viewing results of the transformation operations in real-time (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time. If at least one texture parameter of the texture model is edited, the display effect of a corresponding texture ball 750 may also be changed accordingly, so as to facilitate the user to evaluate the effect of editing….In some embodiments, the texture ball 750 may be directly displayed on the main interface of the user interface 700. In some embodiments, the texture ball 750 may be displayed on a secondary interface of the user interface 700, and the secondary interface may be displayed in front of the user by performing an operation on at least one interface element of the user interface 700 by, for example, opening a menu item through the mouse or long pressing an image region through the touch screen, or the like), the results comprising one or more updated versions of the transformed version (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time.) for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for controlling the artificial intelligence transformation model via an editable output image appearance control functionality that enables a user to control and edit transformation operations applied to the medical image via the artificial intelligence model as controlled by the transformation function in association with viewing results of the transformation operations in real-time, the results comprising one or more updated versions of the transformed version as taught by Liu with the methods for deblurring (enhancing) digital medical images using deep neural network technology as taught by Litwiller for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. Regarding claim 13, Litwiller and Liu teach the method of claim 12 and further teach the method as further comprising: providing, by the system (‘191; fig. 1; element 31, Image Processing System) via the graphical user interface (‘191; column 20, lines 54-55), an editing tool (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. 
The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions.) that facilitates receiving user input indicating an adjustment to one or more values of one or more parameters of the transformation function that control an appearance of the transformed version (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)); updating, by the system (‘191; fig. 1; element 31, Image Processing System) in response to reception of the user input, the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)); generating, by the system (‘191; fig. 1; element 31, Image Processing System), an updated transformed version of the medical image via application of the updated version of the transformation function to the medical image (‘526; ¶ 0104, The processing processes may include operations of image selection, image segmentation, image recognition, texture model association, display effect adjustment, texture parameters editing or replacement, or the like, or a combination thereof. The image may be processed to generate an output image. 
The output image may be output through the input/output module 310, or stored in a storage module (not shown in FIG. 3) of the image processing system 120. The output image may be sent to the user interface 160 or the human interface device 130 for display.)); and rendering, by the system (‘191; fig. 1; element 31, Image Processing System), the updated transformed version of the medical image on the electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device) via the graphical user interface (‘191; column 6, lines 13-20, Display device 33 may be combined with processor 204, non-transitory memory 206, and/or user input device 32 in a shared enclosure, or may be peripheral display devices and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view medical images, and/or interact with various data stored in non-transitory memory 206). Regarding claim 14, Litwiller and Liu teach the method of claim 13 and further teach wherein the editing tool comprises interactive parameter control information (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time. If at least one texture parameter of the texture model is edited, the display effect of a corresponding texture ball 750 may also be changed accordingly, so as to facilitate the user to evaluate the effect of editing….In some embodiments, the texture ball 750 may be directly displayed on the main interface of the user interface 700. In some embodiments, the texture ball 750 may be displayed on a secondary interface of the user interface 700, and the secondary interface may be displayed in front of the user by performing an operation on at least one interface element of the user interface 700 by, for example, opening a menu item through the mouse or long pressing an image region through the touch screen, or the like), and wherein the editing tool facilitates receiving the user input in association with adjusting the interactive parameter control information via the graphical user interface (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. 
The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions). Regarding claim 15, Litwiller and Liu teach the method of claim 12 and further teach wherein generating the transformed version of the medical image comprises: predicting (‘191; column 18, lines 35-47; …the deep neural network and the one or more acquisition parameter transforms may be trained in alternating phases, wherein during a first phase, parameters of the deep neural network are held fixed, while parameters of the one or more acquisition parameter transforms is adjusted based on the training data. During a second phase, parameters of the one or more acquisition parameter transforms may be held fixed while the parameters of the deep neural network are adjusted based on the training data. Alternation of training phases may continue until a threshold accuracy of prediction is met, or until the parameters of the one or more acquisition parameter transforms and the deep neural network have converged), via a neural network (‘191; fig. 2, element 324, Deep Neural Network) of the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), values of parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning) based on processing the medical image or a down sampled version of the medical image via the neural network (‘191; fig. 2, element 324, Deep Neural Network); generating, via a transformation module (‘191; fig. 
1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) of the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), the transformation function using the values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks); and applying, via the transformation module (‘191; fig. 2, element 324, Deep Neural Network), the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) to the medical image (‘191; fig. 2, element 310, Blurred Image), resulting in generation of the transformed version of the medical image (‘191; fig. 2, element 320, deblurred image). Regarding claim 16, Litwiller and Liu teach the method of claim 15 and further teach wherein the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) comprises mapping information defining a mapping between input pixel intensities of respective pixels of the medical image and output pixel intensities for corresponding pixels of the transformed version (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. 
Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning), and wherein generating the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) comprises generating the mapping information in accordance with the values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) and predefined relationships between the parameters (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning). Regarding claim 17, Litwiller and Liu teach the method of claim 16 and further teach wherein the applying comprises adjusting the input pixel intensities in accordance with the mapping information (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – mapping transformation function – with additional detail in column 8, lines 31-38, Deep neural network 324 comprises learned convolutional filters 314, and learned deconvolutional filters 318. Deep neural network 324 may further comprise one or more densely connected layers (not shown), and one or more pooling layers (not shown), one or more up sampling layers (not shown), and one or more ReLU layers (not shown), or any layers conventional in the art of machine learning), and wherein the mapping information corresponds to a graphical look-up curve (‘526; ¶ 0164, The user may obtain the texture model by fitting the at least one color parameter and its corresponding grayscale parameter. The fitting approach may be linear fitting, curve fitting, segmentation fitting, or the like. The algorithms for implementing fitting may be least squares, the Gaussian algorithm, the Ransac algorithm, the Levenberg-Marquardt algorithm, the trust-region-reflective algorithm, etc.). Regarding claim 18, Litwiller and Liu teach the method of claim 12 and further teach the method as further comprising: training, by the system (‘191; fig. 
1; element 31, Image Processing System), the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) based on a training dataset (‘191; column 5, lines 55-57, In some embodiments, medical image data 214 may include a plurality of training data pairs comprising pairs of blurred and sharp medical images), wherein the training dataset (‘191; column 5, lines 55-57, In some embodiments, medical image data 214 may include a plurality of training data pairs comprising pairs of blurred and sharp medical images) includes training medical images (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images) and ground-truth transformed versions of the training medical images (‘191; column 13, lines 47-52, CNN architecture 400 may be trained by calculating a difference between a predicted deblurred medical image, and a ground truth deblurred medical image, wherein the ground truth deblurred medical image may comprise a medical image without blurring artifacts.). Regarding claim 19, Litwiller and Liu teach the method of claim 18 and further teach wherein the training comprises, for each training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images): predicting (‘191; column 18, lines 35-47; …the deep neural network and the one or more acquisition parameter transforms may be trained in alternating phases, wherein during a first phase, parameters of the deep neural network are held fixed, while parameters of the one or more acquisition parameter transforms is adjusted based on the training data. During a second phase, parameters of the one or more acquisition parameter transforms may be held fixed while the parameters of the deep neural network are adjusted based on the training data. Alternation of training phases may continue until a threshold accuracy of prediction is met, or until the parameters of the one or more acquisition parameter transforms and the deep neural network have converged), via a neural network (‘191; fig. 
2, element 324, Deep Neural Network) of the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), training values of the parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) based on processing the training medical image or a down sampled version of the training medical image (‘191; column 18, lines 9-23, The deep neural network(s) may be trained by using a plurality pairs of blurred medical images and corresponding sharp (or pristine) images. In some embodiments, in a sharp-blurred medical image pair, the blurred medical image is reconstructed from the acquired raw data by a medical device while the sharp image is obtained by processing the blurred image through known denoising/deblurring methods or any combination thereof. In some embodiments, in a sharp-blurred medical image pair, the sharp and blurred images are acquired for the same anatomical region but with different acquisition parameters. The blurred images are used as input to the deep neural network and the sharp images are used as the ground truth for reference) via the neural network (‘191; fig. 2, element 324, Deep Neural Network); generating, via the transformation module (‘191; fig. 1, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), a tailored version of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) for the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images) using the training values (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks); applying, via the transformation module (‘191; fig. 
1, element 208, element 208; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model), the tailored version of the transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function) to the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images), resulting in generation of a training transformed version of the training medical image (‘191; column 5, lines 38-46; In some embodiments, training module 212 includes instructions for generating training data pairs by applying/adding one or more blurring artifacts to sharp medical images to produce a blurred medical image - training medical images); and tuning, by the system (‘191; fig. 1; element 31, Image Processing System), network parameters (‘191; column 4, lines 55-60, Deep neural network module 208 may include one or more deep neural networks, comprising a plurality of parameters (including weights, biases, activation functions), and instructions for implementing the one or more deep neural networks) of the neural network based on a measure of loss (‘191; column 5, lines 31-36, In some embodiments, training module 212 includes instructions for implementing one or more gradient descent algorithms, applying one or more loss functions, and/or training routines, for use in adjusting parameters of one or more deep neural networks of deep neural network module 208 and/or acquisition parameter transforms of acquisition parameter transform module 210) between the training transformed version and a corresponding ground-truth transformed version of the training medical image (‘191; column 13, lines 47-52, CNN architecture 400 may be trained by calculating a difference between a predicted deblurred medical image, and a ground truth deblurred medical image, wherein the ground truth deblurred medical image may comprise a medical image without blurring artifacts.). Regarding claim 20, Litwiller teaches a non-transitory machine-readable storage medium (‘191; fig. 1, element 206, non-transitory memory), comprising executable instructions that, when executed by a processor (‘191; Image processing system 31 includes a processor 204 configured to execute machine readable instructions stored in non-transitory memory 206), facilitate performance of operations, comprising: generating a transformed version of a medical image (‘191; fig. 2, element 320, deblurred image) via execution of an artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) on the medical image (‘191; fig. 
2, element 310, blurred input image – medical image), wherein the artificial intelligence transformation model (‘191; column 5, lines 48-50, The deep neural network module 208 includes trained and validated network(s) - artificial intelligence transformation model) comprises a transformation function (‘191; column 8, lines 26-31, Input layers 322, comprising acquisition layers 306, parameter maps 308, and blurred image 310, may be propagated through the plurality of layers within deep neural network 324, to map intensity values of blurred image 310 to intensity values of deblurred image 320 – the described mapping process is the transformation function); rendering (‘191; column 20, lines 36-37, At operation 712, a deblurred medical image is generated (rendered) using the second output from the deep neural network), by the system (‘191; fig. 1; element 31, Image Processing System), the transformed version of the medical image on an electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device); and rendering the updated transformed version of the medical image on the electronic display (‘191; column 20, lines 54-55; At operation 714, the image processing system displays the deblurred medical image – renders for display - via a display device); and does not teach providing, via a graphical user interface rendered on the electronic display, an editing tool that facilitates receiving user input indicating an adjustment to one or more values of one or more parameters of the transformation function that control an appearance of the transformed version; in response to reception of the user input, updating the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function; generating an updated transformed version of the medical image via application of the updated version of the transformation function to the medical image. Liu, working in the same field of endeavor, however, teaches providing, via a graphical user interface rendered on the electronic display, an editing tool that facilitates receiving user input indicating an adjustment to one or more values of one or more parameters (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time. If at least one texture parameter of the texture model is edited, the display effect of a corresponding texture ball 750 may also be changed accordingly, so as to facilitate the user to evaluate the effect of editing….In some embodiments, the texture ball 750 may be directly displayed on the main interface of the user interface 700. In some embodiments, the texture ball 750 may be displayed on a secondary interface of the user interface 700, and the secondary interface may be displayed in front of the user by performing an operation on at least one interface element of the user interface 700 by, for example, opening a menu item through the mouse or long pressing an image region through the touch screen, or the like) (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. 
The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions.), and in response to reception of the user input, updating the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function (‘526; figs. 7A-7B; ¶ 0203, The functional section 704 (e.g., the functional section 704-1 and the functional section 704-2) may be an interface including at least one functional interface element. A functional interface element may correspond to one or more functions of the image processing system 120. The functional interface element may include a text or a pattern describing its function. The functional interface element may be at least one of a text box 714, a button 712, a slider 715, a selection box 716, or the like. The text box 714 may be used to display or input at least one parameter (e.g., one image segmentation parameter, one texture parameter). The button 712 may be used to confirm the execution of user operations (e.g., image selection) or functions (e.g., image segmentation, image recognition and texture model association). For example, the user may click on the button 712 through the human interface device 130 to confirm the execution of the image segmentation operation, or the like. The slider 712 may be applied to the adjustment of one or more parameter values. For example, the user may visually change the grayscale parameter value of the image through dragging the slider 715. The selection box 716 may be used to control the execution of the operation predetermined by the system. For example, the user may select whether to execute an operation of adding a reflection effect to a texture model through clicking on the selection box 716. The functional section 704 may include other types of functional interface elements. The user's operation on the functional interface elements may be transformed into user instructions.); generating an updated transformed version of the medical image via application of the updated version of the transformation function to the medical image (‘526; 0079; ¶ 0211; In some embodiments, the display effect of the texture ball 750 may be generated according to the corresponding texture model in real time.) 
for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for providing a graphical user interface that comprises an editing tool that facilitates receiving user input indicating an adjustment to one or more of the values that control an appearance of the transformed version, wherein in response to reception of the user input, the transformation module updates the transformation function in accordance with the adjustment, resulting in an updated version of the transformation function, and applies the updated version of the transformation function to the medical image, resulting in generation of an updated transformed version of the medical image as taught by Liu with the methods for deblurring (enhancing) digital medical images using deep neural network technology as taught by Litwiller for the benefit of improving the user’s image adjustment efficiency for editing/modifying image display effects and improving their overall interaction experience. Conclusion The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure: US 11557036 B2 Method and System for Image Registration Using an Intelligent Artificial Agent – Methods and systems for image registration using an intelligent artificial agent are disclosed. In an intelligent artificial agent based registration method, a current state observation of an artificial agent is determined based on the medical images to be registered and current transformation parameters. Action-values are calculated for a plurality of actions available to the artificial agent based on the current state observation using a machine learning based model, such as a trained deep neural network (DNN). The actions correspond to predetermined adjustments of the transformation parameters. An action having a highest action-value is selected from the plurality of actions and the transformation parameters are adjusted by the predetermined adjustment corresponding to the selected action. The determining, calculating, and selecting steps are repeated for a plurality of iterations, and the medical images are registered using final transformation parameters resulting from the plurality of iterations. US 20210225047 A1 Method and System of Motion Correction For Magnetic Resonance Imaging – A method and system for reducing or removing motion artefacts in magnetic resonance (MR) images, the method including the steps of: receiving a motion corrupted MR image; determining a corrected intensity value for each pixel in the motion corrupted MR image by using a neural network; and generating a motion corrected MR image based on the determined corrected intensity values for the pixels in the motion corrupted MR image. In some embodiments, the last layer of the CNN may be a regression layer that outputs a corrected intensity value for each pixel in the motion corrupted MR image. For a regression network, the loss function used in the training stage may be mean squared error, or any other suitable loss function, such as mean absolute error or mean percentage error. Further, in some other embodiments, the CNN used in the motion correction module 130 may not be an encoder-decoder CNN, but any other suitable type of image to image mapping CNN. 
US 20220130084 A1 Systems and Methods for Medical Image Processing Using Deep Neural Network – Methods and systems are provided for processing medical images using deep neural networks. In one embodiment, a medical image processing method comprises receiving a first medical image having a first characteristic and one or more acquisition parameters corresponding to acquisition of the first medical image, incorporating the one or more acquisition parameters into a trained deep neural network, and mapping, by the trained deep neural network, the first medical image to a second medical image having a second characteristic. The deep neural network may thereby receive at least partial information regarding the type, extent, and/or spatial distribution of the first characteristic in a first medical image, enabling the trained deep neural network to selectively convert the received first medical image. US 20230071535 A1 Learning-Based Domain Transformation for Medical Images – Systems/techniques that facilitate learning-based domain transformation for medical images are provided. In various embodiments, a system can access a medical image. In various aspects, the medical image can depict an anatomical structure according to a first medical scanning domain. In various instances, the system can generate, via execution of a machine learning model, a predicted image based on the medical image. In various aspects, the predicted image can depict the anatomical structure according to a second medical scanning domain that is different from the first medical scanning domain. In some cases, the first and second medical scanning domains can be first and second energy levels of a computed tomography (CT) scanning modality. In other cases, the first and second medical scanning domains can be first and second contrast phases of the CT scanning modality. Zhang et al. Learning Fully Convolutional Networks for Iterative Non-blind Deconvolution - In this paper, we propose a fully convolutional network for iterative non-blind deconvolution. We decompose the non-blind deconvolution problem into image denoising and image deconvolution. We train a FCNN to remove noise in the gradient domain and use the learned gradients to guide the image deconvolution step. In contrast to the existing deep neural network based methods, we iteratively deconvolve the blurred images in a multi-stage framework. The proposed method is able to learn an adaptive image prior, which keeps both local (details) and global (structures) information. Both quantitative and qualitative evaluations on the benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of quality and speed. Valsesia et al. Deep Graph-Convolutional Image Denoising - Non-local self-similarity is well-known to be an effective prior for the image denoising problem. However, little work has been done to incorporate it in convolutional neural networks, which surpass non-local model-based methods despite only exploiting local information. In this paper, we propose a novel end-to-end trainable neural network architecture employing layers based on graph convolution operations, thereby creating neurons with non-local receptive fields. The graph convolution operation generalizes the classic convolution to arbitrary graphs. 
In this work, the graph is dynamically computed from similarities among the hidden features of the network, so that the powerful representation learning capabilities of the network are exploited to uncover self-similar patterns. We introduce a lightweight Edge-Conditioned Convolution which addresses vanishing gradient and over-parameterization issues of this particular graph convolution. Extensive experiments show state-of-the-art performance with improved qualitative and quantitative results on both synthetic Gaussian noise and real noise. Fig. 3 shows the GCDN architecture where the preprocessing stage on the right side (input) of the network is equivalent to the claimed encoder of claim 8 of the instant application. Following through the remainder of the network, one will not find a decoder or its equivalent, satisfying the last limitation of claim 8. Lefkimmiatis Universal Denoising Networks: A Novel CNN Architecture for Image Denoising - We design a novel network architecture for learning discriminative image models that are employed to efficiently tackle the problem of grayscale and color image denoising. Based on the proposed architecture, we introduce two different variants. The first network involves convolutional layers as a core component, while the second one relies instead on non-local filtering layers and thus it is able to exploit the inherent non-local self-similarity property of natural images. As opposed to most of the existing deep network approaches, which require the training of a specific model for each considered noise level, the proposed models are able to handle a wide range of noise levels using a single set of learned parameters, while they are very robust when the noise degrading the latent image does not match the statistics of the noise used during training. The latter argument is supported by results that we report on publicly available images corrupted by unknown noise and which we compare against solutions obtained by competing methods. At the same time the introduced networks achieve excellent results under additive white Gaussian noise (AWGN), which are comparable to those of the current state-of-the-art network, while they depend on a more shallow architecture with the number of trained parameters being one order of magnitude smaller. These properties make the proposed networks ideal candidates to serve as sub-solvers on restoration methods that deal with general inverse imaging problems such as deblurring, demosaicking, superresolution, etc. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO whose telephone number is (571)270-1883. The examiner can normally be reached on M-F from 9AM to 5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at telephone number (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EDWARD MARTELLO/ Primary Examiner, Art Unit 2611
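
Illustrative Code Sketches

The sketches below restate, in deliberately simplified form, the techniques at issue in the rejections above. They are not taken from the '191 or '526 disclosures or from the claims; every function name, parameter name, and value is hypothetical and chosen only for illustration.

The rejection of claims 16 and 17 treats the transformation function as mapping information between input and output pixel intensities, corresponding to a graphical look-up curve. A minimal sketch of that idea, assuming the predicted parameter values take the form of (input, output) control points on an 8-bit intensity curve:

```python
import numpy as np

def build_lookup_curve(control_points):
    """Interpolate hypothetical (input, output) control points into a
    256-entry look-up table over 8-bit pixel intensities."""
    xs, ys = zip(*sorted(control_points))
    curve = np.interp(np.arange(256), xs, ys)
    return np.clip(curve, 0, 255).astype(np.uint8)

def apply_transformation_function(image, curve):
    """Map each input pixel intensity to its output intensity via the curve."""
    return curve[image]

# Hypothetical predicted parameter values: control points that lift mid-tones.
curve = build_lookup_curve([(0, 0), (64, 90), (128, 170), (255, 255)])
medical_image = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
transformed_version = apply_transformation_function(medical_image, curve)
```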
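
Claims 18 and 19 are mapped to training on pairs of blurred and ground-truth (sharp) images, with network parameters tuned against a measure of loss between the training transformed version and the ground truth. A minimal training-loop sketch under the same caveat, using a toy PyTorch network that predicts a couple of transform parameters and synthetic stand-in image pairs:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the deep neural network discussed in the claim 18/19
# mapping: it predicts a few transformation-function parameters from a blurred
# training image (architecture, sizes, and names are illustrative only).
class ParameterPredictor(nn.Module):
    def __init__(self, n_params: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def apply_tailored_transform(image, params):
    """Placeholder 'tailored transformation function': a per-image gain/offset."""
    gain = params[:, 0].view(-1, 1, 1, 1)
    offset = params[:, 1].view(-1, 1, 1, 1)
    return image * gain + offset

# Synthetic pairs standing in for blurred / ground-truth sharp training images.
train_pairs = [(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)) for _ in range(8)]

model = ParameterPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for blurred, ground_truth in train_pairs:
    params = model(blurred)                                   # predict training values
    transformed = apply_tailored_transform(blurred, params)   # training transformed version
    loss = loss_fn(transformed, ground_truth)                 # measure of loss vs. ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                          # tune network parameters
```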
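
Claims 13 and 20 concern an editing tool through which user input adjusts parameter values, the transformation function is updated accordingly, and an updated transformed image is regenerated and re-rendered. A minimal sketch of that flow, with a print statement standing in for rendering on the graphical user interface and a simple gain/offset transform standing in for the model-derived transformation function:

```python
import numpy as np

def render(image):
    """Stand-in for drawing the image via the graphical user interface."""
    print("rendered image; mean intensity =", float(image.mean()))

def make_transformation_function(params):
    """Build a transformation function from editable parameter values
    (a gain/offset mapping is used here purely for illustration)."""
    def transform(image):
        out = image.astype(np.float32) * params["gain"] + params["offset"]
        return np.clip(out, 0, 255).astype(np.uint8)
    return transform

# Initial parameter values, standing in for those produced by the AI model.
params = {"gain": 1.2, "offset": 5.0}
medical_image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

render(make_transformation_function(params)(medical_image))  # initial transformed version

def on_user_adjustment(name, value):
    """Editing-tool callback: update the parameter, rebuild the transformation
    function, regenerate the transformed image, and re-render it."""
    params[name] = value
    updated_transform = make_transformation_function(params)
    render(updated_transform(medical_image))

on_user_adjustment("gain", 0.9)  # e.g. the user drags a slider
```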

Prosecution Timeline

Mar 26, 2024
Application Filed
Dec 23, 2025
Non-Final Rejection — §102, §103
Mar 11, 2026
Applicant Interview (Telephonic)
Mar 12, 2026
Examiner Interview Summary
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602839
Systems And Methods For Retrieving Information Associated With Contents Of A Container Using Augmented Reality
2y 5m to grant · Granted Apr 14, 2026
Patent 12602868
REMOTE REPRODUCTION METHOD, SYSTEM, AND APPARATUS, DEVICE, MEDIUM, AND PROGRAM PRODUCT
2y 5m to grant · Granted Apr 14, 2026
Patent 12578909
OPTICAL LINK SUPPORTING DISPLAY PORT
2y 5m to grant · Granted Mar 17, 2026
Patent 12573067
SHAPE AND POSE ESTIMATION FOR OBJECT PLACEMENT
2y 5m to grant · Granted Mar 10, 2026
Patent 12567192
COMPUTER-IMPLEMENTED METHOD FOR CONTROLLING A VIRTUAL AVATAR
2y 5m to grant · Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
92%
With Interview (+19.4%)
3y 0m
Median Time to Grant
Low
PTA (Patent Term Adjustment) Risk
Based on 747 resolved cases by this examiner. Grant probability derived from career allow rate.
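
The page does not state how these figures relate, but they are consistent with a simple reading: the 73% grant probability is the career allow rate, and the with-interview figure adds the +19.4-point interview lift. A sketch of that assumed arithmetic:

```python
# Assumed reconstruction of the displayed projection figures; the exact formula
# is not stated on the page, so this is illustrative only.
career_allow_rate = 0.73   # shown as the 73% grant probability
interview_lift = 0.194     # shown as the +19.4% interview lift, read as percentage points

with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.1%}")  # 92.4%, displayed rounded as 92%
```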
