Prosecution Insights
Last updated: April 18, 2026
Application No. 17/974,298

METHOD FOR GENERATING MAGNETIC RESONANCE IMAGE AND MAGNETIC RESONANCE IMAGING SYSTEM

Status: Final Rejection (§103)
Filed: Oct 26, 2022
Examiner: BONANSINGA, AARON TIMOTHY
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: GE Precision Healthcare LLC
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
Projected OA Rounds: 5-6
Projected Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 19 granted / 25 resolved; +14.0% vs Tech Center average)
Interview Lift: strong, +33.3% higher allow rate among resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 29 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs Tech Center average)
§103: 69.6% (+29.6% vs Tech Center average)
§102: 10.3% (-29.7% vs Tech Center average)
§112: 9.2% (-30.8% vs Tech Center average)

Tech Center averages are estimates; figures are based on career data from 25 resolved cases.
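The arithmetic behind these panel figures is mechanical. The following is an illustrative sketch only: the dataset, field names, and interview split are hypothetical, shaped merely to echo the 19 granted / 25 resolved figure shown above, not taken from the vendor's actual model or data.

```python
# Illustrative sketch only: how dashboard metrics like "Career Allow Rate"
# and "Interview Lift" could be computed from per-case outcome records.
# The dataset and field names here are hypothetical assumptions.

def allow_rate(cases):
    """Share of resolved cases (granted or abandoned) that were granted."""
    resolved = [c for c in cases if c["outcome"] in ("granted", "abandoned")]
    granted = sum(1 for c in resolved if c["outcome"] == "granted")
    return granted / len(resolved) if resolved else 0.0

def interview_lift(cases):
    """Allow-rate difference (percentage points) with vs. without an interview."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data shaped to echo the 19 granted / 25 resolved headline figure.
cases = (
    [{"outcome": "granted", "interview": True} for _ in range(10)]
    + [{"outcome": "granted", "interview": False} for _ in range(9)]
    + [{"outcome": "abandoned", "interview": False} for _ in range(6)]
)

print(f"Career allow rate: {allow_rate(cases):.0%}")    # 76%
print(f"Interview lift: {interview_lift(cases):+.0%}")  # +40% on this toy data
```

The dashboard's +33.3% lift would come from the examiner's real per-case history; the toy split here yields a different number and is only meant to show the arithmetic.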

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Response to Arguments

Applicant's arguments (see remarks below), filed 02/19/2026, with respect to claims 1-9 and 11-15 have been fully considered but are not persuasive.

Rejections Under 35 U.S.C. §103

On page 6, the Applicant argues that "Hilbert and Xing, taken alone or in hypothetical combination, fail to teach or suggest 'simultaneously generating a plurality of quantitative maps on the basis of a single raw image', as recited in independent claim 1 and similarly recited in independent claim 11." In response, the Office finds this argument unpersuasive. Based on the breadth of the claim language, the prior art of XING et al. (US 20210313046 A1) explicitly teaches simultaneously generating a plurality of quantitative maps on the basis of a single raw image, the single raw image being obtained by executing a magnetic resonance scan sequence, and the magnetic resonance scan sequence having a plurality of scan parameters (Fig. 1B; paragraph [0025]: XING discloses that, for T.sub.1 mapping, the method uses a deep neural network 108 to derive quantitative T.sub.1 and proton density maps from a single conventional T.sub.1 weighted image 106 acquired in routine clinical practice, as illustrated in FIG. 1B; with the use of the deep neural network, only one T.sub.1 weighted image 106 is required for the generation of a quantitative T.sub.1 map; paragraph [0026]: XING discloses that a T.sub.2 map can be produced from a single T.sub.2 or T.sub.2/T.sub.1 weighted image using a trained deep neural network.
In this way, qualitative and quantitative MR images can be obtained in routine clinical practice without changing the imaging protocol or performing multiple scans. In paragraph [0036], XING discloses that generative adversarial networks with various architectures may be used (wherein a single qualitative MRI image is obtained by executing a magnetic resonance scan sequence and used to generate multiple quantitative maps through the application of a trained machine learning model)).

On page 7, the Applicant argues that "Xing is completely silent with regard to simultaneously generating multiple quantitative maps based on a single raw image. A quantitative T1 weighted image is not the equivalent of a raw MR image. In sharp contrast, a quantitative T1 weighted image is a highly processed, derived image. In particular, to obtain a weighted image an inverse Fourier Transform has to be applied to raw data. Similarly, processing is involved in generating a quantitative image. A quantitative weighted MR image is a post-processed map or synthetic image, not raw data." In response, the Office finds this argument unpersuasive for the reasons stated above and below. Furthermore, XING et al. (US 20210313046 A1) is respectfully not silent with regard to simultaneously generating multiple quantitative maps based on a single raw image. XING trains and implements a machine learning model to generate multiple quantitative maps from a single MRI image. During implementation, a single qualitative MRI image is obtained without post-processing by executing a magnetic resonance scan sequence. Please see paragraphs [0023], [0025]-[0026], [0031]-[0032], and [0040].

On page 7, the Applicant argues that "Applicant respectfully submits that Hilbert and Xing, taken alone or in hypothetical combination, do not teach or suggest all of the recitations of independent claims 1 and 11, and thus cannot support a prima facie case of obviousness with respect to these claims." In response, the Office finds this argument unpersuasive for the reasons stated above and below.

On page 7, the Applicant argues that "Accordingly, Applicant respectfully requests withdrawal of the rejection of claims 1, 2, 6, 7, 9, 11, 12, and 15 under 35 U.S.C. §103 and allowance of the same." In response, the Office finds this argument unpersuasive for the reasons stated above and below.

On page 7, the Applicant argues that "Dependent Claims 3-5, 8, and 13-14 Claims 3-5 and 8 depend from claim 1, and claims 13 and 14 depend from claim 11…The other cited references do not cure the deficiencies of Hilbert and Xing. Therefore, a prima facie case of obviousness under 35 U.S.C. § 103 is not established as to claims 3-5, 8, and 13-14. Applicant thus respectfully requests withdrawal of the rejection of claims 3-5, 8, and 13-14 and allowance of the same." In response, the Office finds this argument unpersuasive for the reasons stated above and below.

On page 8, the Applicant argues that "In view of the remarks set forth above, Applicant respectfully requests withdrawal of the rejection and allowance of the pending claims." In response, the Office finds this argument unpersuasive for the reasons stated above and below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-7, 9, 11-12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over HILBERT et al. (US 20200333414 A1), hereinafter HILBERT, in view of XING et al. (US 20210313046 A1), hereinafter XING.

Regarding claim 1, HILBERT explicitly teaches a method for generating a magnetic resonance image (Fig. 1; paragraph [0025]: HILBERT discloses using, for the generation of quantitative maps, a quantitative acquisition strategy which measures quantitative parameters in a way that additional contrast information is sampled together with the quantitative parameters for generating a corresponding synthetic (i.e. simulated) image based on physical signal models), comprising: performing image conversion on the plurality of quantitative maps on the basis of the plurality of scan parameters (Fig. 2; paragraph [0027]: HILBERT discloses that at step 101, the system uses a first quantitative MRI acquisition technique.
From the use of the first quantitative acquisition technique, the system generates a first quantitative map for the first quantitative parameter, e.g. a T1 map. In paragraph [0029]-HILBERT discloses at step 102, the system uses a second quantitative MRI acquisition technique, preferentially a T2 mapping acquisition technique the acquisition of the second quantitative map (wherein the first acquisition technique also generates at the same time a quantitative proton-density map with additional weighting and a quantitative map free of the additional weighting, and the second acquisition technique generates a second proton-density weighted image or quantitative map)) to generate a first converted image and a second converted image (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first quantitative proton-density map, and d) the second quantitative proton-density map, as inputs in a contrast synthetization module which contains a physical signal model given by Eq. (2), wherein the inputs are used to generate a synthetic image M with arbitrary TE, TR and TI. The system contains a user interface with a contrast switch enabling to automatically switch between the first contrast component, the second contrast component and the initial contrast component (i.e. no contrast—initial image) when displaying, at step 104, a synthetic image of the biological object. The system is configured for displaying on a display at least two different contrasts at the same time for the biological object. The fat signal or MT-weighting could be turned on and off by switching between M0.sub.P, M0.sub.W, and M0.sub.M when using equation (4) for the physical signal model used to generate the synthetic image M); generating a fused image of the first converted image and the second converted image (Fig. 1. 
Paragraph [0031]-HILBERT discloses at least 3 synthetic images M might be displayed by the system, either at the same time, or by switching from one of the synthetic images to the other one by selecting the appropriate initial magnetization M0.sub.P, M0.sub.W, or M0.sub.M via the contrast switch. Further in paragraph [0034]-HILBERT discloses the obtained maps and images are used as input in a contrast synthetization module 24 of the processing unit 203 (wherein the contrast synthetization module 24 contains a physical signal model (contrast mechanism) as shown in Eq. 4 configured for generating a synthetic image M of the biological object from said inputs). The contrast switch is preferentially configured for enabling a switch between a first synthetic image generated by using the contrast component C.sub.i, with i≥1, and a second synthetic image generated by using the contrast component C.sub.0, in order to switch on/off the corresponding contrast); and generating a plurality of quantitative weighted images on the basis of the fused image (Fig. 1. Paragraph [0030]-HILBERT discloses the system is configured for displaying on a display at least two different contrasts at the same time for the biological object. Further in paragraph [0034]-HILBERT discloses the user interface 205 is further configured for enabling a user to choose the desired synthetic sequence parameters TE, TR, TI. By means of the user interface 25 and its contrast switch, a user may choose to display any of the weighted contrast on a map of the biological object shown then on the display 204 of the system 200, like T2 weighted image, T2 weighted image WE, T1 weighted image, T1 weighted image WE, PD image, PD WE image, or STIR image. A user may switch between different types of preparation contrast by turning the fat signal and a MT-weighting in synthetic contrasts “on” or “off”). 
HILBERT fails to explicitly teach simultaneously generating a plurality of quantitative maps on the basis of a single raw image, the single raw image being obtained by executing a magnetic resonance scan sequence, and the magnetic resonance scan sequence having a plurality of scan parameters.

However, XING explicitly teaches simultaneously generating a plurality of quantitative maps on the basis of a single raw image, the single raw image being obtained by executing a magnetic resonance scan sequence, and the magnetic resonance scan sequence having a plurality of scan parameters (Fig. 1B; paragraph [0025]: XING discloses that, for T.sub.1 mapping, the method uses a deep neural network 108 to derive quantitative T.sub.1 and proton density maps from a single conventional T.sub.1 weighted image 106 acquired in routine clinical practice, as illustrated in FIG. 1B; with the use of the deep neural network, only one T.sub.1 weighted image 106 is required for the generation of a quantitative T.sub.1 map; paragraph [0026]: a T.sub.2 map can be produced from a single T.sub.2 or T.sub.2/T.sub.1 weighted image using a trained deep neural network, so that qualitative and quantitative MR images can be obtained in routine clinical practice without changing the imaging protocol or performing multiple scans; paragraph [0036]: generative adversarial networks with various architectures may be used).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT, directed to a method for generating a magnetic resonance image, with the teachings of XING, directed to simultaneously generating a plurality of quantitative maps on the basis of a single raw image obtained by executing a magnetic resonance scan sequence having a plurality of scan parameters.
In the combination, HILBERT's method would simultaneously generate a plurality of quantitative maps on the basis of a single raw image, the single raw image being obtained by executing a magnetic resonance scan sequence, and the magnetic resonance scan sequence having a plurality of scan parameters. The motivation for the modification would have been to obtain a method that improves the generation of synthetic quantitative MRI images and the performance of neural networks, since both HILBERT and XING concern processing quantitative MRI images and mappings. HILBERT's systems and methods improve the ability of radiologists to form diagnoses and improve the generation of synthetic images based on quantitative maps by using additional weightings and providing a large variety of contrasts based on short acquisition times on top of the quantitative information, while XING's systems and methods improve the accuracy and efficiency of generating quantitative maps, require only a single image as the initial input, and implement a neural network architecture that achieves a balance between computational workload and performance. Please see HILBERT et al. (US 20200333414 A1), Abstract and paragraphs [0015] and [0025], and XING et al. (US 20210313046 A1), Abstract and paragraphs [0030] and [0039]-[0040].

Regarding claim 2, HILBERT in view of XING explicitly teaches the method according to claim 1. HILBERT fails to explicitly teach wherein the generating a plurality of quantitative maps on the basis of the single raw image comprises: generating a plurality of quantitative maps by performing deep learning processing on the single raw image on the basis of a first deep learning network, wherein the plurality of quantitative maps comprise at least one of a quantitative T1 value, a quantitative T2 value, and a quantitative proton density value. However, XING explicitly teaches wherein the generating a plurality of quantitative maps on the basis of the single raw image (Fig. 1B.
Paragraph [0024]-XING discloses the inventors have discovered that deep learning enables the acquisition and utilization of generic a priori information to predict quantitative MRI data from a single qualitative MR image) comprises: generating a plurality of quantitative maps by performing deep learning processing (Fig. 1B. Paragraph [0025]-XING discloses for T.sub.1 mapping, the method uses a deep neural network 108 to derive quantitative T.sub.1 and proton density maps from a single conventional T.sub.1 weighted image 106 acquired in routine clinical practice, as illustrated in FIG. 1B. With the use of the deep neural network, only one T.sub.1 weighted image 106 is required for the generation of a quantitative T.sub.1 map. Further in paragraph [0026]-XING discloses a T.sub.2 map can be produced from a single T.sub.2 or T.sub.2/T.sub.1 weighted image using a trained deep neural network. In this way, qualitative and quantitative MR images can be obtained in the routine clinical practice without changing the imaging protocol or performing multiple scans) on the single raw image on the basis of a first deep learning network (Fig. 1B, #108 called a Deep learning network. Paragraph [0036]. Further in paragraph [0040]-XING discloses generative adversarial networks with various architectures may be used), wherein the plurality of quantitative maps comprise at least one of a quantitative T1 value (Fig. 5A-E. Paragraph [0019]-XING discloses FIG. 5A-E show images illustrating prediction of quantitative T.sub.1 map from a T.sub.1 weighted image using T-net), a quantitative T2 value (Fig. 7A-E. Paragraph [0021]-XING discloses FIG. 7A-E shows images illustrating prediction of quantitative T.sub.2 map from a single T.sub.2/T.sub.1 weighted image using T-net), and a quantitative proton density value (Fig. 6A-E. Paragraph [0020]-XING discloses FIG. 6A-E show images illustrating prediction of quantitative proton density (PD) map from a T.sub.1 weighted image using T-net). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING, directed to a method for generating a magnetic resonance image, with the teachings of XING, directed to generating a plurality of quantitative maps by performing deep learning processing on the single raw image on the basis of a first deep learning network, wherein the plurality of quantitative maps comprise at least one of a quantitative T1 value, a quantitative T2 value, and a quantitative proton density value. In the combination, HILBERT's method would generate the plurality of quantitative maps in this manner. The motivation for the modification would have been to obtain a method that improves the generation of synthetic quantitative MRI images and the performance of neural networks, since both HILBERT and XING concern processing quantitative MRI images and mappings. HILBERT's systems and methods improve the ability of radiologists to form diagnoses and improve the generation of synthetic images based on quantitative maps by using additional weightings and providing a large variety of contrasts based on short acquisition times on top of the quantitative information, while XING's systems and methods improve the accuracy and efficiency of generating quantitative maps, require only a single image as the initial input, and implement a neural network architecture that achieves a balance between computational workload and performance. Please see HILBERT et al. (US 20200333414 A1), Abstract and paragraphs [0015] and [0025], and XING et al. (US 20210313046 A1), Abstract and paragraphs [0030] and [0039]-[0040].

Regarding claim 6, HILBERT in view of XING explicitly teaches the method according to claim 1. HILBERT further explicitly teaches wherein the plurality of scan parameters comprise echo time, repetition time, and inversion recovery time (Fig. 2; paragraph [0005]: HILBERT discloses that the image contrast of the synthetic image S, usually called "synthetic contrast", depends on sequence parameters (inversion time TI, repetition time TR, echo time TE) and tissue properties (longitudinal relaxation T1, transverse relaxation T2, and initial magnetization M0); paragraph [0013]: HILBERT discloses that the quantitative maps Q1=T1, Q2=T2 and the sequence parameters P1=TI, P2=TR, P3=TE can be used to create synthetic maps in conjunction with the known contrast mechanisms of relaxation (i.e. T1 and T2); please also read paragraphs [0030] and [0034]).

Regarding claim 7, HILBERT in view of XING (and in further view of BRADY-KALNAY) explicitly teaches the method according to claim 6. HILBERT further explicitly teaches wherein the performing image conversion on the plurality of quantitative maps on the basis of the plurality of scan parameters to generate a first converted image and a second converted image comprises: generating the first converted image (Fig.
2. Paragraph [0027]-HILBERT discloses at step 101, the system uses a first quantitative MRI acquisition technique, which is preferentially a T1 mapping acquisition technique. Further in paragraph [0028]-HILBERT discloses the first quantitative MRI acquisition technique enables at the same time to generate at least a first contrast component which is for instance a first contrast-weighted image for the biological object (e.g. a first proton-density image or quantitative proton-density map) with additional weighting and optionally an initial contrast component, which is for instance an initial image (e.g. a proton-density image or map) free of the additional weighting (wherein the first contrast component image is a proton-density image M0.sub.P). Please also read paragraph [0033]) on the basis of a first formula, the first formula having the echo time and the plurality of quantitative maps as variables (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first contrast component, e.g. the proton-density image M0.sub.P with fat signal, d) the second contrast component, e.g. the proton-density image with additional magnetization transfer contrast M0.sub.M, and e) the initial contrast component, which is preferentially the proton-density image or quantitative proton-density map, e.g. M0.sub.W, as inputs in a contrast synthetization module which contains a physical signal model given by Equation (2), wherein the inputs are used to generate a synthetic image M with arbitrary TE, TR and TI. Optionally, the system is configured for displaying on a display at least two different contrasts at the same time for the biological object. 
The fat signal or MT-weighting could be turned on and off by switching between M0.sub.P, M0.sub.W, and M0.sub.M when using the following equation for the physical signal model used to generate the synthetic image M by means of the system according to the invention (wherein TR, TI, and TE in both that equation and equation (2) represent repetition time, inversion time, and echo time, respectively): [the equation appears in the record only as an image attachment, media_image1.png]. Please also read paragraphs [0004]-[0005] and [0033]); and generating the second converted image (Fig. 2; paragraph [0029]: HILBERT discloses that at step 102, the system uses a second quantitative MRI acquisition technique, preferentially a T2 mapping acquisition technique, configured for measuring a value for a second quantitative parameter, e.g. T2, for the biological object. The acquisition of the second quantitative map, e.g. the T2 map, is also used for acquiring a second contrast component, which is a second contrast-weighted image or quantitative map with an additional weighting different from the additional weighting of the first contrast-weighted image or map (wherein the second contrast component image is a proton-density image M0.sub.M); please also read paragraph [0033])) on the basis of a second formula, the second formula having the echo time, the repetition time, the inversion recovery time, and the plurality of quantitative maps as variables (Fig. 2; paragraph [0030]: HILBERT discloses that at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first contrast component, e.g. the proton-density image M0.sub.P with fat signal, d) the second contrast component, e.g. the proton-density image with additional magnetization transfer contrast M0.sub.M, and e) the initial contrast component, which is preferentially the proton-density image or quantitative proton-density map, e.g.
M0.sub.W, as inputs in a contrast synthetization module which contains a physical signal model given by Equation (2), wherein the inputs are used to generate a synthetic image M with arbitrary TE, TR, and TI. Optionally, the system is configured for displaying on a display at least two different contrasts at the same time for the biological object. The fat signal or MT-weighting could be turned on and off by switching between M0.sub.P, M0.sub.W, and M0.sub.M when using the following equation for the physical signal model used to generate the synthetic image M by means of the system according to the invention (wherein TR, TI, and TE in both that equation and equation (2) represent repetition time, inversion time, and echo time, respectively): [the equation appears in the record only as an image attachment, media_image2.png]. Please also read paragraphs [0004]-[0005] and [0033]).

Regarding claim 9, HILBERT in view of XING explicitly teaches the method according to claim 1. HILBERT further teaches wherein the single raw image is obtained by executing a synthesized magnetic resonance scan sequence (Fig. 2; paragraph [0034]: HILBERT discloses that the contrast switch is configured for enabling a switch between a first synthetic image generated by using the contrast component C.sub.i, with i≥1, and a second synthetic image generated by using the contrast component C.sub.0, in order to switch the corresponding contrast on or off. The user interface 205 is further configured for enabling a user to choose the desired synthetic sequence parameters TE, TR, TI. A user may choose to display any of the weighted contrasts on a map of the biological object shown on the display 204 of the system 200, such as a T2 weighted image, T2 weighted image WE, T1 weighted image, T1 weighted image WE, PD image, PD WE image, or STIR image. The invention enables a user to switch between different types of preparation contrast, in the example by turning the fat signal and an MT-weighting in synthetic contrasts "on" or "off").
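The physical signal model invoked in the citations above (HILBERT's Equations (2) and (4)) survives in the prosecution record only as embedded equation images. For orientation, a textbook inversion-recovery spin-echo signal equation of the same general form, offered as a representative example only and not necessarily HILBERT's exact equation, is:

```latex
M(\mathrm{TE}, \mathrm{TR}, \mathrm{TI})
  = M_0 \,\bigl|\, 1 - 2\, e^{-\mathrm{TI}/T_1} + e^{-\mathrm{TR}/T_1} \,\bigr|\;
    e^{-\mathrm{TE}/T_2}
```

In such a model, the TE-dependent factor supplies the T2 weighting associated with the "first formula" of claim 7, while the TI- and TR-dependent terms supply the T1 weighting associated with the "second formula."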
Regarding claim 11, HILBERT explicitly teaches a magnetic resonance imaging system (Fig. 2, #200 called a system. Paragraph [0032]-HILBERT discloses a system 200 for generating synthetic images with switchable image contrasts for a biological object, like a brain), comprising: a scanner, configured to execute a magnetic resonance scan sequence to generate a single raw image, the magnetic resonance scan sequence having a plurality of scan parameters (Fig. 2. Paragraph [0032]-HILBERT discloses the system contains: a) a device 201 for acquiring a quantitative map for the biological object, the device being for instance an MRI apparatus 201 configured for acquiring quantitative maps for the biological object, e.g. brain images of a subject); and an image processing module (Fig. 2, called #203 called a processing unit. Paragraph [0032]-HILBERT discloses a processing unit 203 configured for processing the data required for generating the synthetic image, the processing unit 203 being connected to the device 201 for acquiring imaging data and to the database 202; d) a display 204 for displaying the synthetic image, the display 204 being connected to the processing unit 203), comprising: a conversion processor (Fig. 2, called #203 called a processing unit. Paragraph [0032]), configured to perform image conversion on the plurality of quantitative maps on the basis of the plurality of scan parameters to generate a first converted image and a second converted image (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first quantitative proton-density map, and d) the second quantitative proton-density map, as inputs in a contrast synthetization module which contains a physical signal model given by Eq. (2), wherein the inputs are used to generate a synthetic image M with arbitrary TE, TR and TI. 
The system contains a user interface with a contrast switch enabling to automatically switch between the first contrast component, the second contrast component and the initial contrast component (i.e. no contrast—initial image) when displaying, at step 104, a synthetic image of the biological object. The system is configured for displaying on a display at least two different contrasts at the same time for the biological object. The fat signal or MT-weighting could be turned on and off by switching between M0.sub.P, M0.sub.W, and M0.sub.M when using equation (4) for the physical signal model used to generate the synthetic image M); an image fusion processor, configured to generate a fused image of the first converted image and the second converted image (Fig. 1. Paragraph [0031]-HILBERT discloses at least 3 synthetic images M might be displayed by the system, either at the same time, or by switching from one of the synthetic images to the other one by selecting the appropriate initial magnetization M0.sub.P, M0.sub.W, or M0.sub.M via the contrast switch. Further in paragraph [0034]-HILBERT discloses the obtained maps and images are used as input in a contrast synthetization module 24 of the processing unit 203 (wherein the contrast synthetization module 24 contains a physical signal model (contrast mechanism) as shown in Eq. 4 configured for generating a synthetic image M of the biological object from said inputs). The contrast switch is preferentially configured for enabling a switch between a first synthetic image generated by using the contrast component C.sub.i, with i≥1, and a second synthetic image generated by using the contrast component C.sub.0, in order to switch on/off the corresponding contrast); a second processor, configured to generate a quantitative weighted image on the basis of the fused image (Fig. 1. 
Paragraph [0030]-HILBERT discloses the system is configured for displaying on a display at least two different contrasts at the same time for the biological object. Further in paragraph [0034]-HILBERT discloses the user interface 205 is further configured for enabling a user to choose the desired synthetic sequence parameters TE, TR, TI. By means of the user interface 25 and its contrast switch, a user may choose to display any of the weighted contrast on a map of the biological object shown then on the display 204 of the system 200, like T2 weighted image, T2 weighted image WE, T1 weighted image, T1 weighted image WE, PD image, PD WE image, or STIR image. A user may switch between different types of preparation contrast by turning the fat signal and a MT-weighting in synthetic contrasts “on” or “off”). HILBERT fails to explicitly teach and a first processor, configured to simultaneously generate a plurality of quantitative maps on the basis of the single raw image; However, XING explicitly teaches and a first processor, configured to simultaneously generate a plurality of quantitative maps on the basis of the single raw image (Fig. 1B. Paragraph [0025]-XING discloses for T.sub.1 mapping, the method uses a deep neural network 108 to derive quantitative T.sub.1 and proton density maps from a single conventional T.sub.1 weighted image 106 acquired in routine clinical practice, as illustrated in FIG. 1B. With the use of the deep neural network, only one T.sub.1 weighted image 106 is required for the generation of a quantitative T.sub.1 map. Further in paragraph [0026]-XING discloses a T.sub.2 map can be produced from a single T.sub.2 or T.sub.2/T.sub.1 weighted image using a trained deep neural network. In this way, qualitative and quantitative MR images can be obtained in the routine clinical practice without changing the imaging protocol or performing multiple scans. 
In paragraph [0036]-XING discloses generative adversarial networks with various architectures may be used). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT of having a magnetic resonance imaging system with the teachings of XING of having a first processor configured to simultaneously generate a plurality of quantitative maps on the basis of the single raw image. HILBERT's system, so modified, would have a first processor configured to simultaneously generate a plurality of quantitative maps on the basis of the single raw image. The motivation behind the modification would have been to obtain a system that improves the generation of synthetic quantitative MRI images and the performance of neural networks, since both HILBERT and XING concern processing quantitative MRI images and mappings. HILBERT's systems and methods improve the ability for radiologists to form diagnoses and improve the generation of synthetic images based on quantitative maps by using additional weightings and providing a large variety of contrasts based on short acquisition times on top of the quantitative information, while XING's systems and methods improve the accuracy and efficiency of generating quantitative maps, require only a single image as the initial input, and implement a neural network architecture that achieves a balance between computational workload and performance. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015 and 0025] and XING et al. (US 20210313046 A1), Abstract and Paragraph [0030 and 0039-0040].
Regarding claim 12, HILBERT in view of XING explicitly teach the system according to claim 11. HILBERT fails to explicitly teach wherein the first processor is configured to perform deep learning processing on the single raw image on the basis of a first deep learning network to generate the plurality of quantitative maps. However, XING explicitly teaches wherein the first processor (Fig. 1B. Paragraph [0027]-XING discloses the method may be performed by a conventional MRI scanner using standard imaging protocols, adapted with a neural network to generate the quantitative MRI map(s) from the qualitative image acquired by the scanner using conventional clinical imaging techniques. The deep learning network derives quantitative relaxation parametric maps from a single qualitative MR image. The network may be implemented in the MRI scanner or on an external computer with an Nvidia GeForce GTX 1070 GPU) is configured to perform deep learning processing on the single raw image on the basis of a first deep learning network (Fig. 1B, #108 called a deep learning network. Paragraph [0025]. In paragraph [0036]-XING discloses generative adversarial networks with various architectures may be used (wherein a generative adversarial network consists of two networks). Please also see FIG. 3 and read paragraph [0037-0040]) to generate the plurality of quantitative maps (Fig. 1B. Paragraph [0025]-XING discloses for T.sub.1 mapping, the method uses a deep neural network 108 to derive quantitative T.sub.1 and proton density maps from a single conventional T.sub.1 weighted image 106 acquired in routine clinical practice, as illustrated in FIG. 1B. With the use of the deep neural network, only one T.sub.1 weighted image 106 is required for the generation of a quantitative T.sub.1 map. Further in paragraph [0026]-XING discloses a T.sub.2 map can be produced from a single T.sub.2 or T.sub.2/T.sub.1 weighted image using a trained deep neural network. Please also see Fig. 4-7).
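For illustration only, the "single raw image in, several quantitative maps out" behavior attributed to XING's network 108 can be sketched as a shared trunk with multiple regression heads evaluated in one forward pass. This is a minimal numpy stand-in with random, untrained weights, not XING's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_map_network(raw_image, w_shared, w_t1, w_pd):
    """Toy stand-in for a multi-output mapping network: one forward pass
    maps a single weighted image to several quantitative maps at once.
    Each pixel is lifted to a shared feature vector, then two per-pixel
    'heads' regress the T1 and proton-density values simultaneously.
    """
    h, w = raw_image.shape
    x = raw_image.reshape(h * w, 1)         # (pixels, 1) input intensities
    feat = np.maximum(x @ w_shared, 0.0)    # shared ReLU features
    t1_map = (feat @ w_t1).reshape(h, w)    # head 1: quantitative T1 map
    pd_map = (feat @ w_pd).reshape(h, w)    # head 2: proton-density map
    return t1_map, pd_map

raw = rng.random((4, 4))                    # single weighted image (hypothetical)
w_shared = rng.standard_normal((1, 8))
w_t1 = rng.standard_normal((8, 1))
w_pd = rng.standard_normal((8, 1))

t1_map, pd_map = multi_map_network(raw, w_shared, w_t1, w_pd)
```

The point of the sketch is structural: both maps come out of the same pass over the same single input, which is the "simultaneously generating" behavior at issue in the claim.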
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a magnetic resonance imaging system with the teachings of XING of having wherein the first processor is configured to perform deep learning processing on the single raw image on the basis of a first deep learning network to generate the plurality of quantitative maps. HILBERT's system, so modified, would have a first processor configured to perform deep learning processing on the single raw image on the basis of a first deep learning network to generate the plurality of quantitative maps. The motivation behind the modification would have been to obtain a system that improves the generation of synthetic quantitative MRI images and the performance of neural networks, since both HILBERT and XING concern processing quantitative MRI images and mappings. HILBERT's systems and methods improve the ability for radiologists to form diagnoses and improve the generation of synthetic images based on quantitative maps by using additional weightings and providing a large variety of contrasts based on short acquisition times on top of the quantitative information, while XING's systems and methods improve the accuracy and efficiency of generating quantitative maps, require only a single image as the initial input, and implement a neural network architecture that achieves a balance between computational workload and performance. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015 and 0025] and XING et al. (US 20210313046 A1), Abstract and Paragraph [0030 and 0039-0040]. Regarding claim 15, HILBERT in view of XING explicitly teach the system according to claim 11. HILBERT further teaches wherein the single raw image is obtained by executing a synthesized magnetic resonance scan sequence (Fig. 2.
Paragraph [0032]-HILBERT discloses a processing unit 203 configured for processing the data required for generating the synthetic image, the processing unit 203 being connected to the device 201 for acquiring imaging data and to the database 20. The system 200 according to the invention is configured for performing the steps of the previously described method for generating the synthetic image with switchable image contrasts. Please also read paragraph [0030 and 0033]). Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over HILBERT et al. (US 20200333414 A1), hereinafter referenced as HILBERT, in view of XING et al. (US 20210313046 A1), hereinafter referenced as XING, and in further view of DEY et al. (US 20220107378 A1), hereinafter referenced as DEY. Regarding claim 3, HILBERT in view of XING explicitly teach the method according to claim 1. HILBERT explicitly teaches wherein the plurality of quantitative images are generated (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first contrast component, which is preferentially the first proton-density image or quantitative proton-density map, e.g. the proton-density image M0.sub.P with fat signal, d) the second contrast component, which is preferentially the second proton-density image or quantitative proton-density map, e.g. the proton-density image with additional magnetization transfer contrast M0.sub.M, and e) the initial contrast component, which is preferentially the proton-density image or quantitative proton-density map. Please also read paragraph [0027-0029 and 0034]). However, HILBERT fails to explicitly teach wherein the plurality of quantitative images are generated by performing deep learning processing on the fused image on the basis of a second deep learning network.
However, DEY explicitly teaches wherein the plurality of images are generated by performing deep learning processing on the fused image on the basis of a second deep learning network (Fig. 5. Paragraph [0133]-DEY discloses the MR data 202 may be generated by synthesizing MR data 202 (wherein further description of synthesizing MR data for training machine learning models is provided in US Patent Publication no. 2020-0294282, which is incorporated by reference). In paragraph [0135]-DEY discloses the MR data 202 and the noise data 204 may be combined to form noise-corrupted MR data 208. Combining the MR data 202 and the noise data 204 may comprise any suitable steps (e.g., multiplying, convolving, or otherwise transforming). In paragraph [0137]-DEY discloses the image reconstruction module 210 may be configured to transform the noise-corrupted MR data 208 in the signal domain into the noise-corrupted image 220 in the image domain. The noise-corrupted image 220 may then be provided to, for example, neural network 110 for denoising. In paragraph [0142]-DEY discloses additional processing may be performed after image reconstruction. Combination module 214 may be configured to combine multiple MR images generated based on data acquired from multiple RF coils of the MRI system or to combine multiple MR images generated based on multiple acquisitions of MR data acquired by the same RF coil (wherein a generative adversarial network may also be used, which contains generator 704 and discriminator 712 neural networks). Please also see Fig. 7B and read paragraph [0154-0157]).
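A minimal sketch of the DEY pipeline quoted above: combine MR data with noise data in the signal (k-space) domain, then reconstruct into the image domain, producing the noise-corrupted image that would be fed to a denoising network. The phantom, the additive combination, and the FFT-based reconstruction are all illustrative assumptions, since DEY states the combining "may comprise any suitable steps".

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "clean" MR data in the signal (k-space) domain,
# derived from a simple square phantom (hypothetical data)
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0
mr_data = np.fft.fft2(image)

# Noise data combined with the MR data; plain complex addition is
# used here purely for illustration
noise = (rng.standard_normal(mr_data.shape)
         + 1j * rng.standard_normal(mr_data.shape)) * 5.0
noisy_kspace = mr_data + noise

# Image-reconstruction step: signal domain -> image domain; the
# resulting noise-corrupted image is what a denoising network receives
noisy_image = np.abs(np.fft.ifft2(noisy_kspace))
```

The sketch shows why the reconstruction module sits between the combining step and the network: the corruption is applied in k-space, but the network operates on the image-domain result.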
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a method for generating a magnetic resonance image with the teachings of DEY of having wherein the plurality of images are generated by performing deep learning processing on the fused image on the basis of a second deep learning network. HILBERT's method, so modified, would have the plurality of quantitative images generated by performing deep learning processing on the fused image on the basis of a second deep learning network. The motivation behind the modification would have been to obtain a method that enhances processing speed and image quality, since HILBERT and DEY concern systems and methods for processing and synthesizing medical images. HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while DEY provides systems and methods that use machine learning techniques to improve medical imaging technology by more effectively removing or suppressing noise from medical images acquired using medical imaging techniques or devices for which large training datasets are unavailable. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015] and DEY et al. (US 20220107378 A1), Abstract and Paragraph [0199]. Regarding claim 13, HILBERT in view of XING explicitly teach the system according to claim 11. HILBERT explicitly teaches generating a plurality of quantitative weighted images (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g.
the T2 map, c) the first contrast component, which is preferentially the first proton-density image or quantitative proton-density map, e.g. the proton-density image M0.sub.P with fat signal, d) the second contrast component, which is preferentially the second proton-density image or quantitative proton-density map, e.g. the proton-density image with additional magnetization transfer contrast M0.sub.M, and e) the initial contrast component, which is preferentially the proton-density image or quantitative proton-density map. Please also read paragraph [0027-0029 and 0034]). HILBERT fails to explicitly teach wherein the second processor performs deep learning processing on the fused image on the basis of a second deep learning network to generate a plurality of quantitative weighted images. However, DEY explicitly teaches wherein the second processor performs deep learning processing on the fused image on the basis of a second deep learning network to generate a plurality of images (Fig. 5. Paragraph [0133]-DEY discloses the MR data 202 may be generated by synthesizing MR data 202 (wherein further description of synthesizing MR data for training machine learning models is provided in US Patent Publication no. 2020-0294282, which is incorporated by reference). In paragraph [0135]-DEY discloses the MR data 202 and the noise data 204 may be combined to form noise-corrupted MR data 208. Combining the MR data 202 and the noise data 204 may comprise any suitable steps (e.g., multiplying, convolving, or otherwise transforming). In paragraph [0137]-DEY discloses the image reconstruction module 210 may be configured to transform the noise-corrupted MR data 208 in the signal domain into the noise-corrupted image 220 in the image domain. The noise-corrupted image 220 may then be provided to, for example, neural network 110 for denoising. In paragraph [0142]-DEY discloses additional processing may be performed after image reconstruction.
Combination module 214 may be configured to combine multiple MR images generated based on data acquired from multiple RF coils of the MRI system or to combine multiple MR images generated based on multiple acquisitions of MR data acquired by the same RF coil (wherein a generative adversarial network may also be used, which contains generator 704 and discriminator 712 neural networks). Please also see Fig. 7B and read paragraph [0154-0157]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a system for generating a magnetic resonance image with the teachings of DEY of having wherein the second processor performs deep learning processing on the fused image on the basis of a second deep learning network to generate a plurality of images. HILBERT's system, so modified, would have the second processor perform deep learning processing on the fused image on the basis of a second deep learning network to generate a plurality of quantitative weighted images. The motivation behind the modification would have been to obtain a system that enhances processing speed and image quality, since HILBERT and DEY concern systems and methods for processing and synthesizing medical images. HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while DEY provides systems and methods that use machine learning techniques to improve medical imaging technology by more effectively removing or suppressing noise from medical images acquired using medical imaging techniques or devices for which large training datasets are unavailable. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015] and DEY et al. (US 20220107378 A1), Abstract and Paragraph [0199].
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over HILBERT et al. (US 20200333414 A1), hereinafter referenced as HILBERT, in view of XING et al. (US 20210313046 A1), hereinafter referenced as XING, and further in view of SHI et al. (US 20220207791 A1), hereinafter referenced as SHI. Regarding claim 4, HILBERT in view of XING explicitly teach the method according to claim 1. HILBERT in view of XING fails to explicitly teach wherein the fused image is generated by performing channel concatenation on the first converted image and the second converted image as a preprocessing step prior to input into the second deep learning network. However, SHI explicitly teaches wherein the fused image is generated by performing channel concatenation on the first converted image (Fig. 1, #12a’ called a Primary SPECT patch. Paragraph [0040]) and the second converted image (Fig. 1, #12b’ called a Scatter SPECT patch. Paragraph [0040]) as a preprocessing step prior to input into the second deep learning network (Fig. 1. Paragraph [0040]-SHI discloses a system 100 is disclosed that employs a machine learning system based upon artificial neural networks to estimate attenuation maps for SPECT emission data, wherein the machine learning system includes a generator network 10 and a discriminator network 16 (wherein the generator and discriminator are two deep learning networks). The artificial neural network is in the form of a deep convolutional neural network (CNN), and training of the deep CNN is described. Images reconstructed from photopeak window (126 keV-155 keV) 12a (that is, the primary window) and scatter window (114 keV-126 keV) 12b are concatenated as a multi-channel image and fed into a generator network 10. Please also read paragraph [0042]).
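The channel-concatenation step SHI describes, where the primary-window and scatter-window reconstructions are fed to the generator network as one multi-channel image, reduces to a single array operation. The image contents below are random placeholders, not SHI's data:

```python
import numpy as np

primary = np.random.rand(64, 64)   # e.g. photopeak-window reconstruction
scatter = np.random.rand(64, 64)   # e.g. scatter-window reconstruction

# Channel concatenation as a preprocessing step: the two single-channel
# reconstructions become one 2-channel input for a downstream network
multi_channel = np.stack([primary, scatter], axis=-1)
assert multi_channel.shape == (64, 64, 2)
```

This is the same mechanism the claim recites for the first and second converted images: fusion by stacking along a channel axis before the network, rather than by pixelwise arithmetic.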
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a method for generating a magnetic resonance image with the teachings of SHI of having wherein the fused image is generated by performing channel concatenation on the first converted image and the second converted image as a preprocessing step prior to input into the second deep learning network. HILBERT's method, so modified, would have the fused image generated by performing channel concatenation on the first converted image and the second converted image as a preprocessing step prior to input into the second deep learning network. The motivation behind the modification would have been to obtain a method that enhances processing speed and image quality, since both HILBERT and SHI concern systems and methods for processing and generating medical images. HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while SHI provides systems and methods that allow for the production of realistic attenuation maps with speed and high accuracy. Furthermore, as SHI states in paragraph [0007], deep learning-based approaches have been proposed to estimate images of one modality from another. For example, “initial success was obtained for the task of generating attenuation maps for nuclear images.” In “MR-based synthetic CT generation using a deep convolutional neural network method,” convolutional neural networks were used to convert magnetic resonance imaging (MRI) images to attenuation CT images for PET/MRI systems. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015] and SHI et al. (US 20220207791 A1), Abstract and Paragraph [0003-0007 and 0097-0102].
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over HILBERT et al. (US 20200333414 A1), hereinafter referenced as HILBERT, in view of XING et al. (US 20210313046 A1), hereinafter referenced as XING, and further in view of WANG et al. (US 20110044524 A1), hereinafter referenced as WANG. Regarding claim 5, HILBERT in view of XING explicitly teach the method according to claim 1. HILBERT explicitly teaches wherein the plurality of quantitative maps comprise a quantitative T1 map, a quantitative T2 map, and a quantitative PD map (Fig. 2. Paragraph [0030]-HILBERT discloses at step 103, the system is configured for using: a) the first quantitative map, e.g. the T1 map, b) the second quantitative map, e.g. the T2 map, c) the first contrast component, which is preferentially the first proton-density image or quantitative proton-density map, e.g. the proton-density image M0.sub.P with fat signal, d) the second contrast component, which is preferentially the second proton-density image or quantitative proton-density map, e.g. the proton-density image with additional magnetization transfer contrast M0.sub.M, and e) the initial contrast component, which is preferentially the proton-density image or quantitative proton-density map), and the plurality of quantitative weighted images comprise a T1 weighted image, a T2 weighted image (Fig. 2. Paragraph [0028]-HILBERT discloses the user interface 205 is further configured for enabling a user to choose the desired synthetic sequence parameters TE, TR, TI. A user may choose to display any of the weighted contrast on a map of the biological object shown then on the display 204 of the system 200, like T2 weighted image, T2 weighted image WE, T1 weighted image, T1 weighted image WE, PD image, PD WE image, or STIR image. A user may switch between different types of preparation contrast in the example by turning the fat signal and a MT-weighting in synthetic contrasts “on” or “off”.
In paragraph [0036]-HILBERT discloses other quantitative parameters than T1 and/or T2 may be acquired (e.g. T2*, multi compartment T2/T1, MT)). HILBERT in view of XING fail to explicitly teach a T2 weighted-fluid attenuated inversion recovery image. However, WANG explicitly teaches a T2 weighted-fluid attenuated inversion recovery image (Fig. 1. Paragraph [0665]-WANG discloses all subjects will be imaged on a 3T MR scanner. Standard T2-weighted fluid attenuated inversion recovery imaging will also be included. The T2* multiple echo imaging data will generate iron maps, standard T2* magnitude images and their phase masked SWI images for analysis). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a method for generating a magnetic resonance image, comprising: generating a plurality of quantitative maps on the basis of a raw image, the raw image being obtained by executing a magnetic resonance scan sequence, and the magnetic resonance scan sequence having a plurality of scan parameters, with the teachings of WANG of having a T2 weighted-fluid attenuated inversion recovery image. HILBERT's method, so modified, would have the plurality of quantitative maps comprise a quantitative T1 map, a quantitative T2 map, and a quantitative PD map, and the plurality of quantitative weighted images comprise a T1 weighted image, a T2 weighted image, and a T2 weighted-fluid attenuated inversion recovery image. The motivation behind the modification would have been to obtain a method for generating a magnetic resonance image that enhances processing speed and image quality, since both HILBERT and WANG concern systems and methods for processing and generating quantitative magnetic resonance maps and images.
HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while WANG provides systems and methods that improve the performance and precision of measurements in magnetic resonance imaging. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0493-0494, 0617, 0834, 0926] and WANG et al. (US 20110044524 A1), Abstract. Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over HILBERT et al. (US 20200333414 A1), hereinafter referenced as HILBERT, in view of XING et al. (US 20210313046 A1), hereinafter referenced as XING, and further in view of AKCAKAYA et al. (US 20210090306 A1), hereinafter referenced as AKCAKAYA. Regarding claim 8, HILBERT in view of XING explicitly teach the method according to claim 1. HILBERT fails to explicitly teach wherein the single raw image comprises at least one of a real image, an imaginary image, and a modular image generated on the basis of the real image and the imaginary image. However, AKCAKAYA explicitly teaches wherein the single raw image comprises at least one of a real image, an imaginary image (Fig. 3. Paragraph [0026]-AKCAKAYA discloses a complex-valued k-space dataset, s, can be embedded into a real-valued space as a dataset of size n.sub.x×n.sub.y×2n.sub.c, where the real part of s is concatenated with the imaginary part of s along the third (channel) dimension), and a modular image generated on the basis of the real image and the imaginary image (Fig. 3. Paragraph [0056]-AKCAKAYA discloses images are reconstructed from undersampled k-space data using a machine learning algorithm implemented with a hardware processor and a memory to estimate missing k-space lines from acquired k-space data with improved noise resilience.
Further in paragraph [0056]-AKCAKAYA discloses when applying methods to simultaneous multi-slice applications, a concatenation of multiple ACS slices along the readout direction can be used to transform the reconstruction of SMS/multiband and in-plane accelerated k-space data to a two-dimensional interpolation problem along the phase encoding and the slice-concatenated readout direction. The joint reconstruction of SMS/MB, parallel imaging, and PF can provide additional advantages for achieving high-resolution, full coverage, and short echo times in MRI applications such as diffusion, perfusion, and other quantitative imaging techniques (wherein up to three neural networks or machine learning algorithms may be used: a single neural network or machine learning algorithm performs all three reconstructions together, one neural network performs SMS/MB and parallel imaging while a second network performs PF imaging, or three neural networks perform each stage of the reconstruction process)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a method for generating a magnetic resonance image with the teachings of AKCAKAYA of having wherein the single raw image comprises at least one of a real image, an imaginary image, and a modular image generated on the basis of the real image and the imaginary image. HILBERT's method, so modified, would have the single raw image comprise at least one of a real image, an imaginary image, and a modular image generated on the basis of the real image and the imaginary image. The motivation behind the modification would have been to obtain a method that enhances processing speed and image quality, since HILBERT and AKCAKAYA concern systems and methods for processing and generating quantitative magnetic resonance maps and images.
HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while AKCAKAYA provides systems and methods that improve the performance of MRI systems and the noise performance of reconstructions. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015] and AKCAKAYA et al. (US 20210090306 A1), Abstract and Paragraph [0013, 0022 and 0060-0062]. Regarding claim 14, HILBERT in view of XING explicitly teach the system according to claim 11. HILBERT fails to explicitly teach wherein the image fusion processor is configured to perform channel concatenation on the first converted image and the second converted image to generate the fused image. However, AKCAKAYA explicitly teaches wherein the image fusion processor is configured to perform channel concatenation on the first converted image and the second converted image to generate the fused image (Fig. 1. Paragraph [0026]-AKCAKAYA discloses a complex-valued k-space dataset, s, of size n.sub.x×n.sub.y×n.sub.c, can be embedded into a real-valued space as a dataset of size n.sub.x×n.sub.y×2n.sub.c, where the real part of s is concatenated with the imaginary part of s along the third (channel) dimension). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HILBERT in view of XING of having a system for generating a magnetic resonance image with the teachings of AKCAKAYA of having wherein the image fusion processor is configured to perform channel concatenation on the first converted image and the second converted image to generate the fused image. HILBERT's system, so modified, would have the image fusion processor configured to perform channel concatenation on the first converted image and the second converted image to generate the fused image.
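AKCAKAYA's embedding of a complex-valued k-space dataset of size n.sub.x×n.sub.y×n.sub.c into a real-valued dataset of size n.sub.x×n.sub.y×2n.sub.c is likewise a one-line channel concatenation; the dimensions below are arbitrary illustrative choices:

```python
import numpy as np

nx, ny, nc = 16, 16, 4
rng = np.random.default_rng(2)

# Hypothetical complex-valued multi-coil k-space dataset of size nx x ny x nc
s = rng.standard_normal((nx, ny, nc)) + 1j * rng.standard_normal((nx, ny, nc))

# Embed into a real-valued space: real part concatenated with the
# imaginary part along the third (channel) dimension -> nx x ny x 2nc
s_real = np.concatenate([s.real, s.imag], axis=2)
assert s_real.shape == (nx, ny, 2 * nc)
```

The operation is lossless (the complex dataset can be rebuilt from the two halves), which is why the real-valued embedding can stand in for the complex data as network input.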
The motivation behind the modification would have been to obtain a system that enhances processing speed and image quality, since HILBERT and AKCAKAYA concern systems and methods for processing and generating quantitative magnetic resonance maps and images. HILBERT's systems and methods provide a large variety of contrasts based on short acquisition times on top of the quantitative information and improve the ability for radiologists to form diagnoses, while AKCAKAYA provides systems and methods that improve the performance of MRI systems and the noise performance of reconstructions. Please see HILBERT et al. (US 20200333414 A1), Abstract and Paragraph [0015] and AKCAKAYA et al. (US 20210090306 A1), Abstract and Paragraph [0013, 0022 and 0060-0062]. Conclusion Listed below is the prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure. HILBERT et al. (US 20190371465 A1)-A system and a method determine a value for a parameter. Reference values for the parameter are determined from a group of objects. A first technique is used by the system for determining for each object the reference value from a first set of data. A learning dataset is created by associating for each object of the group of objects a second set of data and the reference value. Please see Fig. 1-2 and Abstract. Hilbert et al. (US 20180286088 A1)-The disclosure includes a method for generating quantitative magnetic resonance (MR) images of an object under investigation. A first MR data set of the object under investigation is captured in an undersampled raw data space, wherein the object under investigation is captured in a plurality of 2D slices, in which the resolution in a slice plane of the slices is in each case higher than perpendicular to the slice plane, wherein the plurality of 2D slices are in each case shifted relative to one another by a distance which is smaller than the resolution perpendicular to the slice plane.
Further MR raw data points of the first MR data set are reconstructed with the assistance of a model using a cost function which is minimized. The cost function takes account of the shift of the plurality of 2D slices perpendicular to the slice plane. Please see Fig. 1-2 and Abstract. JARA et al. (US 20190365273 A1)-Methods of making a white matter fibrogram representing the connectome of the brain of a subject, comprising: (a) performing a multispectral multislice magnetic resonance scan on the brain of a subject, (b) storing image data indicative of a plurality of magnetic resonance weightings of each of a plurality of slices of the brain of the subject to provide directly acquired images, (c) processing the directly acquired images to generate a plurality of quantitative maps of the brain indicative of a plurality of qMRI parameters of the subject, (d) constructing a plurality of magnetic resonance images indicative of white matter structure from the quantitative maps, and (e) rendering a white matter fibrogram of the brain of the subject from the plurality of magnetic resonance images. Please see Fig. 4-7 and Abstract. CHEN et al. (US 20220308147 A1)-Systems and methods providing enhancements to quantitative imaging systems and techniques are described herein. In one aspect, a system for tissue quantification in magnetic resonance fingerprinting (MRF) comprises a feature extraction module operable to convert pixel input high-dimensional signal evolution into a low-dimensional feature map. The system also comprises a spatially constrained quantification module operable to capture spatial information from the low-dimensional feature map and generate an estimated tissue property map. Please see Fig. 1 and Abstract. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached Monday-Friday, 9:00 a.m. - 6:00 p.m. ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON TIMOTHY BONANSINGA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673
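The claim limitation at the center of the rejection, "simultaneously generating a plurality of quantitative maps on the basis of a single raw image," describes a one-input, multi-output inference structure. The shape of what the examiner attributes to XING '046 (Fig. 1B) can be sketched as a toy pixelwise network; this is a hypothetical illustration with random placeholder weights, not XING's trained network:

```python
import numpy as np

# Toy, hypothetical sketch -- NOT XING's actual network. One forward pass
# through a single model turns ONE weighted input image into SEVERAL
# quantitative maps at once. Weights are random placeholders standing in
# for a trained model.

rng = np.random.default_rng(1)
H = W = 32
raw = rng.random((H, W))                  # single T1-weighted "raw" image

# Pixelwise two-layer perceptron: 1 input channel -> 16 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

hidden = np.maximum(raw[..., None] @ W1 + b1, 0.0)   # ReLU, shape (H, W, 16)
maps = hidden @ W2 + b2                              # shape (H, W, 2)

# Both quantitative maps fall out of the same single forward pass.
t1_map, pd_map = maps[..., 0], maps[..., 1]
```

The point of the sketch is structural: because the output tensor carries two channels, the "T1" and "proton density" maps are produced simultaneously from one raw image, which is the breadth the Office reads onto the claim language.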

Prosecution Timeline

Oct 26, 2022
Application Filed
Mar 06, 2025
Non-Final Rejection — §103
Jun 12, 2025
Response Filed
Jul 01, 2025
Final Rejection — §103
Sep 05, 2025
Response after Non-Final Action
Oct 02, 2025
Request for Continued Examination
Oct 10, 2025
Response after Non-Final Action
Nov 14, 2025
Non-Final Rejection — §103
Feb 19, 2026
Response Filed
Apr 01, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555249
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR SUPPORTING VIRTUAL GOLF SIMULATION
2y 5m to grant Granted Feb 17, 2026
Patent 12548171
INFORMATION PROCESSING APPARATUS, METHOD AND MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12541822
METHOD AND APPARATUS OF PROCESSING IMAGE, COMPUTING DEVICE, AND MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12505503
IMAGE ENHANCEMENT
2y 5m to grant Granted Dec 23, 2025
Patent 12482106
METHOD AND ELECTRONIC DEVICE FOR SEGMENTING OBJECTS IN SCENE
2y 5m to grant Granted Nov 25, 2025
Based on the examiner's 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
