Prosecution Insights
Last updated: April 19, 2026
Application No. 17/978,264

METHOD FOR TRAINING POST-PROCESSING DEVICE FOR DENOISING MRI IMAGE AND COMPUTING DEVICE FOR THE SAME

Final Rejection (§101, §103)
Filed
Nov 01, 2022
Examiner
SHERALI, ISHRAT I
Art Unit
2667
Tech Center
2600 — Communications
Assignee
Seoul National University R&DB Foundation
OA Round
2 (Final)
Grant Probability: 93% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% (710 granted / 761 resolved; +31.3% vs TC avg; above average)
Interview Lift: +5.8% (moderate; resolved cases with vs. without interview)
Typical Timeline: 2y 4m avg prosecution; 11 currently pending
Career History: 772 total applications across all art units
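The headline figures above are simple ratios over the examiner's resolved cases. A quick sketch (pure Python, using only the counts shown on this page; treating the +31.3% lift as percentage points is an assumption about how the tool reports it):

```python
# Career allowance rate and implied Tech Center average,
# recomputed from the counts shown on this page.
granted, resolved = 710, 761
allow_rate = granted / resolved          # examiner's career allow rate
tc_avg = allow_rate - 0.313              # implied TC average (+31.3 pt lift)

print(round(allow_rate * 100, 1))        # 93.3
print(round(tc_avg * 100, 1))            # 62.0
```

The 93% shown in the dashboard is this ratio rounded down to a whole percent.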

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 30.1% (-9.9% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 761 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment/Arguments

This action is in response to the Applicant's amendment and arguments dated 10/17/2025. The Applicant's amendments and arguments regarding the rejection under 35 USC 101 have been fully considered but are not persuasive; see the updated 35 USC 101 rejection and the remarks section. The Applicant's amendments and arguments regarding the rejection under 35 USC 103 have been fully considered but are not persuasive with respect to the art rejection, and are moot in view of the new grounds of rejection, which were necessitated by the Applicant's amendment. See the updated 35 USC 103 rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6 and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under their broadest reasonable interpretation, the limitations cover a mental process using image data of two groups (a concept performed in the human mind, including observation, evaluation, judgment, opinion, and prediction). This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations applying the exception to a particular technological problem. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because the steps of the claimed invention can be performed mentally and no additional features in the claims would preclude them from being performed as such.
According to the USPTO guidelines, claims 1, 5-6 and 14 are directed to non-statutory subject matter. The following analysis is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) published on January 7, 2019 (84 Fed. Reg. 50). See also MPEP 2106.04(a)(2)(II).

Regarding independent claims 1 and 14:

Step 1: Claims 1 and 14 meet the Step 1 requirement, as each is directed to a process, machine, manufacture, or composition of matter, which are statutory categories. Here, "A magnetic resonance imaging (MRI) system" is a machine and "A neural network training method" is a process.

Step 2A, Prong 1: Do independent claims 1 and 14 recite an abstract idea, such as a mathematical concept or a mental process? Claims 1 and 14 are analyzed to determine whether they are directed to any judicial exception. They recite:

A magnetic resonance imaging (MRI) system comprising (claim 1): an MRI scanner including a first group of coils and a second group of coils (generic/conventional computing and MRI used for data collection, which is extra-solution activity); and a computing device including a post-processing part for post-processing an MRI image and a training management part (generic/conventional computing device used for data collection, which is extra-solution activity, and training/learning as a mental process using human intelligence), wherein a first image generated based on signals obtained from the first group of coils is used as training input data for supervised learning of the post-processing part (data collection, which is extra-solution activity, and mental training using human intelligence), a second image generated based on signals obtained from the second group of coils is used as a label for supervised learning of the post-processing part (data collection, which is extra-solution activity, and labelling the collected data and learning/training mentally using human intelligence), and the training management part is configured to perform supervised learning on the post-processing part using the training input data and the label (mental process of training human intelligence based on the two groups of data), the first image and the second image being obtained through a same one-time data acquisition performed by the MRI scanner (using a generic MRI system to collect imaging data, which is extra-solution activity, and dividing the collected imaging data using a mental process).

A neural network training method for training a post-processing part configured to receive an input of a magnetic resonance imaging (MRI) image and denoise the MRI image, the method comprising (claim 14): generating, by an MRI scanner including a first group of coils and a second group of coils, a first image based on signals obtained from the first group of coils (data collection from a generic/conventional MRI device, which is extra-solution activity); generating, by the MRI scanner, a second image based on signals obtained from the second group of coils (data collection from a generic/conventional MRI device, which is extra-solution activity); and performing, by a computing device, supervised learning on the post-processing part by using the first image as training input data and the second image as a label for supervised learning of the post-processing part (mental process of labelling the data and training human intelligence based on the two groups of data), the first image and the second image being obtained through a same one-time data acquisition performed by the MRI scanner (using a generic MRI system to collect imaging data, which is extra-solution activity, and dividing the collected imaging data using a mental process).
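The claimed pairing, as the Office Action characterizes it, is ordinary supervised learning in which one coil group supplies the training input and the other supplies the label. A minimal sketch of that pairing (NumPy; the array shapes and the root-sum-of-squares coil combination are illustrative assumptions, not taken from the claims):

```python
import numpy as np

def make_training_pair(coil_images, first_group, second_group):
    """Combine per-coil images into an (input, label) pair for supervised learning.

    coil_images: array of shape (n_coils, H, W) from ONE acquisition.
    first_group / second_group: index lists selecting the two coil groups.
    Root-sum-of-squares is a common coil-combination choice (an assumption here).
    """
    first = np.sqrt((coil_images[first_group] ** 2).sum(axis=0))    # training input
    second = np.sqrt((coil_images[second_group] ** 2).sum(axis=0))  # label
    return first, second

coils = np.random.rand(8, 4, 4)                     # 8 coils, tiny 4x4 images
x, y = make_training_pair(coils, [0, 2, 4, 6], [1, 3, 5, 7])
```

Both images come from the same per-coil dataset; only the group indices differ.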
As drafted, the limitations of independent claims 1 and 14, under their broadest reasonable interpretation, cover a mental process using collected data from two groups (the collection being extra-solution activity): processing the two groups of data mentally and training/learning using human intelligence (a concept performed in the human mind using collected data, including observation, evaluation, judgment, opinion, and prediction). Therefore, other than the generic computer hardware and generic MRI device recited in claims 1 and 14, the limitations cover a mental process using the collected data, and the additional limitations recited in claims 1 and 14 as shown above are insignificant. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process that can be performed in the mind, or with pen and paper, to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978).
Other than the generic, well-known computer hardware and the generic/conventional MRI system recited in claims 1 and 14, nothing in the claim elements precludes the processing from being performed as a mental process, or merely from observation, judgment, and thought, with pen and paper. The training/learning and neural network recited in independent claims 1 and 14 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h). (Step 2A, Prong 1: abstract idea = Yes.)

Step 2A, Prong 2: Claims 1 and 14 are then analyzed to determine whether they recite additional elements, or a combination of additional elements, that apply, rely on, or use the judicial exception in a manner imposing a meaningful limitation on it, such that the claims are more than a drafting effort designed to monopolize the exception; i.e., limitations indicative of integration into a practical application, such as improving the functioning of a computer or any other technology or technical field. As noted above, independent claims 1 and 14 as drafted, under their broadest reasonable interpretation, cover a mental process using two collected groups of data (the collection being extra-solution activity) and a concept performed in the human mind using the collected data, including observation, evaluation, judgment, opinion, prediction, and human learning/training using human intelligence.
As stated above, other than the generic, well-known computer hardware and the conventional MRI system recited in claims 1 and 14, nothing in the claim elements precludes the processing from being performed as a mental process, or merely from observation, judgment, and thought with pen and paper. The training/learning and neural network recited in independent claims 1 and 14 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h). The limitations of claims 1 and 14 are recited at a high level of generality, as a general action or calculation taken based on the acquiring step, and amount to mere post-solution actions, which is a form of insignificant extra-solution activity without any further detail. Furthermore, the claims are recited generically and operate in an ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A, Prong 2: abstract idea = Yes.)

Step 2B: Do claims 1 and 14 recite additional elements that amount to significantly more than the judicial exception? For the same reasons, the identified additional elements, taken into consideration individually or in combination, fail to amount to significantly more than the abstract idea identified above. (Step 2B: abstract idea = Yes.)
Regarding claim 6: the claim recites "while performing the supervised learning, the post-processing part is configured to receive an input of the first image to generate a post-processed image, and the training management part is configured to train the post-training part using a loss function between the post-processed image and the second image". Claim 6 further limits the abstract idea to performance of the limitations in the mind, based on a mental process of observation, judgment, and thought using human intelligence and the collected data, and on solving the mathematical problem of a loss/difference; there are no additional elements in the claim that would integrate the abstract idea into a practical application. Claim 6 does not mention any improvement to a computer or to any other technology or technical field, and its limitations fail to add an inventive concept to the otherwise mental process. Therefore claim 6 is no more than an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
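The loss-function limitation of claim 6, quoted above, trains the post-processing part on the difference between the post-processed first image and the second image. A minimal sketch of that computation (NumPy; mean-squared error is an illustrative choice, as the claim does not fix a particular loss function):

```python
import numpy as np

def mse_loss(post_processed, label):
    """Loss between the post-processed image and the second (label) image."""
    return float(np.mean((post_processed - label) ** 2))

post = np.array([[1.0, 2.0], [3.0, 4.0]])   # post-processed first image
label = np.array([[1.0, 2.0], [3.0, 5.0]])  # second image used as label
print(mse_loss(post, label))                 # 0.25
```

Any of the alternatives the Office Action mentions (hinge loss, cross-entropy loss) would slot in at the same point.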
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20200300954) in view of Li (WO 2020198959; PTO-892 dated 7/30/2025).

Regarding claim 1, Sharma discloses a magnetic resonance imaging (MRI) system comprising (Sharma FIG. 10 shows an MRI system; paragraphs 0101-0102): an MRI scanner including a first group of coils and a second group of coils (Sharma Figs. 1A-1B and 4 show multi-slice MRI images; Figs. 2-3 show the multi-slice, multi-coil sensitivity map block 204; paragraph 0063 discloses that the network 561 is trained using a large dataset of training pairs, each pair including a set of input images (e.g., aliased SMS images x_i from the N coils), a corresponding set of target images (e.g., single-slice, coil-combined images acquired without SMS, y_i), and the coil sensitivity maps S_i, the entire set of training data including M training pairs, where M is preferably a large number. This corresponds to an MRI scanner including a first group of coils and a second group of coils, and these two sets of coils can obviously be separate sets of coils, because it is well known in the art, as shown by Li (WO 2020198959; PTO-892), to use first and second groups of coils with the reference [target data or ground truth/label] being one group of coils among a plurality of groups of coils (Li paragraph 0007). Therefore it would be obvious in the system of Sharma to select one group of coils as the target/reference images.); a computing device including a post-processing part for post-processing an MRI image and a training management part (Sharma Figs. 5A-5B and paragraphs 0055-0058 disclose a method 500 that trains and uses a DL neural network 561 to aid in performing SMS MRI image reconstruction. Method 500, as illustrated in FIG. 5A, uses a DL-ANN (DL network 561) to learn how to correct errors in the original coil sensitivity 505 (S_i) to generate a corrected coil sensitivity (Ŝ_i). Method 500 includes two parts: (i) an offline training process 550 and (ii) an MRI imaging process 502. Process 550 trains the DL-ANN 561, and process 502 uses the trained network 561 to correct the coil sensitivity 505, ultimately generating high-quality MRI images 535 in which the artifacts have been mitigated. This corresponds to a computing device including a post-processing part for post-processing an MRI image and a training management part.); a first image generated based on signals obtained from the first group of coils used as training input data for supervised learning of the post-processing part (Sharma Fig. 5A shows supervised training; per paragraph 0063, the input data 557, generated based on signals obtained from the first group of coils, is used as training input data for supervised learning of the post-processing part); a second image generated based on signals obtained from the second group of coils used as a label for supervised learning of the post-processing part (Sharma Fig. 5A and paragraph 0063: the corresponding set of target images, e.g., single-slice, coil-combined images acquired without SMS, y_i [block 553, target data]; Sharma paragraph 0077 discloses that an error is calculated (e.g., using a loss function or a cost function) representing a measure of the difference (e.g., a distance measure) between the target images 553, i.e., the ground truth [which is obviously the label images], and the input images 557 after applying a current version of the network 563. The error can be calculated using any known cost function or distance measure, including those described above; in certain implementations the error/loss function can be calculated using one or more of a hinge loss and a cross-entropy loss. This corresponds to a second image generated based on signals obtained from the second group of coils being used as a label for supervised learning of the post-processing part.); and the training management part configured to perform supervised learning on the post-processing part using the training input data and the label (Sharma Fig. 5A and paragraph 0077, as above; Sharma paragraph 0079 and Fig. 7A further disclose that the network 563 is trained using backpropagation, used in conjunction with gradient-descent optimization methods. During a forward pass, the algorithm computes the network's predictions based on the current parameters θ; these predictions are input into the loss function, by which they are compared to the corresponding ground-truth labels (i.e., the target image 553). During the backward pass, the model computes the gradient of the loss function with respect to the current parameters, after which the parameters are updated by taking a step of a predefined size in the direction of minimized loss (in accelerated methods, such as the Nesterov momentum method and various adaptive methods, the step size can be selected to converge more quickly). This corresponds to the training management part being configured to perform supervised learning on the post-processing part using the training input data and the label.).

Sharma has not explicitly disclosed that the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner. In the same field of endeavor of MRI scanning and image processing, Li discloses an MRI scanner with a first group of coils and a second group of coils, where the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner, the first group of coils supplying the training input data and the second group of coils supplying the label for learning (Li Abstract and Figs. 5-7 show MR imaging data and a plurality of groups of imaging data; paragraph 0007 discloses determining one or more correction coefficients, the processor directed to cause the system to perform operations including determining one group of a plurality of groups of imaging data as reference data (label), determining phase difference data between each group of the plurality of groups of imaging data and the reference imaging data (label data), and performing the phase correction based on the phase difference data; paragraph 0008 discloses that, in some embodiments, determining one group of the plurality of groups of imaging data as reference imaging data may include identifying the group of imaging data corresponding to a maximum magnitude among the plurality of groups as the reference imaging data (label data)). It is obvious in the system of Li that the MRI scanner obtains imaging data from a first group of coils and a second group of coils, that the first and second images are obtained through a same one-time data acquisition process, and that the first group of coils is used as training input data while the second group of coils is used as the label for learning. Furthermore, Li discloses obtaining a plurality of groups of imaging data, and it is obvious in the system of Li that the first and second groups of images are obtained through a same one-time data acquisition, as Li does not show obtaining the plurality of groups of imaging data at different times.
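The backpropagation passage cited from Sharma (forward pass, loss against the ground-truth label, parameter update by a step of predefined size) can be sketched for a toy one-parameter "denoiser" (NumPy; the scalar-gain model, data, and step size are assumptions for illustration, not Sharma's network):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)          # noisy input images (flattened)
y = 0.5 * x                       # ground-truth labels (target data)

w = 0.0                           # single trainable parameter
lr = 0.1                          # predefined step size
for _ in range(200):
    pred = w * x                              # forward pass
    grad = np.mean(2 * (pred - y) * x)        # d(MSE)/dw via the chain rule
    w -= lr * grad                            # gradient-descent update

print(round(w, 3))   # converges to 0.5
```

A real implementation would compute per-layer gradients automatically, but the loop structure (predict, compare to label, step against the gradient) is the same mechanism the Office Action describes.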
Therefore it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to use an MRI scanner with a first group of coils and a second group of coils, with the first and second images obtained through a same one-time data acquisition process performed by the MRI scanner, the first group of coils supplying the training input data and the second group of coils supplying the label for learning, as shown by Li, in the system of Sharma, because such a system and process provide learning to mitigate or eliminate artifacts and noise.

Regarding claim 5, Sharma discloses that the first image is a first MRI image generated based on a first group of MRI signals obtained from the first group of coils, and that generating the second image includes generating a second MRI image based on a second group of MRI signals obtained from the second group of coils (Sharma Figs. 1A-1B and 4 show multi-slice MRI images; Figs. 2-3 show the multi-slice, multi-coil sensitivity map block 204; paragraph 0063 discloses that the network 561 is trained using a large dataset of training pairs, each pair including a set of input images (e.g., aliased SMS images x_i from the N coils), a corresponding set of target images (e.g., single-slice, coil-combined images acquired without SMS, y_i), and the coil sensitivity maps S_i, the entire set of training data including M training pairs, where M is preferably a large number), and generating a label image based on the second MRI image so as to compensate for a difference in sensitivity between the first group of coils and the second group of coils, the second image being the generated label image (Fig. 5A shows supervised training; paragraph 0077 discloses that an error is calculated (e.g., using a loss function or a cost function) representing a measure of the difference between the target images 553, i.e., the ground truth [which is obviously the label images], and the input images 557 after applying a current version of the network 563; paragraph 0062 discloses that, because the target data does not include the artifacts commonly found in the SMS slice images ŷ_i in which the coil sensitivity Ŝ includes errors, training the network 561 to produce images matching the target images y_i will reduce the errors in the corrected coil sensitivity Ŝ generated using the network 561. The effect of the network 561 is to distortion-match the Rx maps such that, when the corrected coil sensitivity Ŝ_i is applied to the aliased SMS images x_i using SENSE, the separated SMS slice images ŷ_i are generated without leakage artifacts. This corresponds to the claimed generating of the first image and the second image and the generating of a label image compensating for the difference in sensitivity between the two groups of coils.).

Regarding claim 6, Sharma discloses that, while performing the supervised learning, the post-processing part is configured to receive an input of the first image to generate a post-processed image, and the training management part is configured to train the post-training part using a loss function between the post-processed image and the second image (Sharma Fig. 5A and Fig.
7A shows supervised training and paragraph 0077 disclose an error is calculated (e.g., using a loss function or a cost function) to represent a measure of the difference (e.g., a distance measure) between the target images 55 i.e., ground truth [which is obviously label images] and input images 557 after applying a current version of the network 563. The error can be calculated using any known cost function or distance measure, including those cost functions described above. Further, in certain implementations the error/loss function can be calculated using one or more of a hinge loss and a cross-entropy loss and Sharma paragraph 0079 and Fig. 7A disclose the network 563 is trained using backpropagation. Backpropagation can be used for training neural networks and is used in conjunction with gradient descent optimization methods. This corresponds to while performing the supervised learning, the post-processing part is configured to receive an input of the first image to generate a post-processed image and the training management part is configured to train the post-training part using a loss function between the post-processed image and the second image). Regarding claim 14, Sharma disclose a neural network training method for training a post-processing part configured to receive an input of a magnetic resonance imaging (MRI) image and denoise the MRI image, the method comprising (Sharma Fig 10 shows Magnetic resonance imaging system and Figs. 
5A thru 7B and paragraph 0059 and paragraph 0062-0063a neural network training method for training a post-processing part configured to receive an input of a magnetic resonance imaging (MRI) image and denoise the MRI image): generating, by an MRI scanner including a first group of coils and a second group of coils, a first image based on signals obtained from the first group of coils (Sharma Figs 1A-1B and 4 multi slice MRI images and Figs 2-3 shows multi slice, multi coil sensitivity map block 204 and paragraph 0063 disclose the network 561 is trained using a large dataset of training pairs. Each training pair will include a set of input images (e.g., aliased SMS images x.sub.i from the N coils), a corresponding set of target images (e.g., single-slice, coil-combined images acquired without SMS y.sub.i) and the coil sensitivity maps S.sub.i. The entire set of training data will include M training pairs, wherein M is preferably a large number. This obviously corresponds to generating, by an MRI scanner including a first group of coils and a second group of coils, a first image based on signals obtained from the first group of coils and these two sets of coils can obviously be separate sets of coils. Because it is well known in the art such as shown by LI (WO 2020198959/PTO-892) shows first and second groups of coils and reference [target data or ground truth/label] as one group of coils among plurality of group of coils in paragraph 0007. Therefore it would be obvious in the system of Sharma to select one group of coils as target/reference images. generating, by the MRI scanner, a second image based on signals obtained from the second group of coils (Sharma Figs 1A-1B and 4 multi slice MRI images and Figs 2-3 shows multi slice, multi coil sensitivity map block 204 and paragraph 0063 disclose the network 561 is trained using a large dataset of training pairs. 
Each training pair will include a set of input images (e.g., aliased SMS images x.sub.i from the N coils), a corresponding set of target images (e.g., single-slice, coil-combined images acquired without SMS y.sub.i) and the coil sensitivity maps S.sub.i. The entire set of training data will include M training pairs, wherein M is preferably a large number. This obviously corresponds to generating, by an MRI scanner including a second group of coils and a second group of coils, a second image based on signals obtained from the second group of coils), performing, by a computing device, supervised learning on the post- processing part by using the first image as training input data for supervised learning of the post-processing part and using the second image as a label for supervised learning of the post-processing part (Sharma Fig. 5A, Fig. 7A shows supervised training and paragraph 0077 disclose an error is calculated (e.g., using a loss function or a cost function) to represent a measure of the difference (e.g., a distance measure) between the target images 55 i.e., ground truth [which is obviously label images] and input images 557 after applying a current version of the network 563. The error can be calculated using any known cost function or distance measure, including those cost functions described above. Further, in certain implementations the error/loss function can be calculated using one or more of a hinge loss and a cross-entropy loss and Sharma paragraph 0079 and Fig. 7A disclose the network 563 is trained using backpropagation. Backpropagation can be used for training neural networks and is used in conjunction with gradient descent optimization methods. During a forward pass, the algorithm computes the network's predictions based on the current parameters θ. These predictions are then input into the loss function, by which they are compared to the corresponding ground truth labels (i.e., the target image 553). 
During the backward pass, the model computes the gradient of the loss function with respect to the current parameters, after which the parameters are updated by taking a step of a predefined size in the direction of minimized loss (e.g., in accelerated methods, such as the Nesterov momentum method and various adaptive methods, the step size can be selected to more quickly converge to optimize the loss function). All of this corresponds to performing, by a computing device, supervised learning on the post-processing part by using the first image as training input data for supervised learning of the post-processing part and using the second image as a label for supervised learning of the post-processing part.)

Sharma has not explicitly disclosed that the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner. In the same field of endeavor of MRI scanners and image processing, LI discloses an MRI scanner with a first group of coils and a second group of coils, where the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner, the first group of coils is used for training input data, and the second group of coils is used as the label for learning (LI Abstract and Figs. 5-7 show MR imaging data and a plurality of groups of imaging data; paragraph 0007 discloses determining one or more correction coefficients, the processor directed to cause the system to perform operations including determining one group of the plurality of groups of imaging data as reference data (label), determining phase difference data between each group of the plurality of groups of imaging data and the reference imaging data (label data), and performing the phase correction based on the phase difference data; and paragraph 0008 discloses that, in some embodiments, to determine one group of the plurality of groups of imaging data as reference imaging data, the at least one processor may be directed to cause the system to perform additional operations including identifying the one group of imaging data that corresponds to a maximum magnitude among the plurality of groups of imaging data as the reference imaging data (label data)). It is obvious in the system of LI that, using the MRI scanner, LI obtains imaging data by a first group of coils and a second group of coils, that the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner, and that the first group of coils is used for training input data while the second group of coils is used as the label for learning, i.e., label data. Furthermore, LI discloses obtaining a plurality of groups of imaging data, and it is obvious in the system of LI that the first and second groups of images are obtained through a same one-time data acquisition, as LI does not show obtaining the plurality of groups of imaging data at different times.
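For illustration only, the coil-split supervised training scheme mapped above (first-group image as training input, second-group image as label, error minimized by gradient descent) might be sketched as below. The pixelwise affine "post-processing part," the noise model, and all shapes and hyperparameters are assumptions made for this sketch, not taken from Sharma or LI:

```python
# Sketch (assumptions, not from Sharma or LI): supervised denoising where the
# image from one coil group is the network input and the image from the other
# coil group is the label. A tiny pixelwise affine map stands in for the
# post-processing network so the gradients can be written out by hand.
import numpy as np

rng = np.random.default_rng(0)

# One simulated acquisition: a shared underlying image observed through two
# coil groups, each with independent noise.
clean = rng.random((8, 8))
img_group1 = clean + 0.1 * rng.standard_normal((8, 8))  # training input
img_group2 = clean + 0.1 * rng.standard_normal((8, 8))  # label image

# "Post-processing part": y = w * x + b, trained by gradient descent on an
# MSE loss (cf. Sharma's loss function and backpropagation description).
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * img_group1 + b                 # forward pass
    err = pred - img_group2                   # difference from the label
    grad_w = 2 * np.mean(err * img_group1)    # backward pass (chain rule)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w                          # gradient-descent update
    b -= lr * grad_b

print(round(float(w), 3), round(float(b), 3))
```

A full implementation would replace the affine map with a neural network and compute gradients by backpropagation, as Sharma's paragraphs 0077-0079 describe; the sketch only shows the input/label roles played by the two coil-group images.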
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use, in the system of Sharma, an MRI scanner with a first group of coils and a second group of coils, where the first and second images are obtained through a same one-time data acquisition process performed by the MRI scanner, the first group of coils is used for training input data, and the second group of coils is used as the label for learning, as shown by LI, because such a system and process provide learning that mitigates or eliminates artifacts and noise.

Remarks

The Applicant argued the following in the Applicant's amendment/response filed on 10/17/2025: the claims are not directed to an abstract idea, and even if they were, they are integrated into a practical application that improves a specific technological field; therefore the amended limitations of claims 1 and 14 satisfy the requirements of 35 USC 101. The Examiner disagrees with the Applicant's argument. Claims 1 and 14 are analyzed to determine whether they are directed to any judicial exception.
Claims 1 and 14 recite:

A magnetic resonance imaging (MRI) system comprising (claim 1): an MRI scanner including a first group of coils and a second group of coils (generic/conventional computing and MRI used for data collection, which is extra-solution activity); and a computing device including a post-processing part for post-processing an MRI image and a training management part (generic/conventional computing device used for data collection, which is extra-solution activity, and training/learning as a mental process using a person's intelligence), wherein a first image generated based on signals obtained from the first group of coils is used as training input data for supervised learning of the post-processing part (data collection, which is extra-solution activity, and a person mentally training using mental intelligence), a second image generated based on signals obtained from the second group of coils is used as a label for supervised learning of the post-processing part (data collection, which is extra-solution activity, and labelling the collected data using a mental process and mentally learning/training using a person's intelligence), and the training management part is configured to perform supervised learning on the post-processing part using the training input data and the label (mental process of training human intelligence based on the two groups of data using a person's intelligence), wherein the first image and the second image are obtained through a same one-time data acquisition performed by the MRI scanner (using a generic MRI system to collect imaging data, which is extra-solution activity, and dividing the collected imaging data using a mental process).

A neural network training method for training a post-processing part configured to receive an input of a magnetic resonance imaging (MRI) image and denoise the MRI image, the method comprising (claim 14): generating, by an MRI scanner including a first group of coils and a second group of coils, a first image based on signals obtained from the first group of coils (data collection activity, which is extra-solution activity, obtained from a generic/conventional MRI device); generating, by the MRI scanner, a second image based on signals obtained from the second group of coils (data collection activity, which is extra-solution activity, obtained from a generic/conventional MRI device); and performing, by a computing device, supervised learning on the post-processing part by using the first image as training input data for supervised learning of the post-processing part and using the second image as a label for supervised learning of the post-processing part (mental process of labelling the data and training human intelligence based on the two groups of data), wherein the first image and the second image are obtained through a same one-time data acquisition performed by the MRI scanner (using a generic MRI system to collect imaging data, which is extra-solution activity, and dividing the collected imaging data using a mental process).

The limitations of independent claims 1 and 14, as drafted, recite a process that, under its broadest reasonable interpretation, covers a mental process using collected data from two groups (extra-solution activity): processing the two groups of data mentally using human intelligence and training/learning the human mind (a concept performed in a human mind using collected data, including observation, evaluation, judgment, opinion, prediction, etc.).
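As technical context for the coil-group limitations recited above, splitting a single multi-coil acquisition into two images could be sketched roughly as follows; the coil count, the even/odd grouping, and the root-sum-of-squares combination are illustrative assumptions, not taken from the claims or the cited references:

```python
# Hypothetical sketch of the "same one-time data acquisition" limitation:
# one multi-coil acquisition is split into two coil groups, and each group
# is combined into its own image. All specifics here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_coils, h, w = 8, 4, 4

# Single acquisition: per-coil images of the same object, acquired at once.
obj = rng.random((h, w))
coil_images = obj[None, :, :] + 0.05 * rng.standard_normal((n_coils, h, w))

first_group = coil_images[0::2]   # e.g. even-indexed coils
second_group = coil_images[1::2]  # e.g. odd-indexed coils

def rss_combine(group):
    """Root-sum-of-squares coil combination into a single image."""
    return np.sqrt(np.sum(group ** 2, axis=0))

first_image = rss_combine(first_group)    # candidate training input
second_image = rss_combine(second_group)  # candidate label
print(first_image.shape, second_image.shape)
```

The point of the sketch is only that both images derive from one acquisition event, with the coil grouping determining which image serves as input and which as label.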
Therefore, other than the generic computer hardware and generic MRI device recited in independent claims 1 and 14, the limitations cover a mental process using collected data from two groups of data (a concept performed in a human mind, including observation, evaluation, judgment, opinion, prediction, etc.). Furthermore, the limitations recited in claims 1 and 14, as shown above, are insignificant. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking, human intelligence) that can be performed in the mind, or with pen and paper, to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas," the "basic tools of scientific and technological work" that are open to all. 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978). Other than the generic and well-known computer hardware and the generic/conventional MRI system recited in claims 1 and 14, nothing in the elements of claims 1 and 14 precludes the processing from being performed as a mental process, or merely based on observations, judgment, and thought processes using pen and paper. The training/learning and neural network recited in independent claims 1 and 14 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h).
Claims 1 and 14 are analyzed to determine whether they require an additional element, or a combination of additional elements, that applies, relies on, or uses the judicial exception in a manner that imposes a meaningful limitation on the judicial exception, such that the claims are more than a drafting effort designed to monopolize the exception, i.e., limitations indicative of integration into a practical application, such as improvements to the functioning of a computer or to any other technology or technical field. The limitations of claims 1 and 14 are recited at a high level of generality, as a general action or calculation being taken based on the acquiring step (two groups of imaging data acquired at the same time), and amount to mere post-solution actions, which is a form of insignificant extra-solution activity without any further detail. Furthermore, the claims are recited generically and operate in an ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Consequently, the identified additional elements, taken into consideration individually or in combination, fail to amount to significantly more than the abstract idea identified above.

Allowable Subject Matter

Claims 2-4 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHRAT I SHERALI, whose telephone number is (571) 272-7398. The examiner can normally be reached Monday through Friday, 8:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

ISHRAT I. SHERALI
Examiner, Art Unit 2667

/ISHRAT I SHERALI/
Primary Examiner, Art Unit 2667

Prosecution Timeline

Nov 01, 2022
Application Filed
Mar 19, 2024
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §101, §103
Oct 17, 2025
Response Filed
Jan 10, 2026
Final Rejection — §101, §103
Apr 07, 2026
Request for Continued Examination
Apr 12, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592150
METHOD FOR WARNING COLLISION OF VEHICLE, SYSTEM, VEHICLE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586209
MECHANISM CAPABLE OF DETECTING MOTIONS OF DIFFERENT SURFACE TEXTURES WITHOUT NEEDING TO PERFORM OBJECT IDENTIFICATION OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579820
LEARNING APPARATUS AND LEARNING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12548308
METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA
2y 5m to grant Granted Feb 10, 2026
Patent 12542874
Methods and Systems for Person Detection in a Video Feed
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
93%
Grant Probability
99%
With Interview (+5.8%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 761 resolved cases by this examiner. Grant probability derived from career allow rate.
