DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/06/2024 and 07/15/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Office Action Summary
Claim(s) 1, 5-6, 8, and 14-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Schlemper et al (US 2020/0033431 A1).
Claim(s) 2-4, 7, and 9-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schlemper et al (US 2020/0033431 A1) in view of Yaman et al (Self-Supervised Learning of Physics-Guided Reconstruction Neural Networks without Fully-Sampled Reference Data).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 5-6, 8, and 14-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Schlemper et al (US 2020/0033431 A1).
Regarding claim(s) 1, Schlemper teaches a method comprising training, by one or more processors coupled to a non-transitory memory (Figure 19; and Paragraph [0256]), a machine-learning model that receives magnetic resonance (MR) data and generates a reconstruction of the MR data (Figure 1; Figure 3; and Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”), the machine-learning model trained based on a set of losses comprising a first loss value corresponding to a frequency-domain and a second loss value corresponding to an image-based domain (Figure 9B; Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function. In some embodiments, the spatial frequency domain loss function includes mean-squared error”; and Paragraph [0158]: “the loss function includes a first loss function to capture error in the spatial frequency domain and a second loss function to capture error in the image domain [...] The first and second measures of error may be combined (e.g., via a weighted combination) to produce an overall measure of error, which is to be minimized during the training process”).
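For illustration, the weighted combination of a spatial frequency-domain loss and an image-domain loss described in the cited portions of Schlemper (Paragraphs [0085] and [0158]) may be sketched as follows. This is a minimal NumPy sketch assuming 2-D single-coil Cartesian k-space and equal weights; the function name and weight values are hypothetical and do not represent Schlemper's actual implementation.

```python
import numpy as np

def dual_domain_loss(pred_kspace, target_kspace, w_freq=0.5, w_img=0.5):
    """Weighted sum of a spatial-frequency-domain (k-space) MSE and an
    image-domain MSE, per the weighted-combination scheme Schlemper describes."""
    # Frequency-domain term: mean-squared error on the k-space samples.
    loss_freq = np.mean(np.abs(pred_kspace - target_kspace) ** 2)
    # Image-domain term: MSE between the inverse-FFT reconstructions.
    pred_img = np.fft.ifft2(pred_kspace)
    target_img = np.fft.ifft2(target_kspace)
    loss_img = np.mean(np.abs(pred_img - target_img) ** 2)
    # Overall measure of error to be minimized during training.
    return w_freq * loss_freq + w_img * loss_img
```

As Paragraph [0158] notes, the two measures of error are combined (e.g., via a weighted combination) into a single overall measure minimized during training; the weights above are arbitrary placeholders.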
Regarding claim(s) 5, Schlemper teaches the method of claim 1, wherein the MR data is non-Cartesian MR spatial frequency data captured using an MR system (Paragraph [0058]: “deep learning techniques for generating high-quality MR images from under-sampled spatial frequency data that: (1) operate both in the spatial frequency domain and in the image domain; and (2) enable reconstruction of MR images from non-Cartesian sampling trajectories”).
Regarding claim(s) 6, Schlemper teaches the method of claim 1, wherein the first loss value is calculated based on (1) a first output of the machine-learning model generated using a first subset of input MR data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function”), and (2) a second output of the machine-learning model generated using the input MR data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data”), and wherein the second loss value is calculated based on a subset of a transformation of the first output and a corresponding second subset of the input MR data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be 
trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function”).
Regarding claim(s) 8, Schlemper teaches the method of claim 1, further comprising receiving patient MR data and feeding the patient MR data to the machine-learning model to obtain a reconstructed image based on the patient MR data (Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”; Paragraph [0069]: “multiple deep-learning techniques for reconstructing MR images from data obtained using non-Cartesian sampling trajectories”; and Paragraph [0254]: “portable MRI system 3900 that has been transported to a patient's bedside to perform a scan of the patient's knee”).
Regarding claim(s) 14, Schlemper teaches a system, comprising:
a magnetic resonance (MR) imaging system configured to generate MR spatial frequency data (Abstract: “a magnetic resonance imaging (MRI) system, configured to detect magnetic resonance (MR) signals and control the magnetics system acquire MR spatial frequency data”; and Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”); and
one or more processors (Figure 19; and Paragraph [0256]) configured to:
cause the MR imaging system to generate the MR spatial frequency data based on a non-Cartesian sampling pattern (Figure 14; and Paragraph [0058]: “deep learning techniques for generating high-quality MR images from under-sampled spatial frequency data that: (1) operate both in the spatial frequency domain and in the image domain; and (2) enable reconstruction of MR images from non-Cartesian sampling trajectories”); and
execute a machine-learning model to generate an MR image based on the MR spatial frequency data (Figure 1; Figure 3; and Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”), the machine-learning model trained based on a first loss value corresponding to a frequency-domain and a second loss value corresponding to an image-based domain (Figure 9B; Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function. In some embodiments, the spatial frequency domain loss function includes mean-squared error”; and Paragraph [0158]: “the loss function includes a first loss function to capture error in the spatial frequency domain and a second loss function to capture error in the image domain [...] The first and second measures of error may be combined (e.g., via a weighted combination) to produce an overall measure of error, which is to be minimized during the training process”).
Regarding claim(s) 15, Schlemper teaches the system of claim 14, wherein the first loss value is calculated based on (1) a first output of the machine-learning model generated using a first subset of MR training data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function”), and (2) a second output of the machine-learning model generated using the MR training data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data”), and wherein the second loss value is calculated based on a subset of a transformation of the first output and a corresponding second subset of the MR training data (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency 
data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2-4, 7, and 9-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schlemper et al (US 2020/0033431 A1) in view of Yaman et al (Self-Supervised Learning of Physics-Guided Reconstruction Neural Networks without Fully-Sampled Reference Data).
Regarding claim(s) 2, Schlemper teaches the method of claim 1, wherein the set of losses comprises a data consistency loss (Paragraph [0011]: “the first neural network block is configured to perform data consistency processing using a non-uniform Fourier transformation for transforming image domain data to spatial frequency domain data”; and Paragraph [0158]: “the loss function includes a first loss function to capture error in the spatial frequency domain and a second loss function to capture error in the image domain [...] The first and second measures of error may be combined (e.g., via a weighted combination) to produce an overall measure of error, which is to be minimized during the training process”).
Schlemper fails to teach partition data consistency (PDC) loss. However, Yaman teaches partition data consistency (PDC) loss (Page 13, Last Paragraph – Page 14, 1st Paragraph: “the training database is partitioned into two sets of complementary datasets […] In our approach, we do a similar partitioning of the acquired data to two sets we denoted Θ and Λ […] the intuition for partitioning within the network is similar, as the unrolled network only sees Θ for data consistency during training, while Λ is only used to establish the network loss”).
Schlemper teaches a machine-learning reconstruction model trained using a spatial frequency-domain data consistency loss and an image-domain loss. Additionally, Yaman teaches partitioning acquired k-space samples into disjoint subsets (Θ and Λ) and computing training loss using one subset while enforcing data consistency using the other.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Schlemper’s frequency-domain data consistency loss to operate on partitioned subsets of training data, as taught by Yaman, in order to eliminate the need for fully sampled reference data, a well-known objective in accelerated MRI reconstruction (Yaman, Pages 12-16). The combination yields a partition-based data consistency loss operating in the frequency domain and an image-domain appearance consistency loss, as recited. This motivation for the combination of Schlemper and Yaman is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim(s) 3, Schlemper teaches the method of claim 1, but does not specifically teach wherein the machine-learning model is trained based on two subsets of training MR data, each subset generated by applying a sampling function to a set of locations of the training data.
However, Yaman teaches wherein the machine-learning model is trained based on two subsets of training MR data, each subset generated by applying a sampling function to a set of locations of the training data (Page 3, 2nd Paragraph: “The proposed self-supervised approach which we term as Self-Supervision via Data Undersampling (SSDU) splits the acquired k-space indices into two disjoint sets. One of these is used in the data consistency unit for the network, while the other set is used to define the loss function in k-space”; and Page 13, Last Paragraph – Page 14, 1st Paragraph: “the training database is partitioned into two sets of complementary datasets […] In our approach, we do a similar partitioning of the acquired data to two sets we denoted Θ and Λ […] the intuition for partitioning within the network is similar, as the unrolled network only sees Θ for data consistency during training, while Λ is only used to establish the network loss”).
Schlemper teaches training a neural network for MR image reconstruction using spatial frequency-domain input data and minimizing a loss function during training. Additionally, Yaman teaches partitioning acquired k-space sample locations Ω into two disjoint subsets Θ and Λ, where the subsets are derived from k-space sampling locations and are used separately during training.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Yaman’s subset-based sampling strategy to the training framework of Schlemper in order to train the machine-learning model based on two subsets of MR training data generated from sampling locations, as partition-based sampling for training reconstruction networks is a known and predictable modification in the field of accelerated MRI reconstruction (Yaman, Pages 12-16). This motivation for the combination of Schlemper and Yaman is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
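Yaman's splitting of the acquired k-space indices Ω into two disjoint sets Θ and Λ (Ω = Θ ∪ Λ), with Θ used for data consistency and Λ reserved for the loss, may be sketched as follows. This is an illustrative NumPy sketch of the SSDU partitioning idea only; the function name, split fraction, and uniform random selection are assumptions, not Yaman's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

def partition_indices(acquired_mask, loss_fraction=0.4, rng=rng):
    """Split acquired k-space locations Omega into disjoint masks Theta and
    Lambda with Omega = Theta ∪ Lambda, as in Yaman's SSDU partitioning."""
    omega = np.flatnonzero(acquired_mask)          # indices of acquired samples
    n_lambda = int(loss_fraction * omega.size)     # size of the loss set Lambda
    lam = rng.choice(omega, size=n_lambda, replace=False)
    theta = np.setdiff1d(omega, lam)               # Theta = Omega \ Lambda
    theta_mask = np.zeros_like(acquired_mask)
    theta_mask.flat[theta] = 1
    lambda_mask = np.zeros_like(acquired_mask)
    lambda_mask.flat[lam] = 1
    return theta_mask, lambda_mask
```

During training, the unrolled network would see only the Θ samples for data consistency, while the Λ samples would define the k-space loss, mirroring the cited passage from Yaman.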
Regarding claim(s) 4, Schlemper as modified by Yaman teaches the method of claim 3, where Schlemper teaches wherein the machine-learning model is further trained by feeding the two subsets into a variational network to obtain two predicted subsets (Figure 13A – 13D; Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”; and Paragraph [0209]: “process 1400 may be performed using a non-uniform variational network (e.g., the neural network described with reference to FIGS. 13A-D), a generalized non-uniform variation network (e.g., the neural network described with reference to FIGS. 13A, 13D, and 13E), or any other suitable type of neural network model.”), and where Yaman teaches wherein at least one of the losses in the set of losses is based on the two subsets and the two predicted subsets (Equation (10); Page 13, Last Paragraph – Page 14, 1st Paragraph: “the training database is partitioned into two sets of complementary datasets […] In our approach, we do a similar partitioning of the acquired data to two sets we denoted Θ and Λ […] the intuition for partitioning within the network is similar, as the unrolled network only sees Θ for data consistency during training, while Λ is only used to establish the network loss”; Page 5, last Paragraph: “the unrolled network output image […] data consistency is transformed to k-space using the encoding operator […] Then the loss is calculated in k-space with respect to the acquired k-space data at these locations”; and Page 7, 1st Paragraph: “for the proposed self-supervised training these correspond to the acquired k-space measurements at locations specified by Λ and the 
k-space corresponding to the network output image at the same locations”).
Regarding claim(s) 7, Schlemper teaches the method of claim 1, wherein the machine-learning model is a dual-domain model (Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image”).
Schlemper fails to teach a self-supervised model. However, Yaman teaches a self-supervised model (see Pages 5-6, Section: “Proposed Self-supervised Training without Fully-Sampled Reference Data”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Schlemper and Yaman before the effective filing date of the claimed invention. The motivation for this combination of references would have been to apply Yaman’s self-supervised training strategy to Schlemper’s dual-domain reconstruction model in order to eliminate the need for fully sampled reference data (Yaman, Pages 12-16). This motivation for the combination of Schlemper and Yaman is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim(s) 9, Schlemper teaches a method, comprising:
training, by one or more processors coupled to a non-transitory memory (Figure 19; and Paragraph [0256]), based on a first loss value and a second loss value, a machine-learning model that generates magnetic resonance (MR) images from MR spatial frequency data, wherein training the machine-learning model (Figure 9B; Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function. In some embodiments, the spatial frequency domain loss function includes mean-squared error”; and Paragraph [0158]: “the loss function includes a first loss function to capture error in the spatial frequency domain and a second loss function to capture error in the image domain [...] The first and second measures of error may be combined (e.g., via a weighted combination) to produce an overall measure of error, which is to be minimized during the training process”) comprises:
calculating, by the one or more processors, the first loss value based on a first output of the machine-learning model generated using (Paragraph [0078]: “the neural network model to reconstruct MR images from spatial frequency data may include multiple neural network blocks each of which includes a plurality of convolutional layers configured to receive as input: (1) image domain data (e.g., representing the networks current reconstruction of the MR data); and (2) output obtained by applying an adjoint non-uniform Fourier transformation to the input MR spatial frequency data”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function”); and
calculating, by the one or more processors, the second loss value based on (1) the input MR spatial frequency data and a transformation of the first output of the machine-learning model (Paragraph [0078]: “the neural network model to reconstruct MR images from spatial frequency data may include multiple neural network blocks each of which includes a plurality of convolutional layers configured to receive as input: (1) image domain data (e.g., representing the networks current reconstruction of the MR data); and (2) output obtained by applying an adjoint non-uniform Fourier transformation to the input MR spatial frequency data”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function”), or (2) (Paragraph [0078]: “the neural network model to reconstruct MR images from spatial frequency data may include multiple neural network blocks each of which includes a plurality of convolutional layers configured to receive as input: (1) image domain data (e.g., representing the networks current reconstruction of the MR data); and (2) output obtained by applying an adjoint non-uniform Fourier transformation to the input MR spatial frequency data”; and Paragraph [0085]: “the neural network model used for generating MR images from under-sampled spatial frequency data may be trained using a loss function comprising a spatial frequency domain loss function and an image domain loss function. In some embodiments, the loss function is a weighted sum of the spatial frequency domain loss function and the image domain loss function”).
Schlemper fails to teach calculating, by the one or more processors, the first loss value based on a first output of the machine-learning model generated using a first partition of input MR spatial frequency data.
However, Yaman teaches calculating, by the one or more processors (Page 7, 1st Paragraph), the first loss value based on a first output of the machine-learning model generated using a first partition of input MR spatial frequency data (Figure 2; Equation (9); Page 5, 3rd Paragraph: “the acquired sub-sampled data indices, Ω from each scan is divided into two sets Θ and Λ as Ω = Θ ∪ Λ”; and Page 5, Last Paragraph: “self-supervised training methodology, the unrolled network only sees the acquired k-space data at locations Θ = Ω\Λ to enforce data consistency”); and
calculating, by the one or more processors (Page 7, 1st Paragraph), the second loss value based on (1) the input MR spatial frequency data and a transformation of the first output of the machine-learning model, or (2) a partition of the transformation of the first output and a second partition of the input MR spatial frequency data (Equation (10); Page 13, Last Paragraph – Page 14, 1st Paragraph: “the training database is partitioned into two sets of complementary datasets […] In our approach, we do a similar partitioning of the acquired data to two sets we denoted Θ and Λ […] the intuition for partitioning within the network is similar, as the unrolled network only sees Θ for data consistency during training, while Λ is only used to establish the network loss”; Page 5, last Paragraph: “the unrolled network output image […] data consistency is transformed to k-space using the encoding operator […] Then the loss is calculated in k-space with respect to the acquired k-space data at these locations”; and Page 7, 1st Paragraph: “for the proposed self-supervised training these correspond to the acquired k-space measurements at locations specified by Λ and the k-space corresponding to the network output image at the same locations”).
Schlemper teaches training a machine-learning model that generates MR images from MR spatial frequency data based on multiple loss terms, including spatial frequency-domain and image-domain losses that are minimized during training. Yaman teaches partitioning acquired k-space data Ω into disjoint subsets Θ and Λ (Ω = Θ ∪ Λ), generating a first output using a first partition Θ, transforming the network output image into k-space using an encoding operator EΛ, and calculating a loss based on the transformed output at Λ locations and the corresponding acquired MR spatial frequency data.
Therefore, it would have been obvious to one of ordinary skill in the art to combine Schlemper and Yaman before the effective filing date of the claimed invention. The motivation for this combination would have been to incorporate Yaman’s partition-based loss computation into Schlemper’s reconstruction training framework to enable training based on partitioned MR spatial frequency data and transformed outputs without requiring fully sampled reference data (Yaman, Pages 12-16). This motivation for the combination of Schlemper and Yaman is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
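The cited passages of Yaman describe transforming the unrolled network's output image to k-space with an encoding operator and calculating the loss against the acquired samples at the Λ locations only. The following sketch illustrates that step, assuming a single-coil 2-D FFT as the encoding operator (Yaman's actual operator includes coil sensitivities) and a simplified normalized L2 loss rather than Yaman's mixed normalized L1-L2 loss; the function name is hypothetical.

```python
import numpy as np

def kspace_loss_at_lambda(output_image, acquired_kspace, lambda_mask):
    """Normalized L2 loss between the k-space of the network output image and
    the acquired k-space data, evaluated only at the Lambda locations."""
    # Encoding operator: here a plain 2-D FFT (single-coil assumption).
    pred_kspace = np.fft.fft2(output_image)
    # Restrict the comparison to the Lambda loss locations.
    diff = (pred_kspace - acquired_kspace) * lambda_mask
    ref = acquired_kspace * lambda_mask
    return np.linalg.norm(diff) / np.linalg.norm(ref)
```

Because the network never sees the Λ samples during data consistency, this loss provides a supervision signal without any fully sampled reference data, which is the point of Yaman's self-supervised scheme.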
Regarding claim(s) 10, Schlemper as modified by Yaman teaches the method of claim 9, where Yaman teaches further comprising:
generating, by the one or more processors (Page 7, 1st Paragraph), the first partition of the input MR spatial frequency data by selecting a first subset of the input MR spatial frequency data (Figure 2; Equation (9); Page 5, 3rd Paragraph: “the acquired sub-sampled data indices, Ω from each scan is divided into two sets Θ and Λ as Ω = Θ ∪ Λ”; and Page 5, Last Paragraph: “self-supervised training methodology, the unrolled network only sees the acquired k-space data at locations Θ = Ω\Λ to enforce data consistency”); and
generating, by the one or more processors (Page 7, 1st Paragraph), the second partition of the input MR spatial frequency data by selecting a second subset of the input MR spatial frequency data (Equation (10); Page 13, Last Paragraph – Page 14, 1st Paragraph: “the training database is partitioned into two sets of complementary datasets […] In our approach, we do a similar partitioning of the acquired data to two sets we denoted Θ and Λ […] the intuition for partitioning within the network is similar, as the unrolled network only sees Θ for data consistency during training, while Λ is only used to establish the network loss”; Page 5, last Paragraph: “the unrolled network output image […] data consistency is transformed to k-space using the encoding operator […] Then the loss is calculated in k-space with respect to the acquired k-space data at these locations”; and Page 7, 1st Paragraph: “for the proposed self-supervised training these correspond to the acquired k-space measurements at locations specified by Λ and the k-space corresponding to the network output image at the same locations”).
Regarding claim(s) 11, Schlemper as modified by Yaman teaches the method of claim 9, where Schlemper teaches wherein the machine-learning model comprises a plurality of data consistency layers and a plurality of convolutional layers, and wherein the plurality of convolutional layers and the plurality of data consistency layers are arranged in a plurality of blocks, such that each of the plurality of blocks comprises at least one convolutional layer and at least one data consistency layer (Figure 1A-1C; Paragraph [0062]: “Additionally or alternatively, the first neural network sub-model may include at least one locally-connected layer, at least one data consistency layer, and/or at least one complex-conjugate symmetry layer”; Paragraph [0078]: “the neural network model to reconstruct MR images from spatial frequency data may include multiple neural network blocks each of which includes a plurality of convolutional layers”; and Paragraph [0091]: “the first neural network sub-model 102 includes one or more convolutional layers 104, a locally-connected layer 106, one or more transposed convolutional layers 108, a residual connection 109, complex-conjugate symmetry layer 105 and a data consistency layer 110.”).
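A hard data-consistency step of the kind performed by the data consistency layers cited above may be sketched as follows. This is an illustrative simplification only: Schlemper's data consistency layer uses a non-uniform Fourier transformation for non-Cartesian data, whereas this sketch assumes Cartesian data and a plain FFT, and the function name is hypothetical.

```python
import numpy as np

def data_consistency(pred_image, acquired_kspace, sampled_mask):
    """Hard data-consistency step: replace the predicted k-space with the
    acquired samples wherever they were actually measured."""
    pred_kspace = np.fft.fft2(pred_image)
    # Keep acquired samples at measured locations; keep predictions elsewhere.
    dc_kspace = np.where(sampled_mask.astype(bool), acquired_kspace, pred_kspace)
    return np.fft.ifft2(dc_kspace)
```

In a block-structured network such as Schlemper's, a step like this would follow the convolutional layers of each block so that every intermediate reconstruction remains consistent with the measured data.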
Regarding claim(s) 12, Schlemper as modified by Yaman teaches the method of claim 9, where Schlemper teaches wherein the machine-learning model is a dual-domain (Paragraph [0057]: “Deep learning techniques have also been used for reconstructing MR images from under-sampled k-space data”; and Paragraph [0061]: “(1) processing the input MR spatial frequency data using the first neural network sub-model to obtain output MR spatial frequency data; (2) transforming the output MR spatial frequency data to the image domain to obtain input image-domain data; and (3) processing the input image-domain data using the second neural network sub-model to obtain the MR image.”), and wherein the machine-learning model is for reconstruction of non-Cartesian MRI data (Paragraph [0058]: “deep learning techniques for generating high-quality MR images from under-sampled spatial frequency data that: (1) operate both in the spatial frequency domain and in the image domain; and (2) enable reconstruction of MR images from non-Cartesian sampling trajectories”).
Additionally, where Yaman teaches wherein the machine-learning model is a dual-domain self-supervised model, wherein the machine-learning model is self-supervised in both k-space and image-based domains (Equation (9); Equation (10); Page 5-6, Chapter: “Proposed Self-supervised Training without Fully-Sampled Reference Data”; Page 5, 3rd Paragraph: “the acquired sub-sampled data indices, Ω from each scan is divided into two sets Θ and Λ as Ω = Θ ∪ Λ”; and Page 5, Last Paragraph: “the loss is calculated in k-space with respect to the acquired k-space data at these locations […] self-supervised training methodology, the unrolled network only sees the acquired k-space data at locations Θ = Ω\Λ to enforce data consistency”).
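The three-step dual-domain flow cited from Schlemper's paragraph [0061] (process the input in k-space, transform to the image domain, then process in the image domain) reduces to a simple pipeline. The sketch below assumes a plain inverse 2-D FFT as the domain transform and treats the two sub-models as opaque callables; it is an illustration of the cited description, not the reference's actual architecture.

```python
import numpy as np

def dual_domain_reconstruct(input_kspace, kspace_net, image_net):
    """Per the cited three-step description: (1) spatial-frequency
    sub-model, (2) transform to the image domain, (3) image-domain
    sub-model."""
    k_out = kspace_net(input_kspace)  # first sub-model: k-space domain
    image_in = np.fft.ifft2(k_out)    # domain transform
    return image_net(image_in)        # second sub-model: image domain
```

With identity sub-models this reduces to a plain inverse FFT of the input k-space, i.e., the zero-filled baseline that such dual-domain networks are intended to improve upon.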
Regarding claim(s) 13, Schlemper as modified by Yaman teaches the method of claim 9, where Schlemper teaches further comprising receiving patient MR data and feeding the patient MR data to the machine-learning model to obtain a reconstructed image based on the patient MR data (Paragraph [0060]: “generating an MR image from under-sampled spatial frequency domain data, the method comprising generating a magnetic resonance (MR) image from input MR spatial frequency data using a neural network model that comprises: (1) a first neural network sub-model configured to process spatial frequency domain data; and (2) a second neural network sub-model configured to process image domain data”; Paragraph [0069]: “multiple deep-learning techniques for reconstructing MR images from data obtained using non-Cartesian sampling trajectories”; and Paragraph [0254]: “portable MRI system 3900 that has been transported to a patient's bedside to perform a scan of the patient's knee”).
Relevant Prior Art Directed to State of Art
Arberet et al (US 2021/0150783 A1) are relevant prior art not applied in the rejection(s) above. Arberet discloses a method for reconstruction of a magnetic resonance (MR) image in an MR system, the method comprising: scanning, by the MR system, a patient with an MR sequence, the scanning resulting in first k-space measurements; reconstructing, by an image processor, the MR image from the first k-space measurements, the reconstructing inputting the first k-space data to a deep machine-learned network, the deep machine-learned network applying values for variables previously trained using unsupervised learning from multiple samples of second k-space measurements from patients, phantoms, and/or simulated MR, the previous training being from the samples without ground truths; and displaying the MR image.
Lazarus et al (US 2020/0058106 A1) are relevant prior art not applied in the rejection(s) above. Lazarus discloses a method, comprising: obtaining input magnetic resonance (MR) data using at least one radio-frequency (RF) coil of a magnetic resonance imaging (MRI) system; and generating an MR image from the input MR data at least in part by using a neural network model to suppress at least one artefact in the input MR data, wherein the neural network model comprises a first neural network portion configured to process data in a spatial frequency domain; and wherein using the neural network model to suppress the at least one artefact in the input MR data comprises processing, with the first neural network portion, spatial frequency domain data obtained from the input MR data, wherein the neural network model comprises a first neural network portion configured to process the input MR data in a domain other than the image domain.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONGBONG NAH whose telephone number is (571) 272-1361. The examiner can normally be reached M - F: 9:00 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL MISTRY can be reached on 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONGBONG NAH/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674