Prosecution Insights
Last updated: April 19, 2026
Application No. 18/472,077

HIGH SPATIOTEMPORAL FIDELITY MRI SYSTEM UTILIZING SELF-SUPERVISED LEARNING WITH SELF-SUPERVISED REGULARIZATION RECONSTRUCTION METHODOLOGY AND ASSOCIATED METHOD OF USE

Final Rejection — §103
Filed: Sep 21, 2023
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: The Curators of the University of Missouri
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (8 granted / 12 resolved; +4.7% vs TC avg) — grants above average
Interview Lift: -11.4% in resolved cases with interview (minimal negative lift)
Avg Prosecution: 2y 10m typical timeline; 46 applications currently pending
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Deltas are measured against the Tech Center average estimate; based on career data from 12 resolved cases.
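The per-statute deltas are internally consistent: if each delta is read as the examiner's rate minus the Tech Center average, all four statutes imply the same baseline. A quick check (variable names are illustrative; values are copied from the panel above):

```python
# Consistency check on the statute-specific numbers above, assuming
# delta = examiner_rate - tech_center_average (all values in percent).
examiner_rate = {"101": 5.7, "103": 56.3, "102": 21.1, "112": 13.8}
delta_vs_tc = {"101": -34.3, "103": 16.3, "102": -18.9, "112": -26.2}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")
# All four statutes imply the same 40.0% baseline, consistent with the
# single Tech Center average estimate the chart legend describes.
```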

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending for examination in the amendment filed 12/17/2025. Claims 1-3, 8, and 16-17 have been amended.

Priority

Acknowledgement is made of Applicant’s claim to priority of provisional application 63/376,529, filing date 09/21/2022.

Response to Arguments and Amendments

The objections of claims 1, 8, and 17 are withdrawn in view of the amendments. The 35 U.S.C. 112(b) rejection of claim 1 is withdrawn in view of the amendments. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, as necessitated by the newly added amendments.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-8, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US20230135995A1) in view of Yaman (Yaman, B., Gu, H., Hosseini, S. A. H., Demirel, O. B., Moeller, S., Ellermann, J., ... & Akçakaya, M. (2022). Multi‐mask self‐supervised learning for physics‐guided neural networks in highly accelerated magnetic resonance imaging.
NMR in Biomedicine, 35(12), e4798) and Hu (Hu, C., Li, C., Wang, H., Liu, Q., Zheng, H., Wang, S. (2021). Self-supervised Learning for MRI Reconstruction with a Parallel Network Training Framework. In: de Bruijne, M., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol. 12906. Springer, Cham).

Regarding claim 1, Chen teaches a system to process images to improve the quality of MRI images ([0002] Described herein are systems, methods, and instrumentalities associated with reconstructing magnetic resonance imaging (MRI) images based on a simultaneous multi-slice (e.g., two or more) dataset comprising under-sampled MRI data (e.g., MRI imagery or k-space data). [0001] The collection of k-space data may be a slow process and, as such, under-sampling may be applied to accelerate the operation. The under-sampled k-space data may then be reconstructed (e.g., into an MRI image) to obtain results having a similar quality as a fully-sampled dataset (e.g., a fully-sample MRI image)), comprising: an MRI ([0014] FIG. 1 is a block diagram illustrating an example system 100 for processing a simultaneous multi-slice (SMS) dataset 102 collected by a magnetic resonance imaging (MRI) device (e.g., an MRI scanner)); a processor (processor 602); and a memory, enabled to store data in electronic communication with the processor ([0034] The mass storage device 608 may include one or more magnetic disks such as one or more internal hard disks, one or more removable disks, one or more magneto-optical disks, one or more CD-ROM or DVD-ROM disks, etc., on which instructions and/or data may be stored to facilitate the operation of the processor 602), wherein the memory is able to receive image data of a dynamic scene from the MRI ([0034] The mass storage device 608 may include one or more magnetic disks such as one or more internal hard disks, one or more removable disks, one or more magneto-optical disks, one or more CD-ROM or DVD-ROM disks, etc., on which instructions and/or data may be stored to facilitate the operation of the processor 602. [0014] The SMS dataset 102 may also include imagery data (e.g., one or more MRI images) that visually depicts the anatomical structure based on the k-space data collected by the MRI device. These images may include a single static image or multiple dynamic images (e.g., multi-contrast images) that may be derived, for example, by applying a Fourier transform (e.g., inverse fast Fourier transform (FFT)) to the collected k-space data), and the processor is able to utilize a model based on a physics-guided Siamese network structure ([0018] Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities. The example in FIG.
3 shows that the sub-networks (e.g., 308a and 308b) may be configured to form a Siamese neural network) utilizing an encoding matrix with coil sensitivity maps and an undersampling mask that is converted to an intermediate fully sampled model deep learning ([0005] In examples, the first under-sampled MRI data comprised in the SMS dataset may include MRI data that are acquired using a first set of one or more coils. The second under-sampled MRI data comprised in the SMS dataset may include MRI data acquired using a second set of one or more coils. In these examples, respective coil sensitivity maps associated with the first set of one or more coils and the second set of one or more coils may be determined and used to estimate the k-space data described above. [0027] Once obtained, the coil sensitivity maps associated with the coils may be applied (e.g., by the ANN 304 and/or the DC checker 310) along with the Fourier transforms to reconstruct the multi-slice MRI data. For instance, MRI data (e.g., MRI images) associated with the multiple coils may be multiplied with corresponding complex conjugates of the coil sensitivity maps and then summed together to obtain coil-combined MRI images that may then be provided to the sub-networks 308a, 308b for denoising) and then reconstructed into unsampled k-space through a second model deep learning process (Fig. 4 408: perform data consistency check and reconstruct k-space associated with the multi-slice MRI data based on the intermediate MRI image); wherein the network structure includes physics-guided data augmentation and a network consistency concept ([0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data. ([0002] The training may further include determining a combined training loss (e.g., such as an average loss, a triplet loss, etc.) by jointly considering a first training loss associated with the first estimated MRI image and a second training loss associated with the second estimated MRI image, and adjusting parameters of the instance of the ANN based on a gradient descent of the combined training loss). Chen does not teach a self-supervised learning with self-supervised regularization model; a re-undersampling block. Yaman, in the same field of endeavor of MRI acceleration, teaches a self-supervised learning with self-supervised regularization model; a re-undersampling block ([pg. 4 para. 2] The 3D k-space datasets were inverse Fourier-transformed along the read-out direction, and these slices were processed individually. The knee and brain datasets were retrospectively undersampled to R = 8 using a uniform sheared 2D undersampling pattern.37 Additionally, for the knee datasets, where a fully sampled reference was available, further undersampling was performed at R = 8 using uniform 1D and 2D (ky-kz) random, and 1D and 2D and Poisson undersampling masks. The undersampling masks are provided in Figure S1). 
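As background on the SSDU technique this rationale leans on, the core move is to partition the acquired k-space locations into two disjoint sets, one feeding the data-consistency units and one held out to define the training loss. A minimal NumPy sketch of that split (function name, mask convention, and the hold-out ratio are illustrative assumptions, not details from Yaman):

```python
import numpy as np

def ssdu_split(acquired_mask, rho=0.4, rng=None):
    """Split an acquired undersampling mask into two disjoint masks:
    one kept for the data-consistency (DC) units and one held out to
    define the k-space training loss, following the SSDU idea quoted
    above. `rho` is an assumed hold-out fraction, not a value taken
    from the reference."""
    rng = rng or np.random.default_rng()
    acquired = np.flatnonzero(acquired_mask)
    held_out = rng.choice(acquired, size=int(rho * acquired.size), replace=False)
    loss_mask = np.zeros_like(acquired_mask)
    loss_mask.flat[held_out] = 1
    dc_mask = acquired_mask - loss_mask  # disjoint by construction
    return dc_mask, loss_mask

# Yaman's multi-mask variant repeats this split several times per scan
# so that every acquired sample eventually serves in both roles.
```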
[Image: Yaman, Figure 1]

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use a self-LR model and a re-undersampling block because "supervised training becomes inoperative in the absence of fully sampled data…self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3]. Chen does not teach the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function. Hu, in the same field of endeavor of self-supervised learning for MRI, teaches the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function ([Abstract] Specifically, during model optimization, two subsets are constructed by randomly selecting part of k-space data from the undersampled data and then fed into two parallel reconstruction networks to perform information recovery. Two reconstruction losses are defined on all the scanned data points to enhance the network’s capability of recovering the frequency information. Meanwhile, to constrain the learned unscanned data points of the network, a difference loss is designed to enforce consistency between the two parallel networks). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Hu to configure the physics-guided data augmentation and network consistency concept to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function because "it is difficult to obtain fully-sampled data in many scenarios due to physiological constraints or physical constraints. Recently, a self-supervised learning method (self-supervised learning via data undersampling, SSDU) was proposed specifically to solve the issue, where the undersampled data is split into two disjoint sets. One is treated as the input and the other is used to define the loss. Despite the impressive reconstruction performance achieved, there are two important issues. First, the two sets need to be split with caution. When the second set does not contain enough data, the training process becomes unstable. Second, since no constraint is imposed on the unscanned data points, there is no guarantee that the final outputs are the expected high-quality images and high uncertainties exist" [Hu pg. 2 para. 2].

Regarding claim 4, Chen, Yaman, and Hu teach the system of claim 1. Yaman teaches wherein the process utilizes deep learning priors ([pg. 5 para. 1] The iterative optimization problem in Equations (3) and (4) was unrolled for T = 10 iterations. Conjugate gradient descent was used in DC units of the unrolled network.20, 31 The proximal operator corresponding to the solution of Equation (3) employs the ResNet structure used in SSDU.31 It comprises input and output convolution layers and 15 residual blocks (RBs) each containing two convolutional layers, where the first layer is followed by a rectified linear unit (ReLU) and the second layer is followed by a constant multiplication layer. All layers had a kernel size of 3 × 3, 64 channels.
The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use deep learning priors because "In SSDU, the available measurements are split into two disjoint sets by a masking operation, which reduces the sensitivity to overfitting and is central for reliable performance. One of these sets is used in the DC units of the network, and the other is used to define the loss function in k-space" [Yaman pg. 2 para. 3]. Regarding claim 5, Chen, Yaman, and Hu teach the system of claim 4. Yaman teaches wherein deep learning priors include a physics-guided network that uses a ResNet structure with a predetermined number of residual blocks and is unrolled for a predetermined number of iterations ([pg. 5 para. 1] The iterative optimization problem in Equations (3) and (4) was unrolled for T = 10 iterations. Conjugate gradient descent was used in DC units of the unrolled network.20, 31 The proximal operator corresponding to the solution of Equation (3) employs the ResNet structure used in SSDU.31 It comprises input and output convolution layers and 15 residual blocks (RBs) each containing two convolutional layers, where the first layer is followed by a rectified linear unit (ReLU) and the second layer is followed by a constant multiplication layer. All layers had a kernel size of 3 × 3, 64 channels. The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use ResNet and is unrolled because "In SSDU, the available measurements are split into two disjoint sets by a masking operation, which reduces the sensitivity to overfitting and is central for reliable performance. One of these sets is used in the DC units of the network, and the other is used to define the loss function in k-space" [Yaman pg. 2 para. 3]. Regarding claim 6, Chen, Yaman, and Hu teach the system of claim 1. Yaman teaches wherein the re-undersampling includes an intermediate multicoil k-space followed by random undersampling followed by generating a coil combined image (See Fig. 1. [pg. 4 para. 2] The 3D k-space datasets were inverse Fourier-transformed along the read-out direction, and these slices were processed individually. The knee and brain datasets were retrospectively undersampled to R = 8 using a uniform sheared 2D undersampling pattern. Additionally, for the knee datasets, where a fully sampled reference was available, further undersampling was performed at R = 8 using uniform 1D and 2D (ky-kz) random, and 1D and 2D and Poisson undersampling masks. The undersampling masks are provided in Figure S1. As in SSDU, a ResNet structure was used for the regularizer in Equation (3), where the network parameters were shared across the unrolled network. Coil sensitivity maps were generated from 24 x 24 center of k-space using ESPIRiT). 
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use re-undersampling because "self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3]. Regarding claim 7, Chen, Yaman, and Hu teach the system of claim 1. Yaman teaches wherein the intermediate fully sampled model deep learning includes a series of iterations, each including ResNet for deep residual learning for image reconstruction followed by data consistency analysis ([pg. 5 para. 1] The iterative optimization problem in Equations (3) and (4) was unrolled for T = 10 iterations. Conjugate gradient descent was used in DC units of the unrolled network.20, 31 The proximal operator corresponding to the solution of Equation (3) employs the ResNet structure used in SSDU.31 It comprises input and output convolution layers and 15 residual blocks (RBs) each containing two convolutional layers, where the first layer is followed by a rectified linear unit (ReLU) and the second layer is followed by a constant multiplication layer. All layers had a kernel size of 3 × 3, 64 channels. The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use ResNet and data consistency analysis because "In SSDU, the available measurements are split into two disjoint sets by a masking operation, which reduces the sensitivity to overfitting and is central for reliable performance. One of these sets is used in the DC units of the network, and the other is used to define the loss function in k-space" [Yaman pg. 2 para. 3]. Regarding claim 8, Chen teaches a system to process images, for single band and/or multiband acceleration, to improve the quality of MRI images ([0002] Described herein are systems, methods, and instrumentalities associated with reconstructing magnetic resonance imaging (MRI) images based on a simultaneous multi-slice (e.g., two or more) dataset comprising under-sampled MRI data (e.g., MRI imagery or k-space data). [0001] The collection of k-space data may be a slow process and, as such, under-sampling may be applied to accelerate the operation. The under-sampled k-space data may then be reconstructed (e.g., into an MRI image) to obtain results having a similar quality as a fully-sampled dataset (e.g., a fully-sample MRI image)), comprising: an MRI ([0014] FIG. 
1 is a block diagram illustrating an example system 100 for processing a simultaneous multi-slice (SMS) dataset 102 collected by a magnetic resonance imaging (MRI) device (e.g., an MRI scanner)); a processor (processor 602); and a memory, enabled to store data in electronic communication with the processor ([0034] The mass storage device 608 may include one or more magnetic disks such as one or more internal hard disks, one or more removable disks, one or more magneto-optical disks, one or more CD-ROM or DVD-ROM disks, etc., on which instructions and/or data may be stored to facilitate the operation of the processor 602), wherein the memory is able to receive image data of a dynamic scene from the MRI ([0034] The mass storage device 608 may include one or more magnetic disks such as one or more internal hard disks, one or more removable disks, one or more magneto-optical disks, one or more CD-ROM or DVD-ROM disks, etc., on which instructions and/or data may be stored to facilitate the operation of the processor 602. [0014] The SMS dataset 102 may also include imagery data (e.g., one or more MRI images) that visually depicts the anatomical structure based on the k-space data collected by the MRI device. These images may include a single static image or multiple dynamic images (e.g., multi-contrast images) that may be derived, for example, by applying a Fourier transform (e.g., inverse fast Fourier transform (FFT)) to the collected k-space data), and the processor is able to utilize a model based on a physics-guided Siamese network structure ([0018] Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities. The example in FIG. 3 shows that the sub-networks (e.g., 308a and 308b) may be configured to form a Siamese neural network) utilizing an encoding matrix with coil sensitivity maps and an undersampling mask that is converted to a first model deep learning block that communicates with a plurality of physics guided subnets ([0005] In examples, the first under-sampled MRI data comprised in the SMS dataset may include MRI data that are acquired using a first set of one or more coils. The second under-sampled MRI data comprised in the SMS dataset may include MRI data acquired using a second set of one or more coils. In these examples, respective coil sensitivity maps associated with the first set of one or more coils and the second set of one or more coils may be determined and used to estimate the k-space data described above. [0027] Once obtained, the coil sensitivity maps associated with the coils may be applied (e.g., by the ANN 304 and/or the DC checker 310) along with the Fourier transforms to reconstruct the multi-slice MRI data. For instance, MRI data (e.g., MRI images) associated with the multiple coils may be multiplied with corresponding complex conjugates of the coil sensitivity maps and then summed together to obtain coil-combined MRI images that may then be provided to the sub-networks 308a, 308b for denoising. [0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 
3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data); wherein the network structure includes physics-guided data augmentation and a network consistency concept ([0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data. ([0002] The training may further include determining a combined training loss (e.g., such as an average loss, a triplet loss, etc.) by jointly considering a first training loss associated with the first estimated MRI image and a second training loss associated with the second estimated MRI image, and adjusting parameters of the instance of the ANN based on a gradient descent of the combined training loss). Chen does not teach a self-supervised learning with self-supervised regularization model; a re-undersampling block. Yaman, in the same field of endeavor of MRI acceleration, teaches a self-supervised learning with self-supervised regularization model; a re-undersampling block (See Yaman Fig. 1 above. [pg. 4 para. 2] The 3D k-space datasets were inverse Fourier-transformed along the read-out direction, and these slices were processed individually. The knee and brain datasets were retrospectively undersampled to R = 8 using a uniform sheared 2D undersampling pattern.37 Additionally, for the knee datasets, where a fully sampled reference was available, further undersampling was performed at R = 8 using uniform 1D and 2D (ky-kz) random, and 1D and 2D and Poisson undersampling masks. The undersampling masks are provided in Figure S1). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use a self-LR model and a re-undersampling block because "supervised training becomes inoperative in the absence of fully sampled data…self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3]. Chen does not teach the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function. Hu, in the same field of endeavor of self-supervised learning for MRI, teaches the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function ([Abstract] Specifically, during model optimization, two subsets are constructed by randomly selecting part of k-space data from the undersampled data and then fed into two parallel reconstruction networks to perform information recovery. Two reconstruction losses are defined on all the scanned data points to enhance the network’s capability of recovering the frequency information. 
Meanwhile, to constrain the learned unscanned data points of the network, a difference loss is designed to enforce consistency between the two parallel networks). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Hu to configure the physics-guided data augmentation and network consistency concept to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function because "it is difficult to obtain fully-sampled data in many scenarios due to physiological constraints or physical constraints. Recently, a self-supervised learning method (self-supervised learning via data undersampling, SSDU) was proposed specifically to solve the issue, where the undersampled data is split into two disjoint sets. One is treated as the input and the other is used to define the loss. Despite the impressive reconstruction performance achieved, there are two important issues. First, the two sets need to be split with caution. When the second set does not contain enough data, the training process becomes unstable. Second, since no constraint is imposed on the unscanned data points, there is no guarantee that the final outputs are the expected high-quality images and high uncertainties exist" [Hu pg. 2 para. 2]. Regarding claim 15, Chen, Yaman, and Hu teach the system of claim 8. Yaman teaches wherein the re-undersampling includes an intermediate multicoil k-space followed by random undersampling that utilizes a design comparable to the original undersampling mask followed by generating a coil combined image (See Fig. 1. [pg. 4 para. 2] The 3D k-space datasets were inverse Fourier-transformed along the read-out direction, and these slices were processed individually. The knee and brain datasets were retrospectively undersampled to R = 8 using a uniform sheared 2D undersampling pattern. Additionally, for the knee datasets, where a fully sampled reference was available, further undersampling was performed at R = 8 using uniform 1D and 2D (ky-kz) random, and 1D and 2D and Poisson undersampling masks. The undersampling masks are provided in Figure S1. As in SSDU, a ResNet structure was used for the regularizer in Equation (3), where the network parameters were shared across the unrolled network. Coil sensitivity maps were generated from 24 x 24 center of k-space using ESPIRiT). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use re-undersampling because "self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3]. Regarding claim 16, Chen, Yaman, and Hu teach the system of claim 8. Chen further teaches the physics-guided Siamese network ([0018] Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities. The example in FIG. 
3 shows that the sub-networks (e.g., 308a and 308b) may be configured to form a Siamese neural network), which includes the physics-guided data augmentation ([0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data), and the physics-guided network consistency concept that is included in the loss function ([0002] The training may further include determining a combined training loss (e.g., such as an average loss, a triplet loss, etc.) by jointly considering a first training loss associated with the first estimated MRI image and a second training loss associated with the second estimated MRI image, and adjusting parameters of the instance of the ANN based on a gradient descent of the combined training loss). Chen does not teach the self-supervised learning with self-supervised regularization model. Yaman teaches the self-supervised learning with self-supervised regularization model (see Yaman Fig. 1 above). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use a self-LR model because "supervised training becomes inoperative in the absence of fully sampled data…self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3].

Regarding claim 17, Chen teaches a method for processing images, that are single and/or multiband, to improve the quality of MRI images ([0002] Described herein are systems, methods, and instrumentalities associated with reconstructing magnetic resonance imaging (MRI) images based on a simultaneous multi-slice (e.g., two or more) dataset comprising under-sampled MRI data (e.g., MRI imagery or k-space data). [0001] The collection of k-space data may be a slow process and, as such, under-sampling may be applied to accelerate the operation. The under-sampled k-space data may then be reconstructed (e.g., into an MRI image) to obtain results having a similar quality as a fully-sampled dataset (e.g., a fully-sample MRI image)), comprising: utilizing a processor in electronic communication with a memory ([0034] The mass storage device 608 may include one or more magnetic disks such as one or more internal hard disks, one or more removable disks, one or more magneto-optical disks, one or more CD-ROM or DVD-ROM disks, etc., on which instructions and/or data may be stored to facilitate the operation of the processor 602), wherein the memory is able to receive image data of an image from an MRI ([0014] The SMS dataset 102 may also include imagery data (e.g., one or more MRI images) that visually depicts the anatomical structure based on the k-space data collected by the MRI device.
These images may include a single static image or multiple dynamic images (e.g., multi-contrast images) that may be derived, for example, by applying a Fourier transform (e.g., inverse fast Fourier transform (FFT)) to the collected k-space data), and utilizing the processor with a model based on a physics-guided Siamese network structure ([0018] Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities. The example in FIG. 3 shows that the sub-networks (e.g., 308a and 308b) may be configured to form a Siamese neural network) utilizing an encoding matrix with coil sensitivity maps and an undersampling mask that is converted to a first model deep learning block that communicates with a plurality of physics guided subnet ([0005] In examples, the first under-sampled MRI data comprised in the SMS dataset may include MRI data that are acquired using a first set of one or more coils. The second under-sampled MRI data comprised in the SMS dataset may include MRI data acquired using a second set of one or more coils. In these examples, respective coil sensitivity maps associated with the first set of one or more coils and the second set of one or more coils may be determined and used to estimate the k-space data described above. [0027] Once obtained, the coil sensitivity maps associated with the coils may be applied (e.g., by the ANN 304 and/or the DC checker 310) along with the Fourier transforms to reconstruct the multi-slice MRI data. For instance, MRI data (e.g., MRI images) associated with the multiple coils may be multiplied with corresponding complex conjugates of the coil sensitivity maps and then summed together to obtain coil-combined MRI images that may then be provided to the sub-networks 308a, 308b for denoising. [0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data); wherein the network structure includes physics-guided data augmentation and a network consistency concept ([0029] FIG. 5 illustrates an example process 500 for training a neural network (e.g., an instance of the ANN 104 of FIG. 1 and/or ANN 304 of FIG. 3) to perform the multi-slice MRI data processing operations described herein. The training may be performed using data collected from practical MRI procedures (e.g., under-sampled multi-slice MRI data acquired using an SMS technique), and/or computer-simulated or computer-augmented MRI data. ([0002] The training may further include determining a combined training loss (e.g., such as an average loss, a triplet loss, etc.) by jointly considering a first training loss associated with the first estimated MRI image and a second training loss associated with the second estimated MRI image, and adjusting parameters of the instance of the ANN based on a gradient descent of the combined training loss). 
Chen does not teach a self-supervised learning with self-supervised regularization model; a re-undersampling block. Yaman, in the same field of endeavor of MRI acceleration, teaches a self-supervised learning with self-supervised regularization model; a re-undersampling block (See Yaman Fig. 1 above. [pg. 4 para. 2] The 3D k-space datasets were inverse Fourier-transformed along the read-out direction, and these slices were processed individually. The knee and brain datasets were retrospectively undersampled to R = 8 using a uniform sheared 2D undersampling pattern.37 Additionally, for the knee datasets, where a fully sampled reference was available, further undersampling was performed at R = 8 using uniform 1D and 2D (ky-kz) random, and 1D and 2D and Poisson undersampling masks. The undersampling masks are provided in Figure S1). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Yaman to use a self-LR model and a re-undersampling block because "supervised training becomes inoperative in the absence of fully sampled data…self-supervision via data undersampling (SSDU) trains physics-guided neural networks by utilizing only the acquired subsampled measurements" [Yaman pg. 2 para. 3]. Chen does not teach the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function. Hu, in the same field of endeavor of self-supervised learning for MRI, teaches the physics-guided data augmentation and network consistency concept configured to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function ([Abstract] Specifically, during model optimization, two subsets are constructed by randomly selecting part of k-space data from the undersampled data and then fed into two parallel reconstruction networks to perform information recovery. Two reconstruction losses are defined on all the scanned data points to enhance the network’s capability of recovering the frequency information. Meanwhile, to constrain the learned unscanned data points of the network, a difference loss is designed to enforce consistency between the two parallel networks). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Hu to configure the physics-guided data augmentation and network consistency concept to allow for all acquired data to be utilized for data consistency purposes and calculation of a loss function because "it is difficult to obtain fully-sampled data in many scenarios due to physiological constraints or physical constraints. Recently, a self-supervised learning method (self-supervised learning via data undersampling, SSDU) was proposed specifically to solve the issue, where the undersampled data is split into two disjoint sets. One is treated as the input and the other is used to define the loss. Despite the impressive reconstruction performance achieved, there are two important issues. First, the two sets need to be split with caution. When the second set does not contain enough data, the training process becomes unstable. Second, since no constraint is imposed on the unscanned data points, there is no guarantee that the final outputs are the expected high-quality images and high uncertainties exist" [Hu pg. 2 para. 
2].

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Yaman, Hu, and Mailhe (US11422217B2).

Regarding claim 2, Chen, Yaman, and Hu teach the system of claim 1. Chen teaches a denoise block to control noise from undersampling ([0022] For example, the DC checker 310 may be configured to receive the MRI data produced by the Siamese network (e.g., denoised first and second intermediate MRI images respectively predicted by the sub-networks 308a and 308b based on the input), process the data (e.g., the first and second intermediate MRI images) to derive corresponding MRI (e.g., k-space data), and obtain respective MRI images (e.g., disentangled MRI images 306a and 306b) corresponding to the multiple slices of the SMS dataset 302 based on the derived MRI data (e.g., by applying an inverse Fourier transform such as an inverse FFT to the derived MRI data)). Chen does not teach a denoise block to control noise from undersampling, wherein, optionally, the denoise block utilizes Unet operating on the unsampled k-space through a second model deep learning process. Mailhe, in the same field of endeavor of MRI acceleration, teaches a denoise block to control noise from undersampling, wherein, optionally, the denoise block utilizes Unet operating on the unsampled k-space through a second model deep learning process ([col. 10 ln. 26-39] In one embodiment, the GAN being progressively trained is an image-to-image network trained to act as a regularizer in the reconstruction. PGAN is adapted into the image-to-image neural network architecture. FIG. 3 shows an example. FIG. 3 shows a GAN formed by the generator 301 and the discriminator 330. The generator 301 receives the image 300 (e.g., data representing the patient in the object or image domain) and outputs a denoised or regularized image 328. The discriminator 330 determines whether the image 328 is estimated (i.e., made up by the generator 301) or is an actual image without noise or artifact. The generator 301 is an image-to-image network which receives an input image 300 and outputs an image 328. Any image-to-image network may be used, such as a U-net. [col. 6 ln. 47-56] A is the MRI model to connect the image to MRI-space (k-space), which can involve a combination of an under-sampling matrix U, a Fourier transform F, and sensitivity maps S. T represents a sparsifying (shrinkage) transform. λ is a regularization parameter. The first term of the right side of equation 1 represents the image (2D or 3D spatial distribution or representation) fit to the acquired data, and the second term of the right side is a term added for denoising by reduction of artifacts (e.g., aliasing) due to under sampling). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Mailhe to use a denoise block "for reconstruction of a magnetic resonance (MR) image in an MR system" [Mailhe col. 1 ln. 65-66].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Yaman, Hu, Mailhe, and He (Chen, X., & He, K. (2021). Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 15750-15758)).

Regarding claim 3, Chen, Yaman, and Hu teach the system of claim 1. Mailhe teaches an additional denoise block, wherein, optionally, the additional denoise block utilizes an additional Unet ([col. 8 ln.
21-29] Alternatively, a different regularizer (i.e., generator of PGAN) is provided for each iteration. Different PGANs are trained for different iterations in the reconstruction. Each generator and/or PGAN may have the same architecture, but each is separately learned so that different values of the learnable parameters may be provided for different iterations of the reconstruction. Each generator for each reconstruction iteration is progressively trained, such as training separate image-to-image networks. [col. 10 ln. 37-39] The generator 301 is an image-to-image network which receives an input image 300 and outputs an image 328. Any image-to-image network may be used, such as a U-net). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Mailhe "so that different values of the learnable parameters may be provided for different iterations of the reconstruction" [Mailhe col. 8 ln. 25-27]. He, in the same field of endeavor of Siamese networks, teaches a stop-gradient (Figure 1). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4].

Claims 9-13 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Yaman, Hu and He.

Regarding claim 9, Chen, Yaman, and Hu teach the system of claim 8. He, in the same field of endeavor of Siamese networks, teaches wherein the first model deep learning block includes a stop-gradient.

[Image: He, Figure 1]

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4].

Regarding claim 10, Chen, Yaman, and Hu teach the system of claim 8. Chen further teaches wherein the plurality of physics-guided subnets includes one block with backpropagation ([0031] If the determination at 512 is that the training termination criteria are not satisfied, the neural network may at 516 adjust its parameters by backpropagating the training loss (e.g., based on a gradient descent associated with the training loss) through the neural network). Chen does not teach the remainder of physics-guided subnets include a stop-gradient with shared weights with only one subnet updating weights during backpropagation and the other subnets using stop-gradient to prevent collapsing. He teaches the remainder of physics-guided subnets include a stop-gradient with shared weights with only one subnet updating weights during backpropagation and the other subnets using stop-gradient to prevent collapsing (See Fig. 1. [pg. 15751 para. 8] Our empirical study challenges the necessity of the momentum encoder for preventing collapsing. We discover that the stop-gradient operation is critical. This discovery can be obscured with the usage of a momentum encoder, which is always accompanied with stop-gradient (as it is not updated by its parameters’ gradients)).
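The stop-gradient operation He is cited for is a one-line idiom in modern frameworks. A hedged PyTorch sketch of the symmetric SimSiam-style objective (tensor names are illustrative; this outlines the cited paper's loss in outline, not code from the application or the references):

```python
import torch.nn.functional as F

def simsiam_loss(p1, z1, p2, z2):
    """Symmetric negative cosine similarity with stop-gradient.
    Each branch's target is detach()-ed, so gradients flow through
    only one path per term; per He, this is what prevents collapsed
    (constant-output) solutions."""
    loss_a = -F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
    loss_b = -F.cosine_similarity(p2, z1.detach(), dim=-1).mean()
    return 0.5 * (loss_a + loss_b)
```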
[Image: He, Figure 3]

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4].

Regarding claim 11, Chen, Yaman, Hu, and He teach the system of claim 10. Chen further teaches wherein the physics-guided subnet that includes one block with backpropagation includes a second model deep learning block connected to a denoise block ([0022] For example, the DC checker 310 may be configured to receive the MRI data produced by the Siamese network (e.g., denoised first and second intermediate MRI images respectively predicted by the sub-networks 308a and 308b based on the input), process the data (e.g., the first and second intermediate MRI images) to derive corresponding MRI (e.g., k-space data), and obtain respective MRI images (e.g., disentangled MRI images 306a and 306b) corresponding to the multiple slices of the SMS dataset 302 based on the derived MRI data (e.g., by applying an inverse Fourier transform such as an inverse FFT to the derived MRI data)).

Regarding claim 12, Chen, Yaman, Hu, and He teach the system of claim 11. Chen further teaches wherein the plurality of physics-guided subnets includes a third model deep learning block connected to a denoise block ([0018] In examples, the ANN 304 may include multiple (e.g., two or more) sub-networks (e.g., 308a and 308b shown in FIG. 3) having identical or substantially similar structures (e.g., in terms of the number of layers, types of layers, number of feature maps or vectors generated by each network, etc.) and/or identical or substantially similar operating parameters (e.g., weights associated with the kernels or filters of each network). Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities). Chen does not teach a stop gradient. He teaches a stop gradient (Figure 1). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4].

Regarding claim 13, Chen, Yaman, Hu, and He teach the system of claim 12. Yaman teaches wherein the first model deep learning block, the second model deep learning block, and the third model deep learning block includes a series of iterations each including an unrolled network including data consistency block and a ResNet for deep residual learning for image reconstruction followed by data consistency analysis ([pg. 5 para. 1] The iterative optimization problem in Equations (3) and (4) was unrolled for T = 10 iterations.
Conjugate gradient descent was used in DC units of the unrolled network.20, 31 The proximal operator corresponding to the solution of Equation (3) employs the ResNet structure used in SSDU.31 It comprises input and output convolution layers and 15 residual blocks (RBs) each containing two convolutional layers, where the first layer is followed by a rectified linear unit (ReLU) and the second layer is followed by a constant multiplication layer. All layers had a kernel size of 3 × 3, 64 channels. The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Yaman to use an unrolled network including DC and ResNet because "In SSDU, the available measurements are split into two disjoint sets by a masking operation, which reduces the sensitivity to overfitting and is central for reliable performance. One of these sets is used in the DC units of the network, and the other is used to define the loss function in k-space" [Yaman pg. 2 para. 3].

Regarding claim 18, Chen, Yaman, and Hu teach the method of claim 17. Chen further teaches wherein the plurality of physics-guided subnets includes one block with backpropagation ([0031] If the determination at 512 is that the training termination criteria are not satisfied, the neural network may at 516 adjust its parameters by backpropagating the training loss (e.g., based on a gradient descent associated with the training loss) through the neural network). Chen does not teach the remainder with stop-gradient with shared weights with only one subnet updating weights during backpropagation and the other subnets using stop-gradient to prevent collapsing. He teaches the remainder with stop-gradient with shared weights with only one subnet updating weights during backpropagation and the other subnets using stop-gradient to prevent collapsing (See He Fig. 1 and Fig. 3 above. [pg. 15751 para. 8] Our empirical study challenges the necessity of the momentum encoder for preventing collapsing. We discover that the stop-gradient operation is critical. This discovery can be obscured with the usage of a momentum encoder, which is always accompanied with stop-gradient (as it is not updated by its parameters’ gradients)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4].

Regarding claim 19, Chen, Yaman, Hu, and He teach the method of claim 18.
Chen further teaches wherein the physics-guided subnet that includes one block with backpropagation includes a second model deep learning block connected to a denoise block ([0022] For example, the DC checker 310 may be configured to receive the MRI data produced by the Siamese network (e.g., denoised first and second intermediate MRI images respectively predicted by the sub-networks 308a and 308b based on the input), process the data (e.g., the first and second intermediate MRI images) to derive corresponding MRI (e.g., k-space data), and obtain respective MRI images (e.g., disentangled MRI images 306a and 306b) corresponding to the multiple slices of the SMS dataset 302 based on the derived MRI data (e.g., by applying an inverse Fourier transform such as an inverse FFT to the derived MRI data)) and the plurality of physics guided subnets with stop gradient includes a third model deep learning block connected to a denoise block ([0018] In examples, the ANN 304 may include multiple (e.g., two or more) sub-networks (e.g., 308a and 308b shown in FIG. 3) having identical or substantially similar structures (e.g., in terms of the number of layers, types of layers, number of feature maps or vectors generated by each network, etc.) and/or identical or substantially similar operating parameters (e.g., weights associated with the kernels or filters of each network). Each of the multiple sub-networks may be trained to process a corresponding MRI slice included in the SMS dataset 302 and, together, the multiple sub-networks may be capable of learning (e.g., identifying) the similarities and/or dissimilarities of the different MRI slices included in the SMS dataset 302 and denoise (e.g., remove artifacts from) the SMS dataset 302 based on the learned (e.g., identified) similarities and/or dissimilarities). Chen does not teach a stop gradient. He teaches a stop gradient (Figure 1). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of He to use a stop-gradient because "collapsing solutions do exist, but a stop-gradient operation (Figure 1) is critical to prevent such solutions" [pg. 15750 para. 4]. Regarding claim 20, Chen, Yaman, Hu and He teach the method of claim 19. Yaman teaches wherein the first model deep learning block, the second model deep learning block, and the third model deep learning block includes a series of iterations each including a ResNet for deep residual learning for image reconstruction followed by data consistency analysis ([pg. 5 para. 1] The iterative optimization problem in Equations (3) and (4) was unrolled for T = 10 iterations. Conjugate gradient descent was used in DC units of the unrolled network.20, 31 The proximal operator corresponding to the solution of Equation (3) employs the ResNet structure used in SSDU.31 It comprises input and output convolution layers and 15 residual blocks (RBs) each containing two convolutional layers, where the first layer is followed by a rectified linear unit (ReLU) and the second layer is followed by a constant multiplication layer. All layers had a kernel size of 3 × 3, 64 channels. The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters). 
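The unrolled structure the Yaman quotations keep returning to, T alternations of a learned ResNet regularizer and a data-consistency step with weights shared across iterations, reduces to a short loop. A PyTorch sketch with stand-in callables; Yaman's conjugate-gradient data-consistency solve is simplified here to a single gradient step:

```python
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """Skeleton of a physics-guided unrolled network: T iterations of
    (regularizer -> data consistency), with the same weights reused on
    every pass. `forward_op`/`adjoint_op` stand in for the encoding
    operator E (coil sensitivities, Fourier transform, undersampling
    mask); all names here are illustrative."""
    def __init__(self, regularizer, T=10, step=0.5):
        super().__init__()
        self.regularizer, self.T, self.step = regularizer, T, step

    def forward(self, x, y, forward_op, adjoint_op):
        for _ in range(self.T):               # shared weights each pass
            x = self.regularizer(x)           # learned ResNet prior
            residual = forward_op(x) - y      # mismatch on acquired k-space
            x = x - self.step * adjoint_op(residual)  # simplified DC update
        return x
```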
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Yaman to use an unrolled network including DC and ResNet because "In SSDU, the available measurements are split into two disjoint sets by a masking operation, which reduces the sensitivity to overfitting and is central for reliable performance. One of these sets is used in the DC units of the network, and the other is used to define the loss function in k-space" [Yaman pg. 2 para. 3].

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Yaman, Hu, He, and Mailhe.

Regarding claim 14, Chen, Yaman, Hu, and He teach the system of claim 12. Mailhe, in the same field of endeavor of MRI acceleration, teaches wherein the denoise block includes controlling noise from undersampling with Unet operating on the unsampled k-space through a model deep learning process ([col. 10 ln. 26-39] In one embodiment, the GAN being progressively trained is an image-to-image network trained to act as a regularizer in the reconstruction. PGAN is adapted into the image-to-image neural network architecture. FIG. 3 shows an example. FIG. 3 shows a GAN formed by the generator 301 and the discriminator 330. The generator 301 receives the image 300 (e.g., data representing the patient in the object or image domain) and outputs a denoised or regularized image 328. The discriminator 330 determines whether the image 328 is estimated (i.e., made up by the generator 301) or is an actual image without noise or artifact. The generator 301 is an image-to-image network which receives an input image 300 and outputs an image 328. Any image-to-image network may be used, such as a U-net. [col. 6 ln. 47-56] A is the MRI model to connect the image to MRI-space (k-space), which can involve a combination of an under-sampling matrix U, a Fourier transform F, and sensitivity maps S. T represents a sparsifying (shrinkage) transform. λ is a regularization parameter. The first term of the right side of equation 1 represents the image (2D or 3D spatial distribution or representation) fit to the acquired data, and the second term of the right side is a term added for denoising by reduction of artifacts (e.g., aliasing) due to under sampling). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Chen with the teachings of Mailhe to use a denoise block "for reconstruction of a magnetic resonance (MR) image in an MR system" [Mailhe col. 1 ln. 65-66].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACQUELINE R ZAK/Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Sep 21, 2023 — Application Filed
Oct 05, 2025 — Non-Final Rejection — §103
Dec 17, 2025 — Response Filed
Feb 24, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340 — PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12462343 — MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
Granted Nov 04, 2025 (2y 5m to grant)

Patent 12373946 — ASSAY READING METHOD
Granted Jul 29, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 55% (-11.4%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
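The footnote's derivation appears to be simple ratio arithmetic; a sketch under that assumption, with the counts copied from this page:

```python
# Reproducing the headline projections, assuming grant probability is
# just the career allow rate and the interview adjustment is additive.
granted, resolved = 8, 12
grant_probability = granted / resolved                     # 0.667 -> "67%"
interview_lift = -11.4                                     # percentage points
with_interview = 100 * grant_probability + interview_lift  # 55.3 -> "55%"

print(f"Grant probability: {grant_probability:.0%}")   # 67%
print(f"With interview:    {with_interview:.0f}%")     # 55%
```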
