DETAILED ACTION

Election/Restrictions

Applicant's election without traverse of Invention I, claims 1-15, in the reply filed on 3/6/2026 is acknowledged. Claims 16-28 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 3/6/2026.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 recites the limitation the "computer-readable medium of claim 9," but claim 9 is directed to a method. The examiner will interpret claim 11 as depending from claim 10. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 7-10, and 12-15 are rejected under 35 U.S.C.
103 as being unpatentable over Zhu et al (NPL: Image reconstruction by domain transform manifold learning) in view of Zhang et al (US 20190347772).

Regarding claim 1, Zhu discloses a method for medical imaging, comprising: collecting a first medical image of a first patient from a database (pg. 15 Fig. 1 An optimal mapping between sensor domain and image domain is determined via supervised learning of sensor (top) and image (bottom) domain pairs); training a neural network using the training dataset (pg. 2 We implemented our unified reconstruction framework with a deep neural network feed-forward architecture composed of fully-connected layers followed by a sparse convolutional autoencoder (Fig. 1c)); after training the neural network using the training dataset, applying sensor data acquired from a second patient using a medical imaging system as an input to the neural network (pg. 2 Automated Transform by Manifold Approximation (AUTOMAP), that learns a near-optimal reconstruction mapping between the sensor domain data and image domain output (Fig. 1a)); generating a second medical image of the second patient based on an output of the neural network (pg. 15 Fig. 1b, An optimal mapping between sensor domain and image domain is determined via supervised learning of sensor (top) and image (bottom) domain pairs. The training process implicitly learns a robust low-dimensional joint manifold 𝒳 × 𝒴 over which the reconstruction function f(x) = φ_𝒴 ∘ g ∘ φ_𝒳⁻¹(x) is conditioned. c, AUTOMAP is implemented with a deep neural network architecture composed of fully-connected layers (FC1 to FC3) with hyperbolic tangent activations followed by a convolutional autoencoder (FC3 to Image) with rectifier nonlinearity activations (see Supplementary Methods for model architecture details)); and displaying the second medical image of the second patient for clinical analysis (pg. 16 see Fig. 2 Reconstruction performance of AUTOMAP compared with conventional techniques).
Zhu fails to teach, but Zhang teaches, splitting the first medical image into a first image patch and a second image patch (¶44 The high quality image and the lower quality image can be divided into a set of patches); applying a Fourier transform to the first image patch to transform the first image patch into a first sensor data patch (¶46 The deep learning model 110 may be trained using one or more training datasets comprising the MR image data. In an example, the training dataset may be 3D volume image data comprising multiple axial slices, and each slice may be complex-valued images each may include two channels for real and imaginary components. The training dataset may comprise lower quality images obtained from MR imaging devices. For example, the low quality input image can be simply obtained via inverse Fourier Transform (FT) of undersampled data (e.g., k-space data)); and creating a training dataset comprising the first image patch and the first sensor data patch (¶44 the image used for training (e.g., low quality and high quality images) may be divided into patches; ¶46, quoted above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have implemented the teachings of splitting the first medical image into a first image patch and a second image patch, applying a Fourier transform to the first image patch to transform the first image patch into a first sensor data patch, and creating a training dataset comprising the first image patch and the first sensor data patch from Zhang into the method for medical imaging as disclosed by Zhu. The motivation for doing so is to improve image quality with shortened acquisition time.

Regarding claim 3, the combination of Zhu and Zhang discloses the method of claim 1, further comprising resizing the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch (Zhang ¶44 A size of an image patch may be dependent on the application such as the possible size of a recognizable feature contained in the image. Alternatively, the size of an image patch may be pre-determined or based on empirical data). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 4, the combination of Zhu and Zhang discloses the method of claim 1, further comprising adding random noise to the first sensor data patch before creating the training dataset (Zhang ¶46 synthetic noise may be added to high quality images to create noisy images). The motivation to combine the references is discussed above in the rejection of claim 1.
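For context, the training-data construction at issue in claims 1, 3, and 4 (splitting an image into patches, Fourier-transforming each patch into a sensor-domain patch, and adding random noise) can be sketched as follows. This is an illustrative sketch only, not code from Zhu or Zhang; the patch size, noise level, and function names are arbitrary assumptions made for the example.

```python
import numpy as np

def build_training_pairs(image, patch=32, noise_std=0.01, rng=None):
    """Split a 2-D image into non-overlapping patches, FFT each patch
    into a k-space (sensor-domain) patch, and add complex Gaussian
    noise, mirroring the claimed training-dataset construction."""
    rng = rng or np.random.default_rng(0)
    pairs = []
    for i in range(0, image.shape[0] - patch + 1, patch):
        for j in range(0, image.shape[1] - patch + 1, patch):
            img_patch = image[i:i + patch, j:j + patch]
            k_patch = np.fft.fft2(img_patch)  # image domain -> sensor domain
            k_patch = k_patch + noise_std * (
                rng.standard_normal(k_patch.shape)
                + 1j * rng.standard_normal(k_patch.shape))
            pairs.append((k_patch, img_patch))  # (network input, target) pair
    return pairs

pairs = build_training_pairs(np.zeros((64, 64)))
print(len(pairs))  # 4 patches from a 64x64 image at 32x32 patch size
```

A trained network would then map each noisy k-space patch back to its image patch, consistent with the sensor-to-image mapping discussed for claim 1.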
Regarding claim 5, the combination of Zhu and Zhang discloses the method of claim 1, wherein training the neural network using the training dataset comprises providing the first sensor data patch as the input to the neural network and associating the first sensor data patch with the first image patch as the output of the neural network (Zhang ¶45 In some cases, one or more patches may be selected from the set of patches and used for training the model. In some instances, one or more patches corresponding to the same coordinates may be selected from a pair of images; ¶47 In some cases, the input data may be 3D volume comprising multiple axial slices. In an example, an input and output slices may be complex-valued images of the same size and each include two channels for real and imaginary components). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 7, the combination of Zhu and Zhang discloses the method of claim 1, wherein the neural network comprises a data-driven, manifold learning neural network (Zhu pg. 1 abstract AUtomated TransfOrm by Manifold APproximation (AUTOMAP), which recasts image reconstruction as a data-driven, supervised learning task that allows a mapping between sensor and image domain to emerge from an appropriate corpus of training data). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 8, the combination of Zhu and Zhang discloses the method of claim 1, further comprising: applying the Fourier transform to the second image patch to transform the second image patch into a second sensor data patch (Zhang ¶44 The high quality image and the lower quality image can be divided into a set of patches; ¶46 The deep learning model 110 may be trained using one or more training datasets comprising the MR image data.
For example, the low quality input image can be simply obtained via inverse Fourier Transform (FT) of undersampled data (e.g., k-space data)); and adding the second image patch and the second sensor data patch to the training dataset before training the neural network using the training dataset (Zhang ¶44 the image used for training (e.g., low quality and high quality images) may be divided into patches; ¶46, quoted in the rejection of claim 1 above). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 9, the combination of Zhu and Zhang discloses the method of claim 1, further comprising: before applying the sensor data acquired from the second patient as the input to the neural network, splitting the sensor data acquired from the second patient into a third sensor data patch and a fourth sensor data patch (Zhang ¶44 The high quality image and the lower quality image can be divided into a set of patches; e.g.
a set of patches would obviously include third and fourth patches); wherein applying the sensor data acquired from the second patient as the input to the neural network comprises first applying the third sensor data patch as the input to the neural network and subsequently applying the fourth sensor data patch as the input to the neural network (Zhang ¶44 the image used for training (e.g., low quality and high quality images) may be divided into patches; ¶46, quoted in the rejection of claim 1 above). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 10 (drawn to a CRM): The rejection/proposed combination of Zhu and Zhang, explained in the rejection of method claim 1, renders obvious the steps of the computer-readable medium of claim 10 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 10. See further Zhang ¶69 "one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor".
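For context, the patch-wise inference recited in claim 9 (splitting acquired sensor data into patches and applying the network to each patch in turn) can be sketched as follows. This is an illustrative sketch only, not code from either reference; the patch size is an arbitrary assumption, and the stand-in "network" is an inverse FFT used solely to make the example runnable.

```python
import numpy as np

def reconstruct_patchwise(sensor_data, net, patch=32):
    """Split acquired sensor data into non-overlapping patches and apply
    a trained sensor-to-image mapping `net` to each patch sequentially,
    assembling the patch outputs into one image."""
    out = np.zeros(sensor_data.shape)
    for i in range(0, sensor_data.shape[0] - patch + 1, patch):
        for j in range(0, sensor_data.shape[1] - patch + 1, patch):
            k_patch = sensor_data[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = net(k_patch)  # one patch at a time
    return out

# Stand-in network: magnitude of an inverse FFT (assumption for the sketch)
stand_in_net = lambda k: np.abs(np.fft.ifft2(k))
img = reconstruct_patchwise(np.fft.fft2(np.ones((64, 64))), stand_in_net)
print(img.shape)  # (64, 64)
```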
Regarding claims 12 and 14 (drawn to a system): The rejection/proposed combination of Zhu and Zhang, explained in the rejection of method claims 1 and 3, renders obvious the steps of the system of claims 12 and 14 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 1 and 3 are equally applicable to claims 12 and 14. See further Zhang Fig. 3 display 335, sensor (¶52), processors (¶75), and CRM (¶69).

Regarding claim 15, the combination of Zhu and Zhang teaches the system of claim 12, wherein: the first sensor data patch comprises complex-valued magnetic resonance k-space data (Zhang ¶46 the training dataset may be 3D volume image data comprising multiple axial slices, and each slice may be complex-valued images each may include two channels for real and imaginary components; ¶65 The image reconstruction module may take one or more k-space images or lower quality MR image data as input and output MR image data with improved quality); and the neural network comprises a data-driven, manifold learning neural network (Zhu pg. 1 abstract, quoted in the rejection of claim 7 above). The motivation to combine the references is discussed above in the rejection of claim 12.

Claims 2, 6, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhu and Zhang as applied to claims 1, 10, and 12 above, and further in view of Chen et al (US 20230135995).
Regarding claim 2, the combination of Zhu and Zhang discloses the method of claim 1 but fails to teach, whereas Chen teaches, adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch (¶29 Selected slices may be extracted from the real MRI data with sufficient slice distance and modulations of the slices may be emulated by adding phase modulation terms to each slice; ¶30 the under-sampled k-space data may be converted into an MRI image, for example, via Fourier transforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch from Chen into the method for medical imaging as disclosed by the combination of Zhu and Zhang. The motivation for doing so is to reconstruct magnetic resonance (MR) images based on multi-slice, under-sampled MRI data (e.g., k-space data).

Regarding claim 6, the combination of Zhu, Zhang, and Chen discloses the method of claim 2, wherein the first sensor data patch comprises complex-valued magnetic resonance k-space data (Zhang ¶46 the training dataset may be 3D volume image data comprising multiple axial slices, and each slice may be complex-valued images each may include two channels for real and imaginary components; ¶65 The image reconstruction module may take one or more k-space images or lower quality MR image data as input and output MR image data with improved quality). The motivation to combine the references is discussed above in the rejection of claim 1.
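For context, the "adding synthetic phase" step at issue in claim 2 (emulating phase modulation of a real-valued image patch before Fourier-transforming it into sensor data, cf. Chen ¶29) can be sketched as follows. This is an illustrative sketch only, not code from Chen; the linear phase-ramp form and its amplitude are arbitrary assumptions.

```python
import numpy as np

def add_synthetic_phase(img_patch, max_cycles=1.0, rng=None):
    """Multiply a real-valued image patch by a random unit-magnitude
    linear phase ramp (a simple stand-in for a phase modulation term),
    then Fourier-transform the result into a sensor-domain patch."""
    rng = rng or np.random.default_rng(0)
    h, w = img_patch.shape
    fy, fx = rng.uniform(-max_cycles, max_cycles, size=2)
    yy, xx = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing="ij")
    phase = np.exp(2j * np.pi * (fy * yy + fx * xx))  # |phase| == 1 everywhere
    modulated = img_patch * phase                     # patch is now complex-valued
    return np.fft.fft2(modulated)                     # sensor-domain (k-space) patch

k = add_synthetic_phase(np.ones((8, 8)))
print(k.shape, np.iscomplexobj(k))  # (8, 8) True
```

Because the phase term has unit magnitude, the modulation changes the k-space distribution of the patch without changing its total energy.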
Regarding claim 11, the combination of Zhu and Zhang discloses the computer-readable medium of claim 10 (as interpreted above), the operations further comprising: resizing the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch (Zhang ¶44, quoted in the rejection of claim 3 above); and adding random noise to the first sensor data patch before creating the training dataset (Zhang ¶46 synthetic noise may be added to high quality images to create noisy images); wherein the first sensor data patch comprises complex-valued magnetic resonance k-space data (Zhang ¶46; ¶65, quoted in the rejection of claim 6 above) and the neural network comprises a data-driven, manifold learning neural network (Zhu pg. 1 abstract, quoted in the rejection of claim 7 above).
The combination of Zhu and Zhang fails to teach, whereas Chen teaches, adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch (¶29 Selected slices may be extracted from the real MRI data with sufficient slice distance and modulations of the slices may be emulated by adding phase modulation terms to each slice; ¶30 the under-sampled k-space data may be converted into an MRI image, for example, via Fourier transforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch from Chen into the method for medical imaging as disclosed by the combination of Zhu and Zhang. The motivation for doing so is to reconstruct magnetic resonance (MR) images based on multi-slice, under-sampled MRI data (e.g., k-space data).

Regarding claim 13, the combination of Zhu and Zhang teaches the system of claim 12, the operations further comprising: adding random noise to the first sensor data patch before creating the training dataset (Zhang ¶46 synthetic noise may be added to high quality images to create noisy images). The combination of Zhu and Zhang fails to teach, whereas Chen teaches, adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch (Chen ¶29; ¶30, quoted above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of adding synthetic phase to the first image patch before applying the Fourier transform to the first image patch to transform the first image patch into the first sensor data patch from Chen into the method for medical imaging as disclosed by the combination of Zhu and Zhang. The motivation for doing so is to reconstruct magnetic resonance (MR) images based on multi-slice, under-sampled MRI data (e.g., k-space data).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY whose telephone number is (571)272-7648. The examiner can normally be reached Monday-Friday, 9-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KEVIN KY/ Primary Examiner, Art Unit 2671