Prosecution Insights
Last updated: April 19, 2026
Application No. 18/907,439

MAGNETIC RESONANCE IMAGING APPARATUS AND IMAGE PROCESSING METHOD

Final Rejection (§102, §103)
Filed: Oct 04, 2024
Examiner: ROBINSON, NICHOLAS A
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 64 granted / 131 resolved; -21.1% vs TC avg)
Interview Lift: +54.9% (strong; allowance rate for resolved cases with an interview vs. without)
Typical Timeline: 3y 6m average prosecution; 51 applications currently pending
Career History: 182 total applications across all art units

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 30.6% (-9.4% vs TC avg)
Tech Center average is an estimate • Based on career data from 131 resolved cases

Office Action

DETAILED ACTION

This Office action is responsive to communications filed on 01/13/2026. Claims 1-10 and 12-13 have been amended. Presently, Claims 1-13 remain pending and are hereinafter examined on the merits.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 10/06/2023. It is noted, however, that applicant has not filed a certified copy of the JP2023-174423 application as required by 37 CFR 1.55. Further note: should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. See 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Response to Arguments

Previous objections to the Abstract are withdrawn in view of the amendments filed on 01/13/2026. Previous interpretations under 35 USC § 112(f) are not withdrawn in view of the amendments filed on 01/13/2026. Previous rejections under 35 USC § 112(b) are withdrawn in view of the amendments filed on 01/13/2026. Claim objections directed to Claim 1 and Claim 9 are withdrawn in view of the amendments filed on 01/13/2026. Previous claim objections to Claim 10 and Claim 13 are NOT withdrawn in view of the amendments filed on 01/13/2026. Applicant's remarks merely assert traversal and request withdrawal of the objections, but fail to substantively address each outstanding deficiency identified in the Office Action. Accordingly, the following claim objections are maintained: "Claim 10: "convolutional neural networks (CNNs)"-line 7. Appropriate correction is required. Claim 13: "convolutional neural network (CNN)"-line 7.
Appropriate correction is required." Previous rejections under 35 USC § 101 are withdrawn in view of the amendments filed on 01/13/2026.

The Applicant's arguments with respect to the rejections under 35 USC § 102 and 35 USC § 103 have been fully considered, but are not persuasive. The Applicant's arguments rely on asserted distinctions between training data and input/processing data in Zhang. However, this distinction is not supported by the claims as recited. The claims do not recite any limitations restricting the resizing, truncation, zero-padding, or k-space manipulation steps to input data as opposed to training data. The claim language is functional and structural, reciting reconstruction, CNN ringing correction, a Fourier transform, k-space replacement, and an inverse Fourier transform. The claims lack limitations that exclude learning phases or limit the disclosed operations to specific stages/steps. Accordingly, the Applicant's arguments directed to learning data versus input data are not commensurate in scope with the claims.

The Applicant further argues that Zhang merely relates to a sampling pattern for filling the k-space and not to the size of the k-space. This argument mischaracterizes Zhang's teachings. Zhang explicitly attributes Gibbs ringing to insufficient sampling of high-frequency data in the k-space domain and repeatedly discloses truncation of high-frequency components in the k-space domain. Truncation of high-frequency components is a reduction of the k-space (i.e., a size limitation of the measured k-space data relative to the full reconstruction domain). Thus, Zhang does not merely address sampling patterns, but explicitly teaches reduced k-space coverage and truncated k-space representations, which directly correspond to a measurement matrix of the measured data that is smaller than the reconstruction matrix. The Applicant's position that Zhang does not disclose "reconstructing the measurement data at a reconstruction matrix size [...]
where a size of the measurement matrix is less than the reconstruction matrix size" is not persuasive. Zhang teaches (i) truncation of k-space data, (ii) zero-padding interpolation to enlarge the image matrix size, and (iii) reconstruction of an enlarged image with ringing mitigation.

Zhang specifically discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix, i.e., the measured data, is smaller than the desired image size due to truncation and insufficient sampling: "Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data", pg. 2133 [Abstract]; "Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition."-pg. 2133 [Introduction]. Zhang further emphasizes "truncation" of data in the k-space domain: "Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain."-pg. 2134 [2.1.2 Training]. Zhang explicitly mentions the use of "zero-padding interpolation", which increases the image matrix to create enlarged images. Hence, this process does indeed correspond to reconstructing measurement data at a size larger than the collected data: "Finally, the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.", pg. 2143 [4. Discussion].
This process effectively reconstructs the measurement data at a reconstruction size (i.e., the enlarged size) that is indeed larger than the measurement matrix size (i.e., the measured data). Although Zhang teaches "an original size of 320 × 320 and then resized to 256 × 256 images by cropping and zero-padding.", pg. 2135 [2.1.1 Dataset], this is distinct from the actual ringing correction process and from the relationship between the measurement and reconstruction matrices. The 320 x 320 to 256 x 256 step is for training preparation [emphasis added]. Zhang describes extracting images with an original size of 320 x 320 and resizing them to 256 x 256 by "cropping and zero-padding". However, this step is strictly to generate a standardized set of "artifact-free images for training" the neural network. These 256 x 256 images effectively serve as ground truth or reference images, not the measured data with artifacts.

Zhang teaches the machine learning method: "the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.", pg. 2143 [4. Discussion]. The estimation of the artifact uses a convolutional neural network: "This study aims to develop a CNN-based method for Gibbs-ringing artifact reduction in MR images. We use a CNN that learns an end-to-end mapping to extract artifact information from an original image. The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data. For brevity, we name the proposed method as Gibbs-ringing artifact reduction using CNN (GRA-CNN). The performance of GRA-CNN was evaluated on in vivo brain MRI data. A preliminary report of this study was published in a conference proceeding"- pg.
2134 [Introduction]; "The ringing artifact was extracted from the original image using a deep convolutional neural network and then subtracted from the original image to obtain the artifact-free image.", pg. 2133 [Abstract]. Hence, the Applicant's attempt to limit the cropping/zero-padding disclosure solely to training data does not negate Zhang's disclosures directed to reconstruction and enlarged image matrices.

Additionally, the Applicant's argument that Zhang is silent about the matrix sizes used in the experiments is not relevant to the claim limitations. The claims require a relative relationship between matrix sizes and functional operations, without reciting any specific numeric values. As such, Zhang's disclosures of truncation, zero-padding, enlarged image reconstruction, and k-space replacement satisfy these relational and functional limitations. Importantly, the Applicant's argument that "Zhang does not disclose any such processing of input data"-pg. 13 is contradicted by Zhang's explicit disclosure of operational processing steps: CNN-based artifact estimation, subtraction from the original image, k-space replacement, and generation of a final image. These are not merely experimental descriptions; they are the core operational steps of the disclosed method of Zhang. For at least the reasons above, Zhang discloses the claim elements and renders obvious the amended limitations relied upon by the Applicant. Accordingly, the rejections are maintained.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f): (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 
112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: the nonce term “unit” in the phrase “imaging unit that collects…”, used in claims 1 and 9 for collecting measurement data, invokes 35 USC 112(f). The term “unit” is a non-structural generic placeholder that does not include any specific structure for performing the accompanying functions. See MPEP 2181.I.A: The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6: "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co., v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Massachusetts Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886–87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f). Please note that for the purposes of this examination the phrase “imaging unit” is being interpreted to include generic MRI apparatus as described in ¶0039, ‘Since the configuration of the imaging unit 10 is the same as a configuration of a normal MRI apparatus’ in the specification as performing the claimed function, and equivalents thereof. Claim Objections The following claims are objected to because of the following informalities and should recite: Claim 10: “convolutional neural networks (CNNs)”-line 7. Appropriate correction is required. Claim 13: “convolutional neural network (CNN)”-line 7. Appropriate correction is required. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1, 3, 5-8 and 10 are rejected under 35 U.S.C. 
102(a)(1) as being anticipated by Zhang Q et al. (MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks. Magn Reson Med. 2019) (a copy of which was made of record on 10/04/2024).

Claim 1: Zhang discloses, A magnetic resonance imaging apparatus comprising: (Title, ‘MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks’)

an imaging unit that collects measurement data consisting of nuclear magnetic resonance signals, the measurement data being collected using a measurement matrix; and

-Zhang focuses on reducing Gibbs-ringing artifact in MRI images, pg. 2133-section (purpose, theory and methods, results, conclusion). This artifact is fundamentally caused by “insufficient sampling of high-frequency data in the k-space domain” during MRI data acquisition, pg. 2133-Introduction right col. While the CNN’s input is an image, pg. 2134-Methods right col, ‘artifact‐free image from image X. The GRA‐CNN algorithm uses a CNN to learn an end‐to‐end mapping F, which inputs an original image with ringing artifact and outputs an artifact map. For simplicity, we call X the original image and output F(X) the artifact map’, the method’s final step involves enforcing data fidelity by using “measured data” in the k-space domain, pg. 2134-Introduction left col ¶3, ‘The artifact‐free image is then obtained by subtracting the CNN‐estimated artifact from the original image and replacing the low‐frequency k‐space part with the measured data.’, pg. 2135-Method left col ¶2, ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’; see also pg. 2133-section (purpose, theory and methods, results, conclusion).
This training data also involves creating images with ringing artifact by “truncating high-frequency components in the k-space domain”, pg. 2134-1st para of 2.1.2 Training right col, which implies the manipulation of the acquired k-space data.

-Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix, i.e., the measured data, is smaller than the desired image size due to truncation and insufficient sampling, “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract], “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg. 2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain, “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training].

one or more processors configured to perform a function of reconstructing the measurement data at a reconstruction matrix size of a reconstruction matrix and a function of correcting ringing which occurs in a reconstructed image in a case in which a size of the measurement matrix is less than the reconstruction matrix size of the reconstruction matrix to generate a ringing-corrected image,

-pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor.
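The size relationship at issue, a measurement matrix smaller than the reconstruction matrix with the missing high-frequency samples producing Gibbs ringing, can be illustrated with a minimal numpy sketch. This is illustrative only (not code from Zhang or the application), and the function and variable names are hypothetical:

```python
import numpy as np

def zero_pad_reconstruct(kspace_measured, recon_shape):
    """Embed a small, centred measurement matrix of k-space samples in a
    larger zero-filled reconstruction matrix, then inverse-FFT to the
    image domain. The truncated high frequencies cause Gibbs ringing."""
    ky, kx = kspace_measured.shape
    ry, rx = recon_shape
    padded = np.zeros(recon_shape, dtype=complex)
    y0, x0 = (ry - ky) // 2, (rx - kx) // 2
    padded[y0:y0 + ky, x0:x0 + kx] = kspace_measured
    return np.fft.ifft2(np.fft.ifftshift(padded))

# A sharp-edged phantom "measured" on a 64 x 64 matrix ...
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
measured = np.fft.fftshift(np.fft.fft2(phantom))   # centred measured k-space

# ... reconstructed at 128 x 128: measurement matrix < reconstruction matrix.
enlarged = zero_pad_reconstruct(measured, (128, 128))
```

Intensity oscillations near the square's edges in `enlarged` are the truncation (Gibbs) artifact; they arise solely because the measured matrix is smaller than the reconstruction matrix.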
-Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix, i.e., the measured data, is smaller than the desired image size due to truncation and insufficient sampling, “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract], “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg. 2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain, “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training].

-Zhang explicitly mentions the use of “zero-padding interpolation”, which increases the image matrix to create enlarged images. Hence, this process does indeed correspond to reconstructing measurement data at a size larger than the collected data, “Finally, the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. This process effectively reconstructs the measurement data at a reconstruction size (i.e., the enlarged size) that is indeed larger than the measurement matrix size (i.e., the measured data).

-Although Zhang teaches “an original size of 320 × 320 and then resized to 256 × 256 images by cropping and zero-padding.”, pg. 2135 [2.1.1 Dataset].
This is distinct from the actual ringing correction process and from the relationship between the measurement and reconstruction matrices. The 320 x 320 to 256 x 256 step is for training preparation [emphasis added]. Zhang describes extracting images with an original size of 320 x 320 and resizing them to 256 x 256 by “cropping and zero-padding”. However, this step is strictly to generate a standardized set of “artifact-free images for training” the neural network. These 256 x 256 images effectively serve as ground truth or reference images, not the measured data with artifacts.

-Zhang teaches the machine learning method, “the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. The estimation of the artifact uses a convolutional neural network, “This study aims to develop a CNN-based method for Gibbs-ringing artifact reduction in MR images. We use a CNN that learns an end-to-end mapping to extract artifact information from an original image. The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data. For brevity, we name the proposed method as Gibbs-ringing artifact reduction using CNN (GRA-CNN). The performance of GRA-CNN was evaluated on in vivo brain MRI data. A preliminary report of this study was published in a conference proceeding”- pg. 2134 [Introduction], “The ringing artifact was extracted from the original image using a deep convolutional neural network and then subtracted from the original image to obtain the artifact-free image.”, pg. 2133 [Abstract].
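The operational chain quoted above (subtract the CNN-estimated artifact, Fourier-transform the result, restore the measured low-frequency k-space data, inverse-transform) can be sketched in numpy as follows. The CNN itself is abstracted into an `artifact_map` argument, and the function name and `low_frac` parameter are hypothetical, not from Zhang:

```python
import numpy as np

def data_fidelity_correct(original_img, artifact_map, kspace_measured_low, low_frac=0.25):
    """Sketch of the post-processing chain described for GRA-CNN:
    subtract the (CNN-estimated) artifact map, Fourier-transform the
    result, restore the measured low-frequency k-space block, and
    inverse-transform to obtain the final image."""
    ringing_removed = original_img - artifact_map        # CNN output stands in for artifact_map
    k = np.fft.fftshift(np.fft.fft2(ringing_removed))    # forward FT to centred k-space
    ny, nx = k.shape
    hy, hx = int(ny * low_frac / 2), int(nx * low_frac / 2)
    cy, cx = ny // 2, nx // 2
    # replace the central (low-frequency) block with the measured data
    k[cy - hy:cy + hy, cx - hx:cx + hx] = kspace_measured_low
    return np.fft.ifft2(np.fft.ifftshift(k))             # inverse FT -> final image
```

Passing a zero artifact map together with the true central k-space block returns the original image unchanged, which is the data-fidelity property the replacement step is meant to enforce.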
wherein the one or more processors are configured to perform the function of reconstructing the measurement data using a Convolutional Neural Network (CNN) trained to generate an image in which ringing of an input image is corrected, and

-The GRA-CNN algorithm utilizes a deep convolutional neural network (CNN). This network is a “4-layered convolutional network” trained to learn an end-to-end mapping, pg. 2134-Introduction left col ¶3, pg. 2134-2.1.1 | Formulation ¶1. The CNN is trained to extract artifact information (an artifact map) from an original image with ringing artifact, pg. 2133-section (purpose, theory and methods, results, conclusion), pg. 2134-Methods right col. This artifact map is subsequently used to correct the ringing in the input image, “The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.”-pg. 2134-Introduction left col ¶3; see also ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’, pg. 2135-2.1.2 | Training left col. ¶1-2.

and the one or more processors are configured to generate the ringing-corrected image by generating composite data of the ringing-corrected image by replacing a part of k-space data of an image obtained by performing a Fourier transform on an output image of the CNN with a portion of the measurement data before input to the CNN and

-After the CNN-estimated artifact is subtracted from the original image to produce an intermediate “ringing-removed image”, the method then “replace[s] the low-frequency k-space part with the measured data”, pg.
2134-Introduction left col ¶3, or “replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col. ¶1-2. This replacement implicitly requires that a Fourier transform is performed, in the context of the method, on the intermediate artifact-reduced image to obtain its k-space representation before the low-frequency components are swapped with measured k-space data.

generating the ringing-corrected image by performing an inverse Fourier transform on the composite data.

-“replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col. ¶1-2. This process would require an inverse transform to convert the modified k-space data (composite data) into a final image.

Claim 3: Zhang discloses all the elements above in claim 1, Zhang discloses, wherein the CNN includes a trained model that has been trained by using, as training data, a correct answer image in which ringing has not occurred and an image obtained by performing an inverse Fourier transform on k-space data in which a size of a measurement matrix region of a training image is smaller than a matrix size of the correct answer image.

-Zhang teaches “17,532 T2W brain images were obtained as the artifact-free images for training”-pg. 2135-Experiments | 2.2.1 Datasets right col ¶1. These images serve as the “correct answer images” because they are without noticeable ringing artifacts, pg. 2134-2.1.2 | Training right col ¶1. Zhang also refers to the “correct answer image” as the “reference image”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1.

-From these artifact-free images (i.e., the correct answer images), “corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain” - pg.
2134-2.1.2 | Training right col ¶1. Full quote: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.” This process directly creates input images for the CNN. More specifically, the “artifact image was synthesized by discarding a certain percentage of high-frequency data after performing Fourier transform on the reference image.”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1. Discarding high-frequency data in k-space is precisely what leads to Gibbs-ringing artifact due to “insufficient sampling of high-frequency data in the k-space domain”-pg. 2133-Introduction right col. The method involves truncating high-frequency components in the k-space domain. This results in a measurement matrix region that is effectively smaller than the full matrix size of the correct answer image. Zhang further emphasizes “truncation” of data in the k-space domain, “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain. The reference artifact map is then obtained as the difference between images from data with and without truncation.”-pg. 2134 [2.1.2 Training]. At this step, generating an image from modified k-space data would require an inverse Fourier transform to return to the image domain. The CNN includes a trained model that has learned to minimize the loss between the network output and the difference between the images from the data with and without truncation.

Claim 5: Zhang discloses all the elements above in claim 1, Zhang discloses, wherein the CNN includes a trained model that has been trained to obtain a ringing correction effect for an input image in which a ratio between the size of the measurement matrix and the reconstruction matrix size is a predetermined value.
-Zhang teaches a GRA-CNN model was trained and validated using data with a single undersampling ratio of 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “One was trained and validated using data with a single undersampling ratio of 50%.”. This means the high-frequency k-space data was truncated to a predetermined level, affecting the “measurement matrix size” relative to the full reconstruction size.

-Another model, the GRA-CNN-m, was trained and validated using mixed data with varying undersampling ratios from 30% to 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “The other was trained and validated using mixed data with varying undersampling ratios from 30% to 50%.”. While varying, these were specific predetermined ratios (30%, 40%, 50%) used in the training data (i.e., not a continuously changing ratio) during a single training instance, FIG. 8, pg. 2139 3.5 | Brain images with varying undersampling levels.

-The ringing artifact in the input images is caused by “discarding a certain percentage of high-frequency data after performing Fourier transform on the reference image.”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1, which directly relates to the measurement matrix size being smaller than the full reconstruction matrix size.

Claim 6: Zhang discloses all the elements above in claim 5, Zhang discloses, wherein the CNN includes a plurality of CNNs that have been trained to obtain a ringing correction effect for input images having different matrix ratios, and

-Zhang teaches a GRA-CNN model was trained and validated using data with a single undersampling ratio of 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “One was trained and validated using data with a single undersampling ratio of 50%.”. This means the high-frequency k-space data was truncated to a predetermined level, affecting the “measurement matrix size” relative to the full reconstruction size.
-Another model, the GRA-CNN-m, was trained and validated using mixed data with varying undersampling ratios from 30% to 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “The other was trained and validated using mixed data with varying undersampling ratios from 30% to 50%.”. While varying, these were specific predetermined ratios (30%, 40%, 50%) used in the training data (i.e., not a continuously changing ratio) during a single training instance, FIG. 8, pg. 2139 3.5 | Brain images with varying undersampling levels.

the one or more processors are configured to select at least one of the plurality of CNNs to perform the function of correcting the ringing.

-pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor.

-For the test cases involving varying undersampling levels (30%, 40%, 50%), the GRA-CNN-m model was applied, which demonstrates consistent ringing removal across these levels; however, the model trained at a single identical undersampling level achieved slightly higher PSNR: “the real undersampling level, i.e., the appearance of Gibbs artifact in MR images, may not be accurately obtained. To address this problem, we trained and validated a GRA-CNN-m model by using mixed data with multiple undersampling levels. This model could consistently remove ringing artifact in MR images under varying sampling levels from 30% to 50%. However, it resulted in a slight PSNR reduction compared with the model trained using the data of single identical undersampling level.”-pg. 2142 | Discussion left col ¶2. This indicates a decision process, where depending on the undersampling level of the image, an appropriate, pre-trained CNN model is chosen for correction.
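The decision process described above, choosing between a single-ratio model and the mixed-ratio GRA-CNN-m depending on how precisely the undersampling level is known, could be dispatched as in this sketch; the registry, model names, and tolerance are hypothetical, not from Zhang:

```python
# Hypothetical model registry: names and ratios are illustrative only.
SINGLE_RATIO_MODELS = {0.5: "gra_cnn_50"}
MIXED_MODEL = "gra_cnn_m"   # stand-in for a model trained on 30-50% levels

def select_model(undersampling_ratio, tolerance=0.01):
    """Pick a single-ratio model when the undersampling level matches one
    the registry was trained at; otherwise fall back to the mixed-level
    model, trading a slight PSNR loss for robustness."""
    for ratio, model in SINGLE_RATIO_MODELS.items():
        if abs(undersampling_ratio - ratio) < tolerance:
            return model
    return MIXED_MODEL
```

The design point is simply that the choice is made per input, based on the (estimated) undersampling level, before any correction runs.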
Claim 7: Zhang discloses all the elements above in claim 1. Zhang further discloses, wherein the CNN includes a trained model that has been trained to obtain a ringing correction effect for at least a one-dimensional direction of the input image. -Zhang teaches that the performance of the GRA-CNN was evaluated on both “1- and 2-dimensional Gibbs-ringing artifact.”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1. The training process for GRA-CNN specifically included “1-dimensional truncated data”-pg. 2142 | Discussion right col ¶2. Images with ringing artifact were generated in that the “artifact image was synthesized by discarding a certain percentage of high-frequency data after performing Fourier transform on the reference image.”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1, and examples are shown using “50% truncation in 1 dimension (the top 2 rows) and in 2 dimensions (the bottom 2 rows).”, FIG. 3. Claim 8: Zhang discloses all the elements above in claim 1. Zhang further discloses, wherein the CNN includes a plurality of CNNs that have been trained to obtain a ringing correction effect for input images having different sampling patterns, for each sampling pattern of the measurement data, and -Zhang teaches that a GRA-CNN model was trained and validated using data with a single undersampling ratio of 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “One was trained and validated using data with a single undersampling ratio of 50%.”. This means the high-frequency k-space data was truncated to a predetermined level, affecting the “measurement matrix size” relative to the full reconstruction size. -Another model, the GRA-CNN-m, was trained and validated using mixed data with varying undersampling ratios from 30% to 50%, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1: “The other was trained and validated using mixed data with varying undersampling ratios from 30% to 50%.”.
While varying, these were specific predetermined ratios (30%, 40%, 50%) used in the training data (i.e., not a continuously changing ratio) during a single training instance, FIG. 8, pg. 2139 3.5 | Brain images with varying undersampling levels. the one or more processors are configured to select at least one CNN of the plurality of CNNs to perform the function of correcting the ringing. -pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor. -For the test cases involving varying undersampling levels (30%, 40%, 50%), the GRA-CNN-m model was applied, which demonstrates removing ringing consistently across these levels; however, the mixed model showed a slight PSNR reduction compared with a model trained at a single identical undersampling level: “the real undersampling level, i.e., the appearance of Gibbs artifact in MR images, may not be accurately obtained. To address this problem, we trained and validated a GRA-CNN-m model by using mixed data with multiple undersampling levels. This model could consistently remove ringing artifact in MR images under varying sampling levels from 30% to 50%. However, it resulted in a slight PSNR reduction compared with the model trained using the data of single identical undersampling level.”-pg. 2142 | Discussion left col ¶2. This indicates a decision process, in which, depending on the undersampling level of the image, an appropriate pre-trained CNN model is chosen for correction. Claim 10: Zhang discloses, An image processing method of, (Title, ‘MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks’) -Zhang focuses on reducing Gibbs-ringing artifact in MRI images, pg. 2133-section (purpose, theory and methods, results, conclusion).
This artifact is fundamentally caused by “insufficient sampling of high-frequency data in the k-space domain” during MRI data acquisition, pg. 2133-Introduction right col. While the CNN’s input is an image, pg. 2134-Methods right col, ‘artifact‐free image from image X. The GRA‐CNN algorithm uses a CNN to learn an end‐to‐end mapping F, which inputs an original image with ringing artifact and outputs an artifact map. For simplicity, we call X the original image and output F(X) the artifact map’, the method’s final step involves enforcing data fidelity by using “measured data” in the k-space domain, pg. 2134-Introduction left col ¶3, ‘The artifact‐free image is then obtained by subtracting the CNN‐estimated artifact from the original image and replacing the low‐frequency k‐space part with the measured data.’, pg. 2135-Method left col ¶2, ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’; see also pg. 2133-section (purpose, theory and methods, results, conclusion). The training data also involves creating images with ringing artifact by “truncating high-frequency components in the k-space domain”, pg. 2134-1st para of 2.1.2 Training right col, which implies manipulation of the acquired k-space data. correcting ringing in measurement data consisting of nuclear magnetic resonance signals collected by a magnetic resonance imaging apparatus, the image processing method comprising: -pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor.
-Zhang discloses that images were “resized to 256 × 256 images by cropping and zero-padding” from “an original size of 320 × 320”, pg. 2135-2.2 Experiments/2.2.1 Datasets left col. ¶1. Additionally, the GRA-CNN method “can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143-Discussion left col. ¶1. This indicates the handling of different image matrix sizes. a step of applying a CNN trained to generate an output image in which the ringing of an input image is corrected, to an input of a measurement image of the measurement data collected -Zhang teaches that the GRA-CNN algorithm utilizes a deep convolutional neural network (CNN). This network is a “4-layered convolutional network” trained to learn an end-to-end mapping, pg. 2134-Introduction left col ¶3, pg. 2134-2.1.1 | Formulation ¶1. The CNN is trained to extract artifact information (an artifact map) from an original image with ringing artifact, pg. 2133-section (purpose, theory and methods, results, conclusion), pg. 2134-Methods right col. This artifact map is then used to correct the ringing in the input image: “The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.”-pg. 2134-Introduction left col ¶3; see also ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’, pg. 2135-2.1.2 | Training left col. ¶1-2.
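For illustration, the quoted correction steps — subtracting the CNN-estimated artifact map from the original image, then restoring data fidelity by replacing the low-frequency k-space part of the result with the measured data — can be sketched in NumPy as follows. This is an editor's sketch under stated assumptions: the artifact map stands in for the CNN output, and the square low-frequency window size is illustrative, not taken from Zhang.

```python
import numpy as np

def apply_data_fidelity(original, artifact_map, measured_kspace, low_freq_half_width):
    """Subtract the estimated artifact from the original image, then
    replace the low-frequency k-space part of the result with measured
    data (cf. the quoted GRA-CNN data-fidelity step)."""
    ringing_removed = original - artifact_map
    # Fourier transform the intermediate image to reach k-space (centered).
    k = np.fft.fftshift(np.fft.fft2(ringing_removed))
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    h = low_freq_half_width
    # Swap in the measured low-frequency block; keep corrected high frequencies.
    k[cy - h:cy + h, cx - h:cx + h] = measured_kspace[cy - h:cy + h, cx - h:cx + h]
    # Inverse transform the composite k-space back to the image domain.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

With a zero artifact map and the measured k-space of the input itself, the function returns the input unchanged, which is a simple consistency check on the transform round trip.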
using a measurement matrix that has been reconstructed as a reconstructed image at a reconstruction matrix size of a reconstruction matrix in a case in which a size of the measurement matrix is less than the reconstruction matrix size of the reconstruction matrix; -Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix (i.e., the measured data) is smaller than the desired image size due to truncation and insufficient sampling: “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract]; “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg. 2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training]. -Zhang explicitly mentions the use of “zero-padded interpolation”, which increases the image matrix to create enlarged images. Hence, this process does indeed correspond to reconstructing measurement data at a size larger than the collected data: “Finally, the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion].
This process effectively reconstructs the measurement data at a reconstruction size (i.e., the enlarged size) that is indeed larger than the measurement matrix size (i.e., the measured data). -Although Zhang teaches “an original size of 320 × 320 and then resized to 256 × 256 images by cropping and zero-padding.”, pg. 2135 [2.1.1 Dataset], this is distinct from the actual ringing correction process or the relationship between the measurement and reconstruction matrices. The 320 × 320 to 256 × 256 step is for training preparation [emphasis added]. Zhang describes extracting images with an original size of 320 × 320 and resizing them to 256 × 256 by “cropping and zero-padding”. However, this step is strictly to generate a standardized set of “artifact-free images for training” the neural network. These 256 × 256 images effectively serve as ground-truth or reference images, not the measured data with artifacts. -Zhang teaches the machine learning method: “the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. The estimating of the artifact uses a convolutional neural network: “This study aims to develop a CNN-based method for Gibbs-ringing artifact reduction in MR images. We use a CNN that learns an end-to-end mapping to extract artifact information from an original image. The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data. For brevity, we name the proposed method as Gibbs-ringing artifact reduction using CNN (GRA-CNN). The performance of GRA-CNN was evaluated on in vivo brain MRI data. A preliminary report of this study was published in a conference proceeding”- pg.
2134 [Introduction], “The ringing artifact was extracted from the original image using a deep convolutional neural network and then subtracted from the original image to obtain the artifact-free image.”, pg. 2133 [Abstract]. a step of generating k-space data by performing a Fourier transform on the output image; a step of generating composite k-space data by replacing a part of the k-space data of the output image with a portion of the measurement data before input to the CNN; -After the CNN-estimated artifact is subtracted from the original image to produce an intermediate “ringing-removed image”, the method then “replace[s] the low-frequency k-space part with the measured data”, pg. 2134-Introduction left col ¶3, or “replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col. ¶1-2. This replacement implicitly requires that a Fourier transform be performed on the intermediate artifact-reduced image to obtain its k-space representation before the low-frequency components are swapped with measured k-space data. a step of performing an inverse Fourier transform on the composite k-space data to generate a ringing-corrected image in which the ringing in the reconstructed image is corrected. -“replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col. ¶1-2. This process would require an inverse Fourier transform to convert the modified k-space data (the composite data) into a final image. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang Q et al (MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks.
Magn Reson Med. 2019), as applied to claims 1 and 10, and further in view of Nielsen et al (US 2024/0074671 A1). Claim 2: Zhang discloses all the elements above in claim 1. Zhang fails to disclose: wherein the one or more processors are configured to repeat the function of correcting the ringing via the CNN and generating the composite data at least twice. However, Nielsen, in the context of removing Fourier-induced Gibbs-ringing artifacts from MRI images, discloses, wherein the one or more processors are configured to repeat the function of correcting the ringing via the CNN and generating the composite data at least twice. (¶0234, ‘The correcting unit may be further configured to iteratively improve the accuracy of the estimated local amplitudes and the quality of the ringing correction by re-estimating the local amplitudes of ringing artifacts and performing a ringing correction twice or multiple times.’) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the one or more processors of modified Zhang to be configured to repeat the function of correcting the ringing via the CNN and generating the composite data at least twice, as taught by Nielsen. The motivation to do this is that it yields predictable results, such as improving the quality of the ringing correction, ¶0234 of Nielsen. Claim 11: Zhang discloses all the elements above in claim 10. Zhang fails to disclose: wherein the step of applying the CNN, the step of generating the k-space data, and the step of generating the composite k-space data are repeated two or more times. However, Nielsen, in the context of removing Fourier-induced Gibbs-ringing artifacts from MRI images, discloses, wherein the step of applying the CNN, the step of generating the k-space data, and the step of generating the composite k-space data are repeated two or more times.
(¶0234, ‘The correcting unit may be further configured to iteratively improve the accuracy of the estimated local amplitudes and the quality of the ringing correction by re-estimating the local amplitudes of ringing artifacts and performing a ringing correction twice or multiple times.’) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the one or more processors of modified Zhang to be configured to repeat the ringing correction processing via the CNN and the combining processing for generating the composite data at least twice, as taught by Nielsen. The motivation to do this is that it yields predictable results, such as improving the quality of the ringing correction, ¶0234 of Nielsen. Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang Q et al (MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks. Magn Reson Med. 2019), as applied to claims 3 and 10, and further in view of Lazarus et al (US 2020/0058106 A1). Claim 4: Zhang discloses all the elements above in claim 3. Zhang fails to disclose: wherein the trained model has been trained by increasing a weight of an error in the measurement matrix region of the training image to be larger than a weight of an error in a region other than the measurement matrix region of the training image in a loss evaluation function during training. However, Lazarus, in the context of removing artifacts in MR data using a CNN, discloses, wherein the trained model has been trained by increasing a weight of an error in the measurement matrix region of the training image to be larger than a weight of an error in a region other than the measurement matrix region of the training image in a loss evaluation function during training.
-L2 weighted loss between output and target data is one of the loss functions that may be employed for training the CNN, ¶0041, ‘generating an MR image from input MR data at least in part by using a neural network model (e.g., a model comprising one or more convolutional layers) to suppress at least one artefact in the input MR data’; ¶0125-0132, ‘In some embodiments, one or a linear combination of multiple loss functions may be employed to train the neural network models described herein: [0126] L2 loss between output and target data [0127] L1 loss between output and target data [0128] L2 weighted loss between output and target data. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (the farther from the center of k-space), the higher the weight. Using such weights causes the resulting model to keep the high spatial frequencies which are noisier than the low frequencies [0129] L1 weighted regularization on the output. A sparse prior may be enforce on the output of the neural network by using the l.sub.1 norm, optionally after weighting. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (far from the center of k-space), the smaller the weight. This encourages sparsity. [0130] Generative Adversarial Nets loss [0131] Structured similarity index loss [0132] Any of the other loss functions described herein including in connection with FIGS. 1A-1E and 2A-2B.’ -The k-space coordinates used for applying these weights amount to the measurement matrix region. K-space is the spatial frequency domain in which the MR signals are acquired, which represents the raw measurements. Therefore, increasing the weight of errors in regions of higher spatial frequency (i.e., farther from the center of k-space) during training differentially weights errors based on their location within the measurement domain (k-space).
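For illustration, the quoted k-space-coordinate-based weighting can be sketched as follows. This is an editor's sketch under stated assumptions: Lazarus describes only that weights grow with distance from the k-space center, so the linear radial weight profile used here is illustrative, not the reference's exact formula.

```python
import numpy as np

def weighted_l2_kspace_loss(output, target):
    """L2 loss on the k-space error, weighted so that errors farther from
    the k-space center (higher spatial frequency) count more heavily.
    The linear radial weighting profile is illustrative."""
    # Transform the image-domain error into k-space, centered via fftshift.
    err = np.fft.fftshift(np.fft.fft2(output - target))
    ny, nx = err.shape
    y, x = np.ogrid[:ny, :nx]
    # Distance of each k-space coordinate from the center of k-space.
    radius = np.sqrt((y - ny // 2) ** 2 + (x - nx // 2) ** 2)
    weights = 1.0 + radius / radius.max()  # farther from center -> larger weight
    return float(np.sum(weights * np.abs(err) ** 2))
```

The loss is zero for identical images and grows with any discrepancy, with high-spatial-frequency discrepancies penalized more than low-frequency ones.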
This means errors in the outer, high-frequency parts of k-space are given more importance than errors in the inner, lower-frequency parts. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the CNN of modified Zhang to include the teachings of Lazarus for the advantage of providing an improved apparatus that causes the model to keep the high frequencies, which are noisier than the low frequencies, for capturing and preserving fine anatomical details and sharp edges in the images, as suggested by Lazarus, ¶0100, ¶0123-0132. Claim 12: Zhang discloses all the elements above in claim 10. Zhang fails to disclose: further comprising: a step of training the CNN using, as training data, a correct answer image in which ringing has not occurred and an image obtained by performing an inverse Fourier transform on k-space data in which a size of a measurement matrix region of a training image is smaller than a matrix size of the correct answer image, -Zhang teaches “17,532 T2W brain images were obtained as the artifact-free images for training”-pg. 2135-Experiments | 2.2.1 Datasets right col ¶1. These images serve as the “correct answer images” because they are without noticeable ringing artifacts, pg. 2134-2.1.2 | Training right col ¶1. Zhang also refers to the “correct answer image” as the “reference image”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1. -From these artifact-free images (i.e., the correct answer images), “corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain”-pg. 2134-2.1.2 | Training right col ¶1. Full quote: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.” This process directly creates input images for the CNN.
More specifically, the “artifact image was synthesized by discarding a certain percentage of high-frequency data after performing Fourier transform on the reference image.”, pg. 2136- 2.2.3 | Performance evaluation, left col ¶1. Discarding high-frequency data in k-space is precisely what leads to Gibbs-ringing artifact due to “insufficient sampling of high-frequency data in the k-space domain”-pg. 2133-Introduction right col. The method involves truncating high-frequency components in the k-space domain. This results in a measurement matrix region that is effectively smaller than the full matrix size of the correct answer image. Zhang further emphasizes “truncation” of data in the k-space domain: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain. The reference artifact map is then obtained as the difference between images from data with and without truncation.”-pg. 2134 [2.1.2 Training]. At this step, generating an image from modified k-space data would require an inverse Fourier transform to return to the image domain. The CNN includes a trained model that has learned to minimize the loss between the network output and the difference between the images from the data with and without truncation. Zhang fails to disclose: wherein, in the step of training the CNN, a weight of an error in the measurement matrix region is increased in a loss evaluation function during training. However, Lazarus, in the context of removing artifacts in MR data using a CNN, discloses, wherein, in the step of training the CNN, a weight of an error in the measurement matrix region is increased in a loss evaluation function during training.
-L2 weighted loss between output and target data is one of the loss functions that may be employed for training the CNN, ¶0041, ‘generating an MR image from input MR data at least in part by using a neural network model (e.g., a model comprising one or more convolutional layers) to suppress at least one artefact in the input MR data’; ¶0125-0132, ‘In some embodiments, one or a linear combination of multiple loss functions may be employed to train the neural network models described herein: [0126] L2 loss between output and target data [0127] L1 loss between output and target data [0128] L2 weighted loss between output and target data. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (the farther from the center of k-space), the higher the weight. Using such weights causes the resulting model to keep the high spatial frequencies which are noisier than the low frequencies [0129] L1 weighted regularization on the output. A sparse prior may be enforce on the output of the neural network by using the l.sub.1 norm, optionally after weighting. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (far from the center of k-space), the smaller the weight. This encourages sparsity. [0130] Generative Adversarial Nets loss [0131] Structured similarity index loss [0132] Any of the other loss functions described herein including in connection with FIGS. 1A-1E and 2A-2B.’ -The k-space coordinates used for applying these weights amount to the measurement matrix region. K-space is the spatial frequency domain in which the MR signals are acquired, which represents the raw measurements. Therefore, increasing the weight of errors in regions of higher spatial frequency (i.e., farther from the center of k-space) during training differentially weights errors based on their location within the measurement domain (k-space).
This means errors in the outer, high-frequency parts of k-space are given more importance than errors in the inner, lower-frequency parts. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the CNN of modified Zhang to include the teachings of Lazarus for the advantage of providing an improved apparatus that causes the model to keep the high frequencies, which are noisier than the low frequencies, for capturing and preserving fine anatomical details and sharp edges in the images, as suggested by Lazarus, ¶0100, ¶0123-0132. Claims 9 & 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang Q et al (MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks. Magn Reson Med. 2019) in view of Lazarus et al (US 2020/0058106 A1). Claim 9: Zhang discloses, A magnetic resonance imaging apparatus comprising: (Title, ‘MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks’) an imaging unit that collects measurement data consisting of nuclear magnetic resonance signals, the measurement data being collected using a measurement matrix; and -Zhang focuses on reducing Gibbs-ringing artifact in MRI images, pg. 2133-section (purpose, theory and methods, results, conclusion). This artifact is fundamentally caused by “insufficient sampling of high-frequency data in the k-space domain” during MRI data acquisition, pg. 2133-Introduction right col. While the CNN’s input is an image, pg. 2134-Methods right col, ‘artifact‐free image from image X. The GRA‐CNN algorithm uses a CNN to learn an end‐to‐end mapping F, which inputs an original image with ringing artifact and outputs an artifact map. For simplicity, we call X the original image and output F(X) the artifact map’, the method’s final step involves enforcing data fidelity by using “measured data” in the k-space domain, pg.
2134-Introduction left col ¶3, ‘The artifact‐free image is then obtained by subtracting the CNN‐estimated artifact from the original image and replacing the low‐frequency k‐space part with the measured data.’, pg. 2135-Method left col ¶2, ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’; see also pg. 2133-section (purpose, theory and methods, results, conclusion). The training data also involves creating images with ringing artifact by “truncating high-frequency components in the k-space domain”, pg. 2134-1st para of 2.1.2 Training right col, which implies manipulation of the acquired k-space data. -Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix (i.e., the measured data) is smaller than the desired image size due to truncation and insufficient sampling: “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract]; “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg. 2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training].
one or more processors configured to perform a function of reconstructing the measurement data at a reconstruction matrix size of a reconstruction matrix and a function of correcting ringing which occurs in a reconstructed image in a case in which a size of measurement matrix is less than the reconstruction matrix size of the reconstruction matrix to generate a ringing-corrected image, -pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor. -Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes that the measurement matrix (i.e., the measured data) is smaller than the desired image size due to truncation and insufficient sampling: “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract]; “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg. 2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training]. -Zhang explicitly mentions the use of “zero-padded interpolation”, which increases the image matrix to create enlarged images.
Hence, this process does indeed correspond to reconstructing measurement data at a size larger than the collected data: “Finally, the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. This process effectively reconstructs the measurement data at a reconstruction size (i.e., the enlarged size) that is indeed larger than the measurement matrix size (i.e., the measured data). -Although Zhang teaches “an original size of 320 × 320 and then resized to 256 × 256 images by cropping and zero-padding.”, pg. 2135 [2.1.1 Dataset], this is distinct from the actual ringing correction process or the relationship between the measurement and reconstruction matrices. The 320 × 320 to 256 × 256 step is for training preparation [emphasis added]. Zhang describes extracting images with an original size of 320 × 320 and resizing them to 256 × 256 by “cropping and zero-padding”. However, this step is strictly to generate a standardized set of “artifact-free images for training” the neural network. These 256 × 256 images effectively serve as ground-truth or reference images, not the measured data with artifacts. -Zhang teaches the machine learning method: “the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. The estimating of the artifact uses a convolutional neural network: “This study aims to develop a CNN-based method for Gibbs-ringing artifact reduction in MR images. We use a CNN that learns an end-to-end mapping to extract artifact information from an original image. The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.
For brevity, we name the proposed method as Gibbs-ringing artifact reduction using CNN (GRA-CNN). The performance of GRA-CNN was evaluated on in vivo brain MRI data. A preliminary report of this study was published in a conference proceeding”-pg. 2134 [Introduction]; “The ringing artifact was extracted from the original image using a deep convolutional neural network and then subtracted from the original image to obtain the artifact-free image.”, pg. 2133 [Abstract].

wherein the one or more processors are configured to perform the function of reconstructing the measurement data using a Convolutional Neural Network (CNN) trained to generate an image in which ringing of an input image is corrected and generate a final image based on the ringing-corrected image in which ringing in the reconstructed image is corrected by the CNN and the measurement data, and

-The GRA-CNN algorithm utilizes a deep convolutional neural network (CNN). This network is a “4-layered convolutional network” trained to learn an end-to-end mapping, pg. 2134-Introduction left col ¶3, pg. 2134-2.1.1 | Formulation ¶1. The CNN is trained to extract artifact information (an artifact map) from an original image with ringing artifact, pg. 2133-section (purpose, theory and methods, results, conclusion), pg. 2134-Methods right col. This artifact map is then used to correct the ringing in the input image: “The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.”-pg. 2134-Introduction left col ¶3; see also ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’, pg. 2135-2.1.2 | Training left col ¶1-2.
the CNN has been trained by using, as training data, a correct answer image in which ringing has not occurred and an image obtained by performing an inverse Fourier transform on k-space data in which a size of a measurement matrix region is smaller than a matrix size of the correct answer image, and

-Zhang teaches “17,532 T2W brain images were obtained as the artifact-free images for training”-pg. 2135-Experiments | 2.2.1 Datasets right col ¶1. These images serve as the “correct answer images” because they are without noticeable ringing artifacts, pg. 2134-2.1.2 | Training right col ¶1. Zhang also refers to the “correct answer image” as a “reference image”, pg. 2136-2.2.3 | Performance evaluation, left col ¶1.

-From these artifact-free images (i.e., the correct answer images), “corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain”-pg. 2134-2.1.2 | Training right col ¶1. Full quote: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.” This process directly creates the input images for the CNN. More specifically, the “artifact image was synthesized by discarding a certain percentage of high-frequency data after performing Fourier transform on the reference image.”, pg. 2136-2.2.3 | Performance evaluation, left col ¶1. Discarding high-frequency data in k-space is precisely what leads to Gibbs-ringing artifact due to “insufficient sampling of high-frequency data in the k-space domain”-pg. 2133-Introduction right col. This effectively makes the extent of the k-space data used for reconstruction (the “measurement matrix region”) smaller in terms of frequency content. At this step, generating an image from the modified k-space data would require an inverse Fourier transform to return to the image domain.
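The training-pair synthesis described above (truncate high-frequency k-space, then inverse Fourier transform back to the image domain) can be sketched in NumPy. This is an illustrative sketch only, not code from Zhang or the application; the `synthesize_ringing_pair` name, the phantom, and the `keep_frac` parameter are assumptions for demonstration:

```python
import numpy as np

def synthesize_ringing_pair(reference_img, keep_frac=0.5):
    """Create a (correct-answer, artifact) training pair by truncating
    high-frequency k-space components, as Zhang describes at pg. 2134."""
    n = reference_img.shape[0]
    # Fourier transform the artifact-free reference image into k-space
    kspace = np.fft.fftshift(np.fft.fft2(reference_img))
    # Keep only the central (low-frequency) measurement matrix region
    keep = int(n * keep_frac)
    lo, hi = (n - keep) // 2, (n + keep) // 2
    mask = np.zeros_like(kspace)
    mask[lo:hi, lo:hi] = 1.0
    # Inverse Fourier transform returns to the image domain; discarding
    # high frequencies introduces Gibbs ringing near sharp edges
    artifact_img = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
    return reference_img, artifact_img

# A sharp-edged phantom (hypothetical test input) shows the oscillations
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
ref, art = synthesize_ringing_pair(phantom)
```

The overshoot in `art` above the phantom's maximum intensity is the characteristic Gibbs oscillation; the pair `(ref, art)` plays the role of the correct answer image and its ringing-corrupted counterpart.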
Zhang fails to disclose: increasing a weight of an error in the measurement matrix region to be larger than a weight of an error in a region other than the measurement matrix region in a loss evaluation function during training. However, Lazarus, in the context of removing artifacts in MR data using a CNN, discloses increasing a weight of an error in the measurement matrix region to be larger than a weight of an error in a region other than the measurement matrix region in a loss evaluation function during training.

-L2 weighted loss between output and target data is one of the loss functions that may be employed for training the CNN, ¶0041: ‘generating an MR image from input MR data at least in part by using a neural network model (e.g., a model comprising one or more convolutional layers) to suppress at least one artefact in the input MR data’; ¶0125-0132: ‘In some embodiments, one or a linear combination of multiple loss functions may be employed to train the neural network models described herein: [0126] L2 loss between output and target data [0127] L1 loss between output and target data [0128] L2 weighted loss between output and target data. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (the farther from the center of k-space), the higher the weight. Using such weights causes the resulting model to keep the high spatial frequencies which are noisier than the low frequencies [0129] L1 weighted regularization on the output. A sparse prior may be enforce on the output of the neural network by using the l.sub.1 norm, optionally after weighting. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (far from the center of k-space), the smaller the weight. This encourages sparsity. [0130] Generative Adversarial Nets loss [0131] Structured similarity index loss [0132] Any of the other loss functions described herein including in connection with FIGS.
1A-1E and 2A-2B.’

-The k-space coordinates used for applying these weights amount to the measurement matrix region. K-space is the spatial frequency domain where the MR signals are acquired, which represents the raw measurements. Therefore, increasing the weight of errors in regions of higher spatial frequency (i.e., farther from the center of k-space) during training differentially weights errors based on their location within the measurement domain (k-space). This means errors in the outer, high-frequency parts of k-space are given more importance than errors in the inner, low-frequency parts. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the CNN of Zhang to include the teachings of Lazarus for the advantage of providing an improved apparatus, causing the model to keep the high frequencies, which are noisier than the low frequencies, and capturing and preserving fine anatomical details and sharp edges in the images, as suggested by Lazarus, ¶0100, ¶0123-0132.

Claim 13: Zhang discloses: An image processing method of correcting ringing in measurement data consisting of nuclear magnetic resonance signals collected by a magnetic resonance imaging apparatus, the image processing method comprising: (Title, ‘MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks’)

-Zhang focuses on reducing Gibbs-ringing artifact in MRI images, pg. 2133-section (purpose, theory and methods, results, conclusion). This artifact is fundamentally caused by “insufficient sampling of high-frequency data in the k-space domain” during MRI data acquisition, pg. 2133-Introduction right col. While the CNN’s input is an image, pg. 2134-Methods right col, ‘artifact-free image from image X. The GRA-CNN algorithm uses a CNN to learn an end-to-end mapping F, which inputs an original image with ringing artifact and outputs an artifact map.
For simplicity, we call X the original image and output F(X) the artifact map’, the method’s final step involves enforcing data fidelity by using “measured data” in the k-space domain, pg. 2134-Introduction left col ¶3: ‘The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.’; pg. 2135-Method left col ¶2: ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’; see also pg. 2133-section (purpose, theory and methods, results, conclusion). The training data also involves creating images with ringing artifact by “truncating high-frequency components in the k-space domain”, pg. 2134-1st para of 2.1.2 Training right col, which implies manipulation of the acquired k-space data.

a step of training a CNN to generate an output image in which the ringing of an input image is corrected; and

-The GRA-CNN algorithm utilizes a deep convolutional neural network (CNN). This network is a “4-layered convolutional network” trained to learn an end-to-end mapping, pg. 2134-Introduction left col ¶3, pg. 2134-2.1.1 | Formulation ¶1. The CNN is trained to extract artifact information (an artifact map) from an original image with ringing artifact, pg. 2133-section (purpose, theory and methods, results, conclusion), pg. 2134-Methods right col. This artifact map is then used to correct the ringing in the input image: “The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data.”-pg.
2134-Introduction left col ¶3; see also ‘With the trained CNN model, the artifact-free image can be obtained by subtracting the CNN-estimated artifact from the original image. To enforce data fidelity further to the measured data, we replace the low-frequency part of the ringing-removed image with measured data to obtain the final image.’, pg. 2135-2.1.2 | Training left col ¶1-2.

a step of applying the CNN to an input of a measurement image of the measurement data collected using a measurement matrix that has been reconstructed as a reconstructed image at a reconstruction matrix size of a reconstruction matrix in a case in which a size of the measurement matrix is less than the reconstruction matrix size of the reconstruction matrix; and

-pg. 2136 2.2.2 | Implementation details, “All experiments were implemented using MATLAB (MATLAB 2014b, Mathworks) on a 64-bit Windows 8 workstation (Intel Xeon CPU and 128 GB RAM).” The Intel Xeon CPU is a type of processor.

-Zhang discloses reconstructing at a reconstruction matrix size larger than the measurement matrix (conversely, the measurement matrix is less than the reconstruction matrix size). Zhang describes the measurement matrix (i.e., the measured data) as smaller than the desired image size due to truncation and insufficient sampling: “Gibbs-ringing artifact in MR images is caused by insufficient sampling of the high frequency data”, pg. 2133 [Abstract]; “Gibbs-ringing artifact, also known as truncation or spectral leakage artifact, refers to a series of intensity oscillations near sharp edges in MR images.1, 2 This artifact is caused by insufficient sampling of high-frequency data in the k-space domain and usually appears when the acquisition window limits data acquisition.”-pg.
2133 [Introduction]. Zhang further emphasizes “truncation” of data in the k-space domain: “Given a set of MR images without noticeable ringing artifact, their corresponding images with ringing artifact are obtained by truncating high-frequency components in the k-space domain.”-pg. 2134 [2.1.2 Training].

-Zhang explicitly mentions the use of “zero-padding interpolation”, which increases the image matrix to create enlarged images. Hence, this process corresponds to reconstructing the measurement data at a size larger than the collected data: “Finally, the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion]. This process effectively reconstructs the measurement data at a reconstruction size (i.e., the enlarged size) that is larger than the measurement matrix size (i.e., the measured data).

-Although Zhang teaches “an original size of 320 × 320 and then resized to 256 × 256 images by cropping and zero-padding.”, pg. 2135 [2.1.1 Dataset], this is distinct from the actual ringing correction process and from the relationship between measurement and reconstruction matrices. The 320 x 320 to 256 x 256 step is for training preparation [emphasis added]. Zhang describes extracting images with an original size of 320 x 320 and resizing them to 256 x 256 by “cropping and zero-padding”. However, this step is strictly to generate a standardized set of “artifact-free images for training” the neural network. These 256 x 256 images effectively serve as ground-truth or reference images, not the measured data with artifacts.

-Zhang teaches the machine learning method: “the GRA-CNN method can also be integrated with zero-padded interpolation, which is usually used for increasing image matrix in practice, to mitigate ringing artifact in enlarged images.”, pg. 2143 [4. Discussion].
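The zero-padded interpolation discussed above (a measurement matrix embedded in a larger reconstruction matrix before the inverse transform) can be sketched as follows. This is an illustrative NumPy sketch under assumed sizes and centering conventions, not code from Zhang; the function name is hypothetical:

```python
import numpy as np

def zero_pad_reconstruct(kspace_meas, recon_size):
    """Reconstruct measured k-space at a larger reconstruction matrix size
    by zero-padding, a sketch of the interpolation Zhang mentions at
    pg. 2143. Assumes kspace_meas is square and fftshift-centered."""
    n = kspace_meas.shape[0]  # measurement matrix size
    padded = np.zeros((recon_size, recon_size), dtype=complex)
    lo = (recon_size - n) // 2
    # Embed the measured data at the center of the larger k-space grid;
    # the outer (unmeasured) high frequencies stay zero
    padded[lo:lo + n, lo:lo + n] = kspace_meas
    # Inverse FFT yields an enlarged image; scale compensates for the
    # larger transform length so intensities match the original image
    img = np.abs(np.fft.ifft2(np.fft.ifftshift(padded)))
    return img * (recon_size / n) ** 2

# Hypothetical example: a 32 x 32 measurement matrix reconstructed
# at a 64 x 64 reconstruction matrix size
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
meas = np.fft.fftshift(np.fft.fft2(img))
enlarged = zero_pad_reconstruct(meas, 64)
```

Because no new high-frequency content is measured, ringing persists in the enlarged image, which is why the examiner maps this step to reconstructing at a size larger than the measurement matrix.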
The estimating of the artifact uses a convolutional neural network: “This study aims to develop a CNN-based method for Gibbs-ringing artifact reduction in MR images. We use a CNN that learns an end-to-end mapping to extract artifact information from an original image. The artifact-free image is then obtained by subtracting the CNN-estimated artifact from the original image and replacing the low-frequency k-space part with the measured data. For brevity, we name the proposed method as Gibbs-ringing artifact reduction using CNN (GRA-CNN). The performance of GRA-CNN was evaluated on in vivo brain MRI data. A preliminary report of this study was published in a conference proceeding”-pg. 2134 [Introduction]; “The ringing artifact was extracted from the original image using a deep convolutional neural network and then subtracted from the original image to obtain the artifact-free image.”, pg. 2133 [Abstract].

a step of generating a ringing-corrected image based on an output image of the CNN in which ringing in the reconstructed image is corrected and the measurement data, wherein, in the step of training the CNN, the CNN is trained by using, as training data, a correct answer image in which ringing has not occurred and an image obtained by performing an inverse Fourier transform on k-space data in which a size of a measurement matrix region is smaller than a matrix size of the correct answer image, and

-After the CNN-estimated artifact is subtracted from the original image to produce an intermediate “ringing-removed image”, the method then “replace[s] the low-frequency k-space part with the measured data”, pg. 2134-Introduction left col ¶3, or “replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col ¶1-2.
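The subtraction and low-frequency replacement steps just cited can be sketched in NumPy. This is an illustrative sketch only; `artifact_map` stands in for the CNN output, the function name and sizes are assumptions, and k-space is assumed fftshift-centered:

```python
import numpy as np

def data_fidelity_final_image(original_img, artifact_map, kspace_meas, meas_size):
    """Subtract the estimated artifact, then replace the low-frequency
    k-space part with measured data (cf. Zhang, pg. 2134-2135)."""
    # Step 1: subtract the (CNN-estimated) artifact map in the image domain
    ringing_removed = original_img - artifact_map
    n = ringing_removed.shape[0]
    # Step 2: Fourier transform the ringing-removed image into k-space
    k = np.fft.fftshift(np.fft.fft2(ringing_removed))
    # Step 3: enforce data fidelity by restoring the central (measured)
    # measurement matrix region from the scanner data
    lo, hi = (n - meas_size) // 2, (n + meas_size) // 2
    k[lo:hi, lo:hi] = kspace_meas[lo:hi, lo:hi]
    # Step 4: inverse transform the composite data into the final image
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

# Sanity check: with a zero artifact map and matching measured data,
# the pipeline returns the original image unchanged
img = np.random.rand(16, 16)
k_meas = np.fft.fftshift(np.fft.fft2(img))
final = data_fidelity_final_image(img, np.zeros_like(img), k_meas, meas_size=8)
```

Note that step 2 uses a forward Fourier transform and step 4 an inverse transform, which is the implicit transform pair the examiner relies on in the paragraph that follows.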
This replacement implicitly requires that a Fourier transform be performed, in the context of the method, on the intermediate artifact-reduced image to obtain its k-space representation before the low-frequency components are swapped with measured k-space data.

-“replace[s] the low-frequency k-space part of the ringing-removed image with measured data to obtain the final image”, pg. 2135-2.1.2 | Training left col ¶1-2. This process would require an inverse transform to convert the modified k-space data (composite data) into a final image.

Zhang fails to disclose: increasing a weight of an error in the measurement matrix region in a loss evaluation function during training. However, Lazarus, in the context of removing artifacts in MR data using a CNN, discloses increasing a weight of an error in the measurement matrix region in a loss evaluation function during training.

-L2 weighted loss between output and target data is one of the loss functions that may be employed for training the CNN, ¶0041: ‘generating an MR image from input MR data at least in part by using a neural network model (e.g., a model comprising one or more convolutional layers) to suppress at least one artefact in the input MR data’; ¶0125-0132: ‘In some embodiments, one or a linear combination of multiple loss functions may be employed to train the neural network models described herein: [0126] L2 loss between output and target data [0127] L1 loss between output and target data [0128] L2 weighted loss between output and target data. The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (the farther from the center of k-space), the higher the weight. Using such weights causes the resulting model to keep the high spatial frequencies which are noisier than the low frequencies [0129] L1 weighted regularization on the output. A sparse prior may be enforce on the output of the neural network by using the l.sub.1 norm, optionally after weighting.
The weights may be calculated based on the k-space coordinates. The higher the spatial frequency (far from the center of k-space), the smaller the weight. This encourages sparsity. [0130] Generative Adversarial Nets loss [0131] Structured similarity index loss [0132] Any of the other loss functions described herein including in connection with FIGS. 1A-1E and 2A-2B.’

-The k-space coordinates used for applying these weights amount to the measurement matrix region. K-space is the spatial frequency domain where the MR signals are acquired, which represents the raw measurements. Therefore, increasing the weight of errors in regions of higher spatial frequency (i.e., farther from the center of k-space) during training differentially weights errors based on their location within the measurement domain (k-space). This means errors in the outer, high-frequency parts of k-space are given more importance than errors in the inner, low-frequency parts. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the CNN of Zhang to include the teachings of Lazarus for the advantage of providing an improved apparatus, causing the model to keep the high frequencies, which are noisier than the low frequencies, and capturing and preserving fine anatomical details and sharp edges in the images, as suggested by Lazarus, ¶0100, ¶0123-0132.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kamada et al. (US 2014/0197835 A1), in the context of reducing magnetic resonance artifacts caused by non-uniform data density in k-space, discloses generating a ringing-corrected image by performing an inverse Fourier transform on the composite data.
(¶0014, ‘a signal rearrangement step of rearranging the unit k-space data after correction in Cartesian coordinate system k-space; and a final imaging step of reconstructing an image by performing an inverse Fourier transform of data after rearrangement by the rearrangement unit’)

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Robinson whose telephone number is (571)272-9019. The examiner can normally be reached M-F 9:00AM-5:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pascal Bui-Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.A.R./
Examiner, Art Unit 3798

/PASCAL M BUI PHO/
Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Oct 04, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §102, §103
Nov 17, 2025
Interview Requested
Dec 02, 2025
Examiner Interview Summary
Dec 02, 2025
Applicant Interview (Telephonic)
Jan 13, 2026
Response Filed
Jan 24, 2026
Final Rejection — §102, §103
Mar 27, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594024
METHOD FOR PREDICTING SURVIVAL OF NON SMALL CELL LUNG CANCER PATIENTS WITH BRAIN METASTASIS
2y 5m to grant Granted Apr 07, 2026
Patent 12569219
METHODS AND SYSTEMS FOR VALVE REGURGITATION ASSESSMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12569142
Method And System For Context-Aware Photoacoustic Imaging
2y 5m to grant Granted Mar 10, 2026
Patent 12569154
PATHLENGTH RESOLVED CW-LIGHT SOURCE BASED DIFFUSE CORRELATION SPECTROSCOPY
2y 5m to grant Granted Mar 10, 2026
Patent 12564381
SYSTEMS AND METHODS FOR CONTRAST ENHANCED IMAGING
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+54.9%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
