Prosecution Insights
Last updated: April 18, 2026
Application No. 18/315,298

SYSTEM AND METHOD FOR IMPROVING ANNOTATION ACCURACY IN MRI DATA USING MR FINGERPRINTING AND DEEP LEARNING

Status: Non-Final OA, §103
Filed: May 10, 2023
Examiner: BURLESON, MICHAEL L
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Case Western Reserve University
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 68%

Examiner Intelligence

Career Allow Rate: 75% (365 granted / 489 resolved; +12.6% vs TC avg, above average)
Interview Lift: -6.1% across resolved cases with interview (minimal)
Typical Timeline: 2y 10m avg prosecution; 36 applications currently pending
Career History: 525 total applications across all art units
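The headline numbers above are simple arithmetic on the examiner's career counts. A minimal sketch, assuming the tool applies the interview lift as a plain additive percentage-point adjustment (the actual model is not disclosed):

```python
# Reproduce the dashboard's headline figures from the raw counts shown
# above. The additive interview adjustment is an assumption; the tool's
# actual model is not disclosed.
granted, resolved = 365, 489      # career counts from the page
interview_lift = -6.1             # percentage-point lift shown on the page

allow_rate = 100 * granted / resolved          # ~74.6, displayed as 75%
with_interview = allow_rate + interview_lift   # ~68.5, displayed as 68%

print(round(allow_rate), round(with_interview, 1))  # → 75 68.5
```

Both displayed figures are consistent with the raw counts: 365/489 rounds to 75%, and subtracting the 6.1-point lift lands at roughly 68.5%, shown as 68%.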

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 489 resolved cases
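One way to sanity-check the statute table: each row's rate minus its delta should recover the Tech Center baseline. A quick sketch (figures copied from the table above) shows all four statutes imply the same ~40% TC average estimate:

```python
# Back out the Tech Center average implied by each statute's allowance
# rate and its "vs TC avg" delta, using the figures from the table above.
rates  = {"101": 12.1, "103": 55.2, "102": 21.8, "112": 8.3}
deltas = {"101": -27.9, "103": 15.2, "102": -18.2, "112": -31.7}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute row implies a 40.0% baseline
```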

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of Species I, claims 1-8 and 16-19 in the reply filed on 12/24/25 is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/03/23 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3, 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gulani et al US 20180231626 in view of Kustra et al US 20210259775 in view of Brauer et al US 20220318986.

Regarding claim 1.
Gulani et al teaches a method for creating an automated system for determining disease states and conditions using magnetic resonance fingerprinting (MRF) data and magnetic resonance imaging (MRI) data acquired from a patient, the method comprising: (a) accessing group MRF data acquired from a group of patients (the 3D MRF method was applied to ten normal volunteers (mean age, 23.5±5.3 years; range, 18-35 years) and six patients with invasive ductal carcinoma (mean age, 52.3±10.7 years; range, 39-65 years). (paragraph 0095); (b) accessing MRI data acquired from the group of patients (For each patient, a clinical dynamic contrast enhanced MRI scan was also performed after the 3D MRF acquisition (paragraph 0095); Gulani et al fails to teach (c) accessing annotated images, wherein the annotated images comprise bulk-pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class; (d) training a machine learning system using the annotated images and the MRF data acquired from the group of patients using a patch-based approach to perform a pixel-based analysis of pixels outside the bulk-pixel labels to generate an automated system for determining disease states and conditions using a pixel-based machine learning system. Kustra et al teaches (c) accessing annotated images, wherein the annotated images comprise bulk-pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class (the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images (paragraph 0063). 
the annotated image may contain only two kinds of labels, one (a label represented by the dark gray square in the annotated image) for the DOI and another (a label represented by the white areas of the annotated image) for portions of the specimen image that do not include a DOI (paragraph 0064) Note: the specimen image is read as annotated images and the two DOI labels are read as tissue class (d) training a machine learning system using the annotated images and the MRF data acquired from the group of patients to generate an automated system for determining disease states and conditions using a pixel-based machine learning system (The input towards planning starts with imaging data like CT data, MR imaging (MRI) data or ultrasound imaging data (annotated images), wherein the anatomical positions of target structures like tumors and of organs at risk are identified through a delineation procedure. In contrast to this, the therapy planning device described above uses the quantitative nature of MRF such that the specific tissue properties can be identified before the intervention and used as input for the planning of the ablation settings, Also the result of the ablation therapy, i.e. the effect of the ablation therapy on the tissue, can be quantitatively measured afterwards based on MRF. Thus, the above described therapy planning device does not only take the geometry into account, but the specific tissue parameters of the patient are used to compute a personalized treatment plan (determining disease states and condition). The measured therapy outcomes are further fed into the learning system, i.e. 
into a further training of the machine learning module (paragraph 0066) Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al to include: (c) accessing annotated images, wherein the annotated images comprise bulk-pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class; (d) training a machine learning system using the annotated images and the MRF data acquired from the group of patients using a patch-based approach to perform a pixel-based analysis of pixels outside the bulk-pixel labels to generate an automated system for determining disease states and conditions using a pixel-based machine learning system. The reason of doing so would be to accurately label and identify different types and states of tissue disease in an image. Gulani in view of Kustra fails to teach using a patch-based approach to perform a pixel-based analysis of pixels outside the bulk-pixel labels Brauer et al teaches using a patch-based approach to perform a pixel-based analysis of pixels outside the bulk-pixel labels (computer subsystem may use candidate and annotated images to train the semantic image segmentation model to detect the defect in the patch images, which may be performed as described further herein. The computer subsystem may then use the trained semantic image segmentation model to perform defect detection on patch images, as shown in step 214. In other words, after the training done using the candidate and annotated images, the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images. (paragraph 0063) Note: the computer subsystem uses patch images to label specimen images as defective. 
The use of patches of images is read as patch based approach in which pixel labels, which are read as bulk pixels, are analyzed and used to label specimen images, which are read as pixels outside of bulk pixels.

Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al to include: using a patch-based approach to perform a pixel-based analysis of pixels outside the bulk-pixel labels. The reason of doing so would be to accurately label different types and states of tissue disease in an image.

Regarding claim 3. Gulani et al in view of Kustra et al further in view of Brauer et al teach wherein accessing the MRF data comprises: accessing MRF time course data (Gulani et al: the SVD algorithm is applied to the MRF dictionary in the time domain (paragraph 0083); compressing the MRF time course data using singular value decomposition to represent the MRF time course data using a plurality of singular values (Gulani et al: the SVD compression can be applied to the raw k-space data before gridding and taking the inverse Fourier transform (paragraph 0083); and defining the MRF data based on the singular values (Gulani et al: In the instance an SVD algorithm is used to process the dictionary, the singular values, instead of the undersampled volumes, are reconstructed and matched to the compressed MRF dictionary to retrieve the underlying tissue properties (paragraph 0084).

Regarding claim 16, Gulani et al teaches An automated system for determining disease states and conditions using magnetic resonance fingerprinting (MRF) data and magnetic resonance imaging (MRI) data acquired from a patient (the 3D MRF method was applied to ten normal volunteers (mean age, 23.5±5.3 years; range, 18-35 years) and six patients with invasive ductal carcinoma (mean age, 52.3±10.7 years; range, 39-65 years).
For each patient, a clinical dynamic contrast enhanced MRI scan was also performed after the 3D MRF acquisition (paragraph 0095), the system comprising a controller configured to: receive reconstructed MRF data and MRI data acquired from the patient (the 3D MRF method was applied to ten normal volunteers (mean age, 23.5±5.3 years; range, 18-35 years) and six patients with invasive ductal carcinoma (mean age, 52.3±10.7 years; range, 39-65 years). For each patient, a clinical dynamic contrast enhanced MRI scan was also performed after the 3D MRF acquisition (paragraph 0095); Gulani et al fails to teach deliver reconstructed MRF data and the MRI data to a trained machine learning system, wherein the trained machine learning system was trained using MRF data acquired from a group of patients and annotated images acquired from the group of patients that comprise bulk- pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class; and Kustra et al teaches deliver reconstructed MRF data and the MRI data to a trained machine learning system, wherein the trained machine learning system was trained using MRF data acquired from a group of patients and annotated images acquired from the group of patients that comprise bulk- pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class (the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images (paragraph 0063). 
the annotated image may contain only two kinds of labels, one (a label represented by the dark gray square in the annotated image) for the DOI and another (a label represented by the white areas of the annotated image) for portions of the specimen image that do not include a DOI (paragraph 0064) Note: the specimen image is read as annotated images and the two DOI labels are read as tissue class; generate at least one machine-annotated image of the patient wherein each pixel in the at least one annotated image has an assigned tissue class (the annotated image may contain only two kinds of labels, one (a label represented by the dark gray square in the annotated image) for the DOI and another (a label represented by the white areas of the annotated image) for portions of the specimen image that do not include a DOI (paragraph 0064) Note: the annotated image is generated with the two DOI labels are read as tissue class. Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al to include: deliver reconstructed MRF data and the MRI data to a trained machine learning system, wherein the trained machine learning system was trained using MRF data acquired from a group of patients and annotated images acquired from the group of patients that comprise bulk- pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class The reason of doing so would be to accurately label different types and states of tissue disease in an image. 
Gulani in view of Kustra fails to teach wherein the trained machine learning system performs a pixel-by-pixel analysis of the reconstructed MRI data to assign each pixel to a tissue class including at least a normal tissue class and a pathological tissue class; Brauer et al teaches wherein the trained machine learning system performs a pixel-by-pixel analysis of the reconstructed MRI data to assign each pixel to a tissue class including at least a normal tissue class and a pathological tissue class (computer subsystem may use candidate and annotated images to train the semantic image segmentation model to detect the defect in the patch images, which may be performed as described further herein. The computer subsystem may then use the trained semantic image segmentation model to perform defect detection on patch images, as shown in step 214. In other words, after the training done using the candidate and annotated images, the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images. (paragraph 0063) Note: the computer subsystem uses patch images to label specimen images as defective. 
The use of patches of images is read as patch based approach in which pixel labels, which are read as bulk pixels, are analyzed and used to label specimen images, which are read as pixels outside of bulk pixels Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al to include: deliver reconstructed MRF data and the MRI data to a trained machine learning system, wherein the trained machine learning system was trained using MRF data acquired from a group of patients and annotated images acquired from the group of patients that comprise bulk- pixel labels assigning bulk pixels in the annotated images to at least one tissue class, including a normal tissue class or a pathological tissue class The reason of doing so would be to accurately label different types and states of tissue disease in an image. Regarding claim 17, Gulani et al in view of Kustra et al further in view of Brauer et al teaches delivering, to the trained machine learning system, an annotated image of the patient assigning bulk pixels to at least one of the normal tissue class or the pathological tissue class and, wherein the machine annotated image includes pixels that are reassigned by the trained machine learning system relative to the annotated image of the patient (Brauer et al: computer subsystem may use candidate and annotated images to train the semantic image segmentation model to detect the defect in the patch images, which may be performed as described further herein. The computer subsystem may then use the trained semantic image segmentation model to perform defect detection on patch images. In other words, after the training done using the candidate and annotated images, the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images. 
(paragraph 0063) Note: the computer subsystem uses patch images to label specimen images as defective. The use of patches of images is read as patch based approach in which pixel labels, which are read as bulk pixels, are analyzed and used to label specimen images, which are read as pixels outside of bulk pixels.

Regarding claim 18, Gulani et al in view of Kustra et al further in view of Brauer et al teach wherein the pixel-by-pixel analysis is a patch-based analysis in which assigning each pixel to a tissue class is based on a patch of MRF data surrounding each pixel (Brauer et al: the computer subsystem may input other specimen images into the semantic segmentation model, which may label pixels in the specimen images as defective or not thereby performing defect detection on the patch images. (paragraph 0063) Note: the computer subsystem uses patch images to label specimen images as defective. The use of patches of images is read as patch based approach in which pixel labels, which are read as bulk pixels, are analyzed and used to label specimen images, which are read as pixels outside of bulk pixels.

Regarding claim 19, Gulani et al in view of Kustra et al further in view of Brauer et al teaches wherein the MRF data comprises singular values produced by a singular value decomposition of an MRF time course at each pixel of the reconstructed MRI data (Gulani et al: A Singular Value Decomposition (SVD) based processing method may be implemented for efficient image reconstruction and template matching. In this instance, the SVD algorithm is applied to the MRF dictionary in the time domain to produce a low-rank approximation (paragraph 0083).

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gulani et al US 20180231626 in view of Kustra et al US 20210259775 in view of Brauer et al US 20220318986 further in view of Domracheva et al US 20190148005.
Regarding claim 2, Gulani et al in view of Kustra et al further in view of Brauer et al teach all of the limitations of claim 1.

Gulani et al in view of Kustra et al further in view of Brauer et al fails to teach wherein to train the machine learning system, for a given pixel proximate to bulk-pixel labels of the at least one tissue class, the machine learning system analyzes the given pixel using the at least one tissue class of the proximate bulk-pixel label as ground truth.

Domracheva et al teaches wherein to train the machine learning system, for a given pixel proximate to bulk-pixel labels of the at least one tissue class, the machine learning system analyzes the given pixel using the at least one tissue class of the proximate bulk-pixel label as ground truth (a source image 601 from a first dataset comprising the plurality of CBCT dental images, shown in FIG. 6A, may be manually segmented. Through manual segmentation, the user is able to manually assign labels to each pixel, a process creating ground truth data for training semantic segmentation protocols (paragraph 0076).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al further in view of Brauer et al to include: wherein to train the machine learning system, for a given pixel proximate to bulk-pixel labels of the at least one tissue class, the machine learning system analyzes the given pixel using the at least one tissue class of the proximate bulk-pixel label as ground truth. The reason of doing so would be to accurately label different types and states of tissue disease in an image.

Claim(s) 4 and 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gulani et al US 20180231626 in view of Kustra et al US 20210259775 in view of Brauer et al US 20220318986 further in view of Hermle et al US 20230177366.

Regarding claim 4.
Gulani et al in view of Kustra et al further in view of Brauer et al teach all of the limitations of claim 1 Gulani et al in view of Kustra et al further in view of Brauer et al fails to teach further comprising randomly assigning each of the group of patients to one of k groups of which k - 1 groups are defined as a training data set used to train the machine learning system. Hermle et al teaches further comprising randomly assigning each of the group of patients to one of k groups of which k - 1 groups are defined as a training data set used to train the machine learning system (Users are randomly selected from this set for inclusion in either a first subgroup or a second subgroup (the group of patients to one of k groups of which k - 1 groups). (paragraph 0029). the observations from the different subgroups may be used as training data when training a machine-learned model to forecast content performance in an online network. This essentially builds the incrementality calculation into the machine-learned model (paragraph 0032) Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al further in view of Brauer et al to include: further comprising randomly assigning each of the group of patients to one of k groups of which k - 1 groups are defined as a training data set used to train the machine learning system. The reason of doing so would be to provide training data to produce the best and accurate results. 
Regarding claim 5, Gulani et al in view of Kustra et al further in view of Brauer et al further in view of Hermle et al teaches further comprising repeating (d) a plurality of times with different groups of patients to generate the automated system for determining disease states and conditions using the pixel-based machine-learning system (Kustra et al: thus, the therapy planning device does not only take the geometry into account, but the specific tissue parameters of the patient are used to compute a personalized treatment plan (determining disease states and condition). The measured therapy outcomes are further fed into the learning system, i.e. into a further training of the machine learning module (repeating) (paragraph 0066).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al further in view of Brauer et al to include: further comprising repeating (d) a plurality of times with different groups of patients to generate the automated system for determining disease states and conditions using the pixel-based machine-learning system. The reason of doing so would be to accurately label different types and states of tissue disease in an image.

Claim(s) 6-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gulani et al US 20180231626 in view of Kustra et al US 20210259775 in view of Brauer et al US 20220318986 further in view of Hermle et al US 20230177366 further in view of Samavati et al US 20210407078.

Regarding claim 6, Gulani et al US 20160282430 in view of Kustra et al further in view of Brauer et al further in view of Hermle et al teaches all of the limitations of claims 1 and 4.

Gulani et al US 20160282430 in view of Kustra et al further in view of Brauer et al further in view of Hermle et al fails to teach generating a probability map for the tissue class, wherein the probability map includes tissue classes assigned to pixels outside the bulk pixels in the annotated images.
Samavati et al teaches generating a probability map for the tissue class, wherein the probability map includes tissue classes assigned to pixels outside the bulk pixels in the annotated images (the imager 310 can be placed in an operation room to acquire OCT images of the tissue 305 that is excised from a patient, and the system 300 is configured to determine a probability of abnormality associated with the OCT images and to generate annotated images (paragraph 0037) the annotated image includes a map of binary probabilities, i.e., each pixel either has a probability of 0 or 1, indicative of a probability of abnormality (paragraph 0038) Therefore, it would have been obvious to one of ordinary skill in the art to modify Gulani et al in view of Kustra et al further in view of Brauer et al further in view of Hermle et al to include: generating a probability map for the tissue class, wherein the probability map includes tissue classes assigned to pixels outside the bulk pixels in the annotated images. The reason of doing so would be to accurately label different types and states of tissue disease in an image. 
Regarding claim 7, Gulani et al in view of Kustra et al further in view of Brauer et al further in view of Hermle et al further in view of Samavati et al teach wherein using the patch-based approach to perform the pixel-based analysis comprises training the machine learning using a 1 pixel x 1 pixel x 1 pixel patch of the MRF data (Samavati et al: Since the second operation extracts features by considering only non-overlapping 2×2×2 volume patches, the size of the resulting feature maps is halved (paragraph 0022). Note: by halving the feature map, the patch size is reduced to 1x1x1.

Regarding claim 8, Gulani et al in view of Kustra et al further in view of Brauer et al further in view of Domracheva et al further in view of Hermle et al further in view of Samavati et al teaches wherein using the patch-based approach to perform the pixel-based analysis comprises training the machine learning using a patch that is larger than 1 pixel x 1 pixel x 1 pixel to account for spatial correlation in the MRF data (Samavati et al: the second operation extracts features by considering only non-overlapping 2×2×2 volume patches, the size of the resulting feature maps is halved. This strategy can serve a similar purpose as pooling layers, which are usually inserted in-between successive convolution layers to progressively reduce the spatial size of the representation to reduce the amount of parameters and computation in a network (paragraph 0022).

Conclusion

Any inquiry concerning this communication should be directed to Michael Burleson whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday thru Friday from 8:00 a.m. – 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Michael Burleson
Patent Examiner, Art Unit 2681
April 3, 2026
/MICHAEL BURLESON/

/AKWASI M SARPONG/
SPE, Art Unit 2681
4/6/2026
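Claims 3 and 19 turn on SVD compression of MRF time-course data, which the Office Action maps to Gulani's low-rank dictionary processing. A minimal NumPy sketch of the general technique (illustrative only; the sizes and random data are assumptions, and this is not the cited reference's implementation):

```python
import numpy as np

# Illustrative SVD compression of an MRF dictionary in the time domain:
# each dictionary entry is a simulated time course, and truncating to the
# top-k right singular vectors gives the low-rank approximation used for
# fast template matching. All sizes and the random data are assumptions.
rng = np.random.default_rng(0)
n_entries, n_timepoints, k = 200, 500, 10

dictionary = rng.standard_normal((n_entries, n_timepoints))
U, s, Vt = np.linalg.svd(dictionary, full_matrices=False)

# Project each time course onto the first k singular vectors: 500 time
# points per entry are reduced to k singular-value coefficients.
compressed = dictionary @ Vt[:k].T        # shape (200, 10)
reconstructed = compressed @ Vt[:k]       # rank-k approximation, (200, 500)

print(compressed.shape, reconstructed.shape)
```

Matching an acquired signal against the compressed dictionary then works in the k-dimensional coefficient space instead of the full time domain, which is the efficiency gain the cited paragraphs 0083-0084 describe.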

Prosecution Timeline

May 10, 2023: Application Filed
Apr 04, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603965
PRINTING DEVICE SETTING EXPANDED REGION AND GENERATING PATCH CHART PRINT DATA BASED ON PIXELS IN EXPANDED REGION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585826
DOCUMENT AUTHENTICATION USING ELECTROMAGNETIC SOURCES AND SENSORS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566125
SEQUENCER FOCUS QUALITY METRICS AND FOCUS TRACKING FOR PERIODICALLY PATTERNED SURFACES
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561548
SYSTEM SIMULATING A DECISIONAL PROCESS IN A MAMMAL BRAIN ABOUT MOTIONS OF A VISUALLY OBSERVED BODY
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12562549
LIGHT EMITTING ELEMENT, LIGHT SOURCE DEVICE, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY, AND BIOLOGICAL INFORMATION ACQUISITION APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 68% (-6.1%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 489 resolved cases by this examiner. Grant probability derived from career allow rate.
