Prosecution Insights
Last updated: April 19, 2026
Application No. 17/524,591

Systems and Methods for Uncertainty Quantification in Radiogenomics

Final Rejection — §103, §112

Filed: Nov 11, 2021
Examiner: RUDOLPH, VINCENT M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Mayo Foundation for Medical Education and Research
OA Round: 4 (Final)

Grant Probability: 44% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 5y 1m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 44% (114 granted / 260 resolved; -18.2% vs TC avg)
Interview Lift: strong, +42.0% for resolved cases with interview
Typical Timeline: 5y 1m avg prosecution (37 currently pending)
Career History: 297 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 260 resolved cases
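The "vs TC avg" deltas above are each measured against the same implied Tech Center baseline. A minimal Python sketch (illustrative arithmetic only, not the tool's actual computation) recovering that baseline from the figures shown:

```python
# Per-statute allowance rates and "vs TC avg" deltas from the dashboard above.
examiner_rates = {"101": 12.4, "103": 56.5, "102": 17.5, "112": 10.9}
deltas = {"101": -27.6, "103": +16.5, "102": -22.5, "112": -29.1}

# Baseline implied by each pair: examiner rate minus delta.
tc_avg = {s: round(examiner_rates[s] - deltas[s], 1) for s in examiner_rates}
# Every statute resolves to the same estimated Tech Center baseline, 40.0%.
```

Consistent deltas against a single 40.0% baseline suggest the tool compares all four statutes to one aggregate TC figure rather than per-statute averages.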

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 11/14/2025 has been entered and made of record. Claims 1-19 remain pending in this application. Applicant's amendments to claim 1 have overcome the rejection under 35 U.S.C. § 112(b) previously set forth in the Non-Final Office Action dated 7/14/2025, which is thus withdrawn. However, the rejection under § 112(a) is maintained, as addressed below.

Response to Arguments

Applicant's arguments filed 11/14/2025 with respect to the rejection of claim 1 under 35 U.S.C. § 112 have been fully considered but are not persuasive. When assessing support for new matter, the question is not whether "a person having ordinary skill in the art would plainly understand that machine learning models are typically trained on a different set of data than is used during inference", but whether there is actual support in the specification that can justify the amendments made. In this case, there is nothing in the originally filed specification about the medical image data being "new"; instead, as seen in Paragraphs [0098]-[00101] and Figure 3, everything is tied to "medical image data". Based on those citations, a person of ordinary skill would understand the disclosure to be tied only to "medical image data", and nothing about "new" medical image data can be seen in the Figure or the specification. As such, the rejection is maintained.

Applicant additionally argues that there is no motivation to combine Jungo and Khalvati because they are used for different tasks. The examiner respectfully disagrees. Regardless of how they are used, both references relate to gathering and training on medical image data.
Thus, one of ordinary skill in the art would be compelled to combine these two references in order to utilize reproducible features in medical image machine learning tasks. Because of this, the prior art does in fact teach each limitation, as fully disclosed in the rejection below. Based on these facts, this action is made FINAL.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding independent claim 1 as amended, Examiner has searched the specification and has not found evidence that the application supports claiming "new medical image data". Paragraph [0089] of Applicant's filed specification appears to assemble the training data from the medical image data, and [0098]-[00101] disclose accessing and inputting the medical data; however, there is no mention that the medical image data used is "new" relative to the current medical image data. Thus, there is no support in the specification for the cited limitation. For the purposes of examination, Examiner assumes the "new medical image data" to be the same medical image data used for both training and inference.

Claims 2-19 are rejected by nature of their dependencies on claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jungo et al ("Towards Uncertainty-Assisted Brain Tumor Segmentation and Survival Prediction," 2018) in view of Khalvati et al ("Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models," 2015).
Regarding claim 1, Jungo teaches a method for generating biological marker prediction data from medical images, the method comprising:

(a) accessing a trained machine learning model with a computer system, wherein the machine learning model has been trained on training data in order to generate biological marker prediction data and corresponding predictive uncertainty data from medical image data, wherein the training data include extracted features generated from the medical image data as input variables and wherein the machine learning model learns to measure predictive uncertainties from the extracted features (Jungo > Section 2.1 > FRRN Architecture > para 2: a machine learning network is trained to output a tumor compartment segmentation and label prediction from extracted features of a medical image input (i.e., biological marker prediction data; see para [0079] of Applicant's originally filed specification); Section 2.1 > Uncertainty Estimation > para 2: predictions are also produced with corresponding uncertainty estimations; see Figure 5);

(b) accessing new medical image data with the computer system, wherein the new medical image data comprise medical images of a subject obtained using a medical imaging system (Jungo > Section 3.1 > para 3: the trained machine learning model is validated using medical images of 20 training subjects from a validation dataset);

(c) inputting the new medical image data to the trained machine learning model, generating output as biological marker prediction data and corresponding predictive uncertainty data, wherein the biological marker prediction data comprise biological marker predictions and corresponding predictive uncertainty data comprise quantitative measures of an uncertainty of each biological marker prediction in the biological marker prediction data (Jungo > Figure 5 and Section 3.1 > last para: during validation, the trained model generates tumor region predictions along with corresponding uncertainty maps); and
(d) displaying the biological marker prediction data and predictive uncertainty data to a user (Jungo > Figure 5: generated tumor region predictions with corresponding uncertainty maps).

However, Jungo does not specifically teach, whereas Khalvati teaches, wherein the training data include texture features generated from medical image data as input variables and wherein the machine learning model learns to measure predictive uncertainties from the texture features (Khalvati > Texture feature model (p. 5): in identifying cancer in medical image data for radiomics, texture feature models are trained using extracted texture features (see p. 5 > Feature extraction), and performance metrics are measured for the texture feature models).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jungo using Khalvati's teachings by incorporating texture features of medical image data to learn the predictive uncertainties of Jungo, in order to utilize reproducible features in medical image machine learning tasks.

Regarding claim 18, Jungo, as modified by Khalvati, further teaches the method of claim 1, wherein the biological marker prediction data comprise tissue characteristic prediction data that indicate predictions of tissue characteristics in the subject (Jungo > Section 2.1 > FRRN Architecture > para 2: machine learning model segments and labels distinct tumor compartments of an input image, a characteristic of biological tissue).
Regarding claim 19, Jungo, as modified by Khalvati, further teaches the method of claim 18, wherein the tissue characteristics include at least one of molecular pathways, quantity of tumor cell density, location of tumor cell density, and non-tumoral cells type classification (Jungo > Section 2.1 > FRRN Architecture > para 2: machine learning model segments and labels distinct tumor compartments (i.e., location of tumor cell density) of an input image).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Jungo, as modified by Khalvati, in view of Wang et al ("Transductive Gaussian Process for Image Denoising," 2014, provided by Applicant's Information Disclosure Statement - IDS).

Regarding claim 2, neither Jungo nor Khalvati specifically teaches, whereas Wang teaches, the method of claim 1, wherein the machine learning model is a Gaussian process model (Wang > Section 3.1: in an image analysis prediction problem, a Gaussian Process model is used to learn a probability distribution between input variables and desired output variables).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Jungo (as modified by Khalvati) using Wang's teaching by incorporating a Gaussian process model as the machine learning model of Jungo (as modified by Khalvati), in order to non-linearly model the relationship between complex inputs and outputs such as images.

Regarding claim 3, Jungo, as modified by Khalvati and Wang, further teaches the method of claim 2, wherein the Gaussian process model is a transductive learning Gaussian process model (Wang > Section 3.2: transductive regression is used in the Gaussian Process model in order to utilize self-similarity information).

Claims 4-9 and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Jungo, as modified by Khalvati, in view of Gaw et al (WO 2019/100032 A2).
Regarding claim 4, neither Jungo nor Khalvati specifically teaches, whereas Gaw teaches, the method of claim 1, wherein the machine learning model is a knowledge-infused global-local data fusion model (Gaw > [0008-0010]: for tumor segmentation, a hybrid machine learning model is used which combines traditional machine learning methods with a mechanistic model that integrates prior knowledge, and is also trained using labeled biopsy data (local) and unlabeled MRI data (global); this reflects the use of "knowledge-infused global-local data fusion model" as disclosed starting at [0039] of Applicant's published specification).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jungo (as modified by Khalvati) using Gaw's teachings by incorporating prior knowledge and labeled/unlabeled data into the machine learning model of Jungo (as modified by Khalvati), in order to increase prediction accuracy using diverse sources of training data (see Gaw > [0008]).

Regarding claim 5, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 4, wherein the knowledge-infused global-local data fusion model is trained on the training data, the training data further comprising labeled biopsy samples as local data and medical imaging data as unlabeled global data (Gaw > [0010] and [0033]: the hybrid machine learning model is trained using image-localized labeled biopsy data and unlabeled medical data on which biopsy has not been performed).

Regarding claim 6, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 4, wherein the knowledge-infused global-local data fusion model integrates output data generated by a mechanistic model (Gaw > [0019]: hybrid model includes a combination of machine learning and a mechanistic model).
Regarding claim 7, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 6, wherein the mechanistic model is a proliferation-invasion model (Gaw > [0023-0025]: mechanistic model is a proliferation-invasion (PI) model).

Regarding claim 8, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 7, wherein the output data generated by the mechanistic model is a proliferation-invasion parameter map (Gaw > [0023]: PI model produces a tumor cell density map for a subject's image).

Regarding claim 9, neither Jungo nor Khalvati specifically teaches, whereas Gaw teaches, the method of claim 1, wherein the machine learning model is trained on the training data, the training data further comprising image-localized tissue biopsy samples (Gaw > [0053] and [0068-0069]: training data for training the machine learning model comprises image-localized biopsies).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jungo (as modified by Khalvati) using Gaw's teachings by using image-localized medical imaging for the training data of Jungo (as modified by Khalvati), in order to provide accurate locations of prediction data.

Regarding claim 12, neither Jungo nor Khalvati specifically teaches, whereas Gaw teaches, the method of claim 1, wherein the new medical image data comprise magnetic resonance image data (Gaw > [0009] and [0069]: image input for prediction includes magnetic resonance images).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jungo (as modified by Khalvati) using Gaw's teachings by using MRI images as the input for the biological marker prediction of Jungo (as modified by Khalvati), in order to analyze biological features in a common medical imaging modality.
Regarding claim 13, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 12, wherein the magnetic resonance image data comprise multiparametric magnetic resonance image data containing magnetic resonance images representative of a plurality of different magnetic resonance image contrast types (Gaw > [0069]: image input includes multiparametric magnetic resonance images).

Regarding claim 14, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 13, wherein the plurality of different magnetic resonance image contrast types comprise at least two of T1-weighting, T1-weighting with a contrast agent, T2-weighting, diffusion weighting, and perfusion weighting (Gaw > Figure 1 and [0069]: multiparametric magnetic resonance contrast types include T1-weighted images and T2-weighted images; [0070]: additionally, diffusion tensor imaging and perfusion imaging can be utilized).

Regarding claim 15, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 12, wherein the magnetic resonance image data comprise both magnetic resonance images and parametric maps representative of quantitative parameters generated using the magnetic resonance images (Gaw > [0069]: acquired image data include magnetic resonance images and parametric maps generated from magnetic resonance images).

Regarding claim 16, Jungo, as modified by Khalvati and Gaw, further teaches the method of claim 15, wherein the parametric maps comprise at least one of T1 maps, T2 maps, apparent diffusion coefficient maps, mean diffusivity maps, fractional anisotropy maps, cerebral blood volume maps, cerebral blood flow maps, and mean transit time maps (Gaw > [0069]: parametric maps generated from magnetic resonance images include fractional anisotropy maps and relative cerebral blood volume maps).
Regarding claim 17, neither Jungo nor Khalvati specifically teaches, whereas Gaw teaches, the method of claim 1, wherein the biological marker prediction data comprise genetic prediction data that indicate predictions of genetic features in the subject (Gaw > [0026-0027]: biological feature predictions are mapped based on a mechanistic model, which includes gene expression models using genomic characterization as biological feature data).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jungo (as modified by Khalvati) using Gaw's teachings by incorporating indication of genetic features as part of the biological marker prediction of Jungo (as modified by Khalvati), in order to provide information that optimizes the dose distribution of radiotherapy.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Jungo, as modified by Khalvati and Gaw, in view of Wang.

Regarding claim 10, neither Jungo, Khalvati, nor Gaw specifically teaches, whereas Wang teaches, the method of claim 9, wherein the machine learning model is trained on the training data using transductive learning, wherein the training data further comprise both labeled samples and unlabeled samples (Wang > Sections 3.1 and 3.2: in an image prediction training problem, a transductive Gaussian Process learns predictions using both training data (input data with associated output data) and unknown observations (testing data for which predictions are made)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Jungo (as modified by Khalvati and Gaw) using Wang's teachings by incorporating transductive learning for training the machine learning model of Jungo (as modified by Khalvati and Gaw), in order to utilize unlabeled image data based on their similarity to labeled training samples (see Wang > Section 3.2 > last para).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jungo, as modified by Khalvati, Gaw, and Wang, in view of van Engelen et al ("A survey on semi-supervised learning," 2019).

Regarding claim 11, neither Jungo, Khalvati, Gaw, nor Wang specifically teaches, whereas van Engelen teaches, the method of claim 10, wherein the machine learning model is trained on the training data using transductive learning by:

generating predictions for the unlabeled samples by applying the machine learning model to the labeled samples (van Engelen > Section 7 > paras 2-3: using labeled data, transductive methods generate predictions for the unlabeled data);

generating a combined data set that combines the predictions for the unlabeled samples with the training data (van Engelen > Section 7.5 > para 3: the pseudo-labels for the unlabeled data are treated as true labels, and a training dataset is formed using the combined labeled and unlabeled data); and

generating a predictive distribution for each new test sample using the combined data set (van Engelen > Section 7.5 > para 3: the combined dataset is used to train a machine learning model to make predictions for new unseen data points).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Jungo (as modified by Khalvati, Gaw, and Wang) using van Engelen's teachings by incorporating a combined dataset comprising predictions of unlabeled samples for training the machine learning model of Jungo (as modified by Khalvati, Gaw, and Wang), in order to utilize known labels to artificially increase the size of a training data set.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Rudolph, whose telephone number is (571) 272-8243. The examiner can normally be reached M-F 7:30 AM - 3:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Nov 11, 2021: Application Filed
Jun 15, 2024: Non-Final Rejection — §103, §112
Nov 21, 2024: Response Filed
Feb 21, 2025: Final Rejection — §103, §112
May 27, 2025: Request for Continued Examination
Jun 02, 2025: Response after Non-Final Action
Jul 10, 2025: Non-Final Rejection — §103, §112
Nov 14, 2025: Response Filed
Feb 11, 2026: Final Rejection — §103, §112 (current)
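How far along prosecution already is can be read off the timeline directly. A short Python sketch (dates taken from the entries above; the month arithmetic is an illustrative simplification that ignores day-of-month):

```python
from datetime import date

# Docket dates from the prosecution timeline above.
filed = date(2021, 11, 11)          # Application Filed
current_final = date(2026, 2, 11)   # Final Rejection (current)

# Whole months elapsed between filing and the current final rejection.
months = (current_final.year - filed.year) * 12 + (current_final.month - filed.month)
years, rem = divmod(months, 12)     # 4 years, 3 months into prosecution
```

At 4y 3m in, the application is already approaching this examiner's 5y 1m median time to grant.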

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12525104 — SURVEILLANCE SYSTEM AND SURVEILLANCE DEVICE (granted Jan 13, 2026; 2y 5m to grant)
Patent 12492533 — SYSTEM AND METHOD OF CONTROLLING CONSTRUCTION MACHINERY (granted Dec 09, 2025; 2y 5m to grant)
Patent 12430871 — OBJECT ASSOCIATION METHOD AND APPARATUS AND ELECTRONIC DEVICE (granted Sep 30, 2025; 2y 5m to grant)
Patent 12333853 — FACE PARSING METHOD AND RELATED DEVICES (granted Jun 17, 2025; 2y 5m to grant)
Patent 12321856 — METHOD, COMPUTER PROGRAM AND DEVICE FOR EVALUATING THE ROBUSTNESS OF A NEURAL NETWORK AGAINST IMAGE DISTURBANCES (granted Jun 03, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 44%
With Interview: 86% (+42.0%)
Median Time to Grant: 5y 1m
PTA Risk: High

Based on 260 resolved cases by this examiner. Grant probability derived from career allow rate.
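The headline projection figures follow from the career counts stated on this page. A minimal Python sketch (illustrative arithmetic; the assumption that the interview lift adds directly onto the base rate is ours, not a documented detail of the tool's model):

```python
# Career counts reported for this examiner.
granted, resolved = 114, 260

# Base grant probability = career allow rate, rounded to whole percent.
career_allow_rate = round(100 * granted / resolved)   # 44 (%)

# Reported interview lift, assumed here to be additive in percentage points.
interview_lift = 42.0
with_interview = career_allow_rate + interview_lift   # 86.0 (%)
```

The 114/260 ratio is 43.8%, which the dashboard rounds to the 44% shown in both the header and the projections.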
