Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,813

SYSTEMS, METHODS, AND COMPUTER READABLE MEDIA FOR PARAMETRIC FDG PET QUANTIFICATION, SEGMENTATION AND CLASSIFICATION OF ABNORMALITIES

Final Rejection §103
Filed: Feb 19, 2024
Examiner: MEHL, PATRICK M
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: UNIVERSITY OF VIRGINIA PATENT FOUNDATION
OA Round: 2 (Final)
Grant Probability: 48% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 10m
Grant Probability With Interview: 72%

Examiner Intelligence

Career Allow Rate: 48% (178 granted / 375 resolved; -22.5% vs TC avg)
Interview Lift: +24.8% (strong; allow rate for resolved cases with interview vs without)
Avg Prosecution: 3y 10m (typical timeline)
Currently Pending: 10
Total Applications: 385 (career history, across all art units)

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 26.0% (-14.0% vs TC avg)

TC averages are estimates. Based on career data from 375 resolved cases.
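The headline figures above can be cross-checked from the raw counts. A quick sanity check, assuming the dashboard rounds to whole percents and that the reported deltas are simple additive gaps:

```python
# Sanity-check the dashboard's headline examiner statistics.
granted, resolved = 178, 375

allow_rate = granted / resolved               # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")          # 47.5%, displayed as 48%

tc_gap = -0.225                               # reported gap vs Tech Center average
print(f"Implied TC average: {allow_rate - tc_gap:.1%}")

interview_lift = 0.248                        # reported lift for interviewed cases
print(f"Implied allow rate with interview: {allow_rate + interview_lift:.1%}")
```

The 48% headline and 72% with-interview figures are consistent with the underlying 178/375 count to within rounding.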

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Applicant's amendments and remarks, filed 10/14/2025, are acknowledged. Applicant's arguments have been fully considered. The following rejections and/or objections are either reiterated or newly applied; they constitute the complete set presently being applied to the instant application. Rejections and/or objections not reiterated from the previous Office actions are hereby withdrawn.

Status of Claims

Claims 1-20 are currently under examination.

Priority

The instant application is a national stage entry under §371 of international application PCT/US2022/040621, filed 08/17/2022. Applicant's claim for the benefit of priority under 35 U.S.C. 119(e) to provisional application 63/233,919, filed 08/17/2021, is acknowledged.

Withdrawn Objections/Rejections

The objection to claim 8 is withdrawn in view of applicant's arguments and/or amendments. The claim rejections under 35 U.S.C. 112(b) or second paragraph are withdrawn in view of applicant's arguments and/or amendments.

Response to Arguments

Applicant's arguments filed 10/14/2025 regarding the claim rejections under 35 U.S.C. 103 have been fully considered but are not persuasive for the following reasons. Applicant argues that Brynolfsson does not teach that "the imaging that is performed is FDG PET" and does not "utilize the images to generate kinetic rate parameters for each of the abnormality volumes and then utilize the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes," as in claim 1.
In response, the examiner first notes that FDG is mentioned only in the preamble and that no positive limitation is directed to the use of FDG as the substrate for the PET imaging; as seen in claim 8, the positive limitation is directed to the use of "an administered radioactive tracer." The examiner therefore notes that the preamble "for performing FDG positron emission tomography (PET) quantification, segmentation, and classification of abnormalities" is considered an intended use and is not given weight in the absence of a positive limitation directed to the use of FDG. Additionally, the examiner notes that FDG, at the time of the invention, was considered a generic, commonly used, and conventional radiotracer for PET, as described for clarification by Johns Hopkins Medicine before the time of the instant invention (https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/positron-emission-tomography-pet, Pub. date 06/13/2021; see p.2, 5th ¶: "FDG is widely used in PET scanning" for imaging diagnostics, while other tracers may be used), and the examiner cited Wang to support this commonly known and conventional use of FDG for PET imaging diagnosis. Therefore the examiner finds the argument regarding FDG not persuasive.
Regarding the second part of the arguments, the examiner has considered that Brynolfsson teaches the determination of "hot spots" from dynamic imaging of the glucose processed by the tissues, with these "hot spots" defined as presenting a higher or "increased accumulation" of FDG relative to the surrounding regions ([0009]-[0010]). Brynolfsson further uses a machine learning module to classify the hotspots (abstract, [0036]-[0037]), wherein, as clarification, the image collection is performed as a video stream ([0118]), i.e., over several scanning cycles, providing measures of radiopharmaceutical uptake within the different regions ([0010]), with the measure clarified as the "standard uptake value," a kinetic rate parameter, in [0014]. For further clarification, the machine learning uses an ANN or CNN ([0032]), which are commonly and routinely known to be used for classification and regression in image analysis, as clarified by Ren et al. (2019 Proc. of SPIE 11053: article 1105331, 10 pages) in his review of the use of CNNs for classification and regression analysis in image processing for feature extraction as conventional processing (p.4, 3rd ¶). Therefore the examiner considers that Brynolfsson teaches the use of a machine learning module using a CNN or ANN for classification and regression to extract and classify the hotspots from the medical images. Brynolfsson is also found to teach the classification of the hotspots as lesions and tumors (abstract, [0010], [0036]-[0037]), with further clarification of the classification in [0021]-[0024]. Therefore the examiner considers that the combination of Brynolfsson and Wang teaches the limitation "utilizing the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes," as claimed, and accordingly finds the second part of the argument not persuasive.
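As an illustrative aside on the claim limitation at issue, the sketch below shows how per-hotspot kinetic rate parameters could be used to train a logistic regression classifier of abnormality volumes. The feature names (an uptake rate Ki and a peak SUV) and all data are synthetic placeholders, not taken from the record or the cited references:

```python
import numpy as np

# Synthetic per-hotspot kinetic features: benign vs lesion populations.
rng = np.random.default_rng(0)
n = 200
ki  = np.concatenate([rng.normal(0.01, 0.003, n), rng.normal(0.03, 0.005, n)])
suv = np.concatenate([rng.normal(2.0, 0.5, n),   rng.normal(6.0, 1.0, n)])
X = np.column_stack([ki, suv])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = benign, 1 = lesion

# Standardize features, then fit logistic regression by gradient descent.
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)           # gradient of log-loss
    b -= 0.1 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the classifier converges to near-perfect training accuracy; the point is only to make concrete what "train a logistic regression engine on kinetic rate parameters" denotes.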
Applicant further appears to argue that Wang fails to disclose that FDG is commonly used for PET imaging. In response, the examiner maintains the position presented above in his response regarding the Brynolfsson argument on the FDG disclosure, and with the same response considers the argument regarding Wang not persuasive. Applicant appears to argue that Gray does not teach that the generated kinetic rate parameters are for a parametric PET map used to train a machine learning module to perform logistic regression to predict a target site condition assessment based on a classification of the abnormality volumes, and that the Office did not provide a sufficient rationale for combining Brynolfsson and Gray. In response, as discussed above, Brynolfsson has been relied upon for teaching the dynamic uptake measures using the video streams, teaching utilizing the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes, with the classification of the hotspots determining the targets as lesions at risk or tumors using the same or additional machine learning as used for the determination of the hotspot regions. The examiner has relied upon Gray for teaching the combined PET/MRI for the overlapping and determination, by masking, of the regions of interest for later classification as lesions or tumors, wherein the anatomical analysis with the overlapping is an additional step for identifying the hotspots of interest and directing them to the later classification already taught by Brynolfsson, as discussed above.
Therefore Gray and Brynolfsson are from the same field of endeavor, and the teaching of Gray is found to provide an additional step yielding a more accurate selection of the hotspots or regions of interest relative to the anatomical regions of the patient's body, in order to predict their nature combined with the functional/physiological status of these "hotspot" regions for better classification and monitoring, as suggested by Gray and as reported in the Office Action. Therefore the examiner considers the teachings and rationale as presented proper and finds the argument not persuasive. Applicant appears to argue, for the dependent claims, that the additional references of record are not analogous prior art since they are directed to application to different parts of the subject's body and would not provide a motivation to combine the references. In response, the examiner has identified the field of endeavor for each of the additional references as directed to the same field of PET imaging of tissue for diagnosis and image processing, wherein the technique could be applied to any region of the patient's body since the same imaging devices could be used for the claimed purposes. The mere difference in the regional tissues of the patient would not change the steps for image processing and analysis, and therefore would not change the motivation for combining the teachings of the different references of record, since each of the teachings provides an additional step or specificity and does not re-teach the original invention as presented. Therefore the examiner considers the generic argument as presented by the Applicant not persuasive. In view of the clarifications presently provided in support of the previously filed Office Action, the examiner considers the Applicant's arguments not persuasive, and the claim rejections under 35 U.S.C. 103 are found proper.
Therefore the examiner maintains these previous claim rejections, with modifications to address the amended limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 2, 4, 6-9, 11, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Wang et al. (2017 Front. Oncol. 7: article 8, 8 pages; Pub.Date 2017), further in view of Gray et al. (2012 NeuroImage 60:221–229; Pub.Date 2012).
Regarding independent claim 1, Brynolfsson teaches a method (Title and abstract) for performing FDG positron emission tomography (PET) quantification, segmentation, and classification of abnormalities (abstract: system and method using PET imaging as the functional imaging device; [0010]: detecting lesions and measuring radiopharmaceutical uptake within selected hotspots representing lesions, for quantification; abstract: performing segmentation and classification for characterizing the lesion or tumor with estimation of disease severity and risk; wherein one of ordinary skill in the art would recognize that FDG is commonly used for PET imaging, as taught by Wang when PET imaging is combined with MRI imaging (Title, abstract, p.2 col.2, ¶ Pre-treatment Imaging)), the method comprising: receiving a plurality of magnetic resonance (MR) images corresponding to a target site of a subject ([0034], [0075], [0144] with Fig. 1B: receiving/accessing a 3D anatomical image of a subject, targeting cancerous lesions within the subject, using an anatomical imaging modality such as magnetic resonance imaging, wherein one of ordinary skill in the art would recognize that the image would refer to a time series of image frames or a plurality of conventional images, since the term "image" is interpreted as a video stream according to Brynolfsson [0118]); generating three dimensional (3D) area masks of abnormality volumes from the plurality of MR images ([0052]: "the instructions cause the processor to automatically segment the 3D anatomical image, thereby creating the 3D segmentation map"; [0051]: the invention processes the 3D functional (PET) and anatomical (MRI) images to provide "a segmentation map that identifies, within the 3D functional image and/or the 3D anatomical image, one or more volumes of interest (VOIs), each VOI corresponding to a particular target tissue region and/or a particular anatomical region"); segmenting the 3D area masks into one or more individual seed images for each of the abnormality volumes (same citations, [0051]-[0052], as for the generating step above); [...overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes...]; and utilizing the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes ([0036]-[0037]: using a machine learning module (hotspot classification module) to determine the lesion classification for each hotspot and select a subset of one or more hotspots having a high likelihood of corresponding to cancerous lesions, based at least on the intensities of the hotspots and the segmentation map, reading on the accumulation rate of the radiopharmaceutical compounds and the anatomical masks discussed above). Brynolfsson with Wang does not specifically teach overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes, as in claim 1.
However, Gray teaches within the same field of endeavor of multimodal FDG-PET and MRI imaging technology (Title, abstract, and p.222 col.2, last ¶) the co-registration between MRI and FDG-PET images in native MRI image space, creating MRI-space FDG-PET images (p.223 col.1, 2nd ¶ and col.2, 2nd ¶), wherein each MRI-space FDG-PET image is overlaid with the corresponding masked anatomical segmentation to define different regions of interest in which the cerebral metabolic rate of glucose is assessed from the time change of the glucose signal (p.224 col.1, last ¶ to col.2, 1st ¶, Regional feature extraction and classification), leading to the classification of the regions of interest (p.224 col.2, 2nd-3rd ¶). Gray therefore teaches the concept of overlaying anatomical masks from MRI imaging onto co-registered MRI-space FDG-PET images to select masked regions for which the change in the metabolic rate of glucose is assessed for later classification of these regions, therefore teaching overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes, as claimed. Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the method of Brynolfsson, as modified by Wang, such that the method further comprises overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes, since one of ordinary skill in the art would recognize that overlaying anatomical masks of regions of interest from MRI of the subject onto FDG-PET images co-registered with the MRI images of the patient, to assess the metabolic rate of glucose within these regions, was known in the art as taught by Gray.
One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Gray, Brynolfsson, and Wang all teach the basis of combined MRI/PET imaging technology for combining functional and anatomical imaging for improving diagnosis. The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions, as suggested by Gray (p.227 col.2, 2nd and 4th ¶). Regarding the dependent claims 2, 4, 6, and 7, all the elements of these claims are instantly disclosed or fully envisioned by the combination of Brynolfsson, Wang, and Gray. Regarding claim 2, Brynolfsson does not teach that the MR images are T1-weighted, but Gray teaches the MR images are T1-weighted (p.223 col.1, last ¶), as commonly known for acquiring anatomical/structural images. Regarding claim 4, Brynolfsson also teaches imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the target site condition assessment includes a tumor progression (TPR) assessment or a treatment related necrosis (TRN) assessment, as claimed. Regarding claim 6, Brynolfsson also teaches the use of a machine learning module for classifying the targets (abstract, AI-based detection and classification; [0009], [0022]) and imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the logistic regression engine is subjected to supervised machine learning (ML) to classify the abnormality volumes, as claimed.
Regarding claim 7, as discussed above, Gray teaches the co-registration between MRI and FDG-PET images in native MRI image space, creating MRI-space FDG-PET images (p.223 col.1, 2nd ¶ and col.2, 2nd ¶), wherein each MRI-space FDG-PET image is overlaid with the corresponding masked anatomical segmentation to define different regions of interest in which the cerebral metabolic rate of glucose is assessed from the time change of the glucose signal (p.224 col.1, last ¶ to col.2, 1st ¶, Regional feature extraction and classification), leading to the classification of the regions of interest (p.224 col.2, 2nd-3rd ¶). Gray therefore teaches the concept of overlaying anatomical masks from MRI imaging onto co-registered MRI-space FDG-PET images to select masked regions for which the change in the metabolic rate of glucose is assessed for later classification of these regions, therefore teaching, prior to classification, receiving the co-registered parametric PET maps corresponding to the target site of the subject, as claimed.
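As an illustrative aside on the overlay step the rejection attributes to Gray, the following sketch overlays binary anatomical seed masks onto a co-registered parametric map to read out one kinetic parameter per abnormality volume. The array shapes, mask placements, and the Ki map values are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Co-registered parametric PET map (e.g., a voxelwise uptake-rate Ki map).
ki_map = rng.uniform(0.0, 0.05, size=(16, 16, 16))

# Two non-overlapping binary "seed" masks standing in for segmented
# abnormality volumes derived from the MR images.
mask_a = np.zeros((16, 16, 16), bool); mask_a[2:6, 2:6, 2:6] = True
mask_b = np.zeros((16, 16, 16), bool); mask_b[9:14, 9:14, 9:14] = True

# Per-volume kinetic parameter: mean Ki over each overlaid mask.
features = {name: float(ki_map[m].mean())
            for name, m in [("lesion_A", mask_a), ("lesion_B", mask_b)]}
print(features)
```

The resulting per-volume features are exactly the kind of scalar kinetic rate parameters a downstream classifier would consume.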
Regarding independent claim 8, Brynolfsson teaches a system (Title and abstract) for performing FDG positron emission tomography (PET) quantification, segmentation, and classification of abnormalities (abstract system and method using PET imaging as functional imaging device and [0010] for detecting lesions and measuring radiopharmaceutical uptake within selected hotspots representing lesions for quantification, and (abstract) performing segmentation and classification for characterizing the lesion, tumor with estimation of disease severity and risk, wherein one of ordinary skill in the art would recognize that FDG is commonly used for PET imaging as taught by Wang when PET imaging is combined with MRI imaging (Title, abstract, p.2 col.2 ¶ Pre-treatment Imaging)) the system comprising: a PET scanner device configured for collecting volumetric radioactive measurement data associated with an administered radioactive tracer present in a target site of a subject over multiple scanning intervals and generating associated parametric PET maps of the target site ([0014] “using a functional imaging modality (e.g. 
positron emission tomography (PET)", "wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value (e.g., standard uptake value (SUV)) that represents detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within the target tissue region", wherein the radiation originates from a radiopharmaceutical injected into the patient (abstract), wherein the collection is performed as a video stream ([0118]), i.e., over several scanning cycles, and wherein the functional image reflects the capability of the different tissues to accumulate and process the radiopharmaceutical, leading to the mapping of hotspots ([0014], [0020], [0035]-[0026])); a magnetic resonance (MR) imaging scanner device configured for capturing a magnetic resonance image of the target site ([0034] "using an anatomical imaging modality … e.g., magnetic resonance imaging (MRI)" … "wherein the 3D anatomical image comprises a graphical representation of tissue … within the subject", with [0045] "an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion that the hotspot represents is determined", the MRI image being directed to a target site); and a dynamic PET platform (title and abstract: AI-based analysis system for improving the detection, segmentation, and classification of lesions, with characterization of the lesions, tumor burden, and estimation of disease severity and risk from PET/functional images) comprising: at least one processor ([0034], [0047] processor of a computing device); a memory element ([0047] memory having instructions stored thereon); and a PET processing engine stored in the memory element ([0047] stored instructions, including machine learning modules, configured to be executed by the
processor) and, when executed by the at least one processor, configured for performing the method of claim 1: receiving a plurality of MR images corresponding to a target site of a subject; generating three dimensional (3D) area masks of abnormality volumes from the plurality of MR images; segmenting the 3D area masks into one or more individual seed images for each of the abnormality volumes; overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes; and utilizing the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes, wherein, as discussed above, claim 1 is taught by Brynolfsson, Wang, and Gray. The examiner notes also that Wang teaches the combined FDG-PET and MRI imaging scanners (p.2 col.2, ¶ Pretreatment Imaging), as does Gray (p.223 col.1, 3rd and last ¶, and col.2, 1st ¶). Claim 8 is therefore made obvious, mutatis mutandis, by the teachings discussed above; Brynolfsson, Wang, and Gray teach claim 8. Regarding the dependent claims 9, 11, 13, and 14, all the elements of these claims are instantly disclosed or fully envisioned by the combination of Brynolfsson, Wang, and Gray. Regarding claim 9, Brynolfsson does not teach that the MR images are T1-weighted, but Gray teaches the MR images are T1-weighted (p.223 col.1, last ¶), as commonly known for acquiring anatomical/structural images. Regarding claim 11, Brynolfsson also teaches imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the target site condition assessment includes a tumor progression (TPR) assessment or a treatment related necrosis (TRN) assessment, as claimed.
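The intensity measure the rejection maps to a kinetic rate parameter is the standard uptake value (SUV) cited from Brynolfsson [0014]. As a brief illustration with hypothetical numbers (not from the record), SUV is conventionally computed as tissue activity concentration normalized by injected dose per unit body weight:

```python
# Conventional body-weight SUV calculation; all values are hypothetical.
activity_conc_kbq_per_ml = 12.0      # decay-corrected tissue activity concentration
injected_dose_mbq = 370.0            # injected FDG dose
body_weight_g = 75_000.0             # patient weight (75 kg, in grams)

# SUV = C_tissue / (dose / weight); 1 MBq = 1000 kBq, and 1 mL ≈ 1 g of tissue.
suv = activity_conc_kbq_per_ml / (injected_dose_mbq * 1000.0 / body_weight_g)
print(f"SUV = {suv:.2f}")            # SUV = 2.43
```

A voxelwise map of such values is the simplest example of the "parametric" functional image discussed throughout the rejection.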
Regarding claim 13, Brynolfsson also teaches the use of a machine learning module for classifying the targets (abstract, AI-based detection and classification; [0009], [0022]) and imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the logistic regression engine is subjected to supervised machine learning (ML) to classify the abnormality volumes, as claimed. Regarding claim 14, as discussed above, Gray teaches the co-registration between MRI and FDG-PET images in native MRI image space, creating MRI-space FDG-PET images (p.223 col.1, 2nd ¶ and col.2, 2nd ¶), wherein each MRI-space FDG-PET image is overlaid with the corresponding masked anatomical segmentation to define different regions of interest in which the cerebral metabolic rate of glucose is assessed from the time change of the glucose signal (p.224 col.1, last ¶ to col.2, 1st ¶, Regional feature extraction and classification), leading to the classification of the regions of interest (p.224 col.2, 2nd-3rd ¶). Gray therefore teaches the concept of overlaying anatomical masks from MRI imaging onto co-registered MRI-space FDG-PET images to select masked regions for which the change in the metabolic rate of glucose is assessed for later classification of these regions, therefore teaching, prior to classification, receiving the co-registered parametric PET maps corresponding to the target site of the subject, as claimed.

Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Wang et al. (2017 Front. Oncol. 7: article 8, 8 pages; Pub.Date 2017), further in view of Gray et al. (2012 NeuroImage 60:221–229; Pub.Date 2012), as applied to claims 1 and 8, and further in view of Mikhno et al. (USPN 20170039706 A1; Pub.Date 02/09/2017; Fil.Date 09/02/2016). Brynolfsson, Wang, and Gray teach a method and system as set forth above.
Brynolfsson, Wang, and Gray do not specifically teach generating a total blood volume (TBV) parameter for each of the abnormality volumes, as in claims 3 and 10. However, Mikhno teaches within the same field of endeavor of image reconstruction and analysis (title and abstract) the analysis of PET regions with the determination of the blood volume ([0348] and [0440], an estimated total blood volume); therefore, since each volume of interest was already separated with overlaid masks as discussed above, Mikhno teaches generating a total blood volume (TBV) parameter for each of the abnormality volumes, as claimed. Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the method and system of Brynolfsson, as modified by Wang and Gray, such that the method and system generate a total blood volume (TBV) parameter for each of the abnormality volumes, since one of ordinary skill in the art would recognize that determining the total blood volume within a volume of interest using PET imaging was known in the art as taught by Mikhno. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Gray, Brynolfsson, and Mikhno all teach the basis of combined MRI/PET imaging technology for combining functional and anatomical imaging for improving diagnosis. The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions as related to the vascularization of the region, as suggested by Mikhno ([0469]).

Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Wang et al. (2017 Front. Oncol. 7: article 8, 8 pages; Pub.Date 2017), further in view of Gray et al.
(2012 NeuroImage 60:221–229; Pub.Date 2012), as applied to claims 1 and 8, and further in view of Li et al. (2017 IEEE Nuclear Science Symposium and Medical Imaging Conference, IEEE Xplore, 3 pages; Pub.Date 2017). Brynolfsson, Wang, and Gray teach one or more non-transitory computer readable media as set forth above. Brynolfsson, Wang, and Gray do not specifically teach that one or more wavelet transforms are utilized to determine the kinetic rate parameters, as in claims 5 and 12. However, Li teaches within the same field of endeavor of imaging and diagnosing tumors using FDG-PET (Title and abstract) the use of wavelet transforms/analysis for analyzing the time course features of the pharmacokinetics of FDG using dynamic PET (abstract, p.1 col.2, last ¶ to p.2 col.1, 1st ¶, and Fig. 2, with application of a wavelet transform (WT) to FDG uptake from PET images), therefore teaching that one or more wavelet transforms are utilized to determine the kinetic rate parameters, as claimed. Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the method and system of Brynolfsson, as modified by Wang and Gray, such that one or more wavelet transforms are utilized to determine the kinetic rate parameters, since one of ordinary skill in the art would recognize that using wavelet transformation for analyzing the rate of uptake of FDG from PET imaging data was known in the art as taught by Li. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Gray, Brynolfsson, and Li all teach the basis of combined MRI/PET imaging technology for combining functional and anatomical imaging for improving diagnosis.
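As an illustrative aside on the wavelet analysis attributed to Li, the following sketch applies a one-level Haar decomposition to a synthetic time-activity curve (a placeholder, not data from the record); the detail coefficients track the local rate of FDG uptake and can serve as crude kinetic-rate features:

```python
import numpy as np

# Toy FDG time-activity curve over 32 dynamic PET frames.
t = np.arange(32, dtype=float)
tac = 5.0 * (1.0 - np.exp(-0.2 * t))        # saturating uptake curve

# One-level Haar wavelet decomposition (pairwise average and difference).
pairs = tac.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # smooth trend
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # local change per frame pair

# Detail magnitude peaks where uptake changes fastest (the earliest frames).
print("largest detail coefficient at frame pair:", int(np.argmax(np.abs(detail))))  # -> 0
```

In practice a deeper multi-resolution decomposition would be used, but the principle is the same: wavelet coefficients localize how fast the tracer signal changes in time.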
The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions for better classification of the lesions, as suggested by Li (abstract).

Claims 15, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Gray et al. (2012 NeuroImage 60:221–229; Pub.Date 2012).

Regarding independent claim 15, Brynolfsson teaches a memory element being or containing a computer-readable medium (CRM) ([0047], [0223]-[0224]: memory/CRM having instructions stored thereon wherein the instructions, when executed by the processor, cause the processor to perform a method), therefore teaching one or more non-transitory computer readable media having stored thereon executable instructions that when executed by a processor of a computer cause the computer to perform steps comprising: receiving a plurality of magnetic resonance (MR) images corresponding to a target site of a subject ([0034], [0075], [0144] with Fig.
1B, receiving/accessing a 3D anatomical image of a subject, targeting cancerous lesions within the subject, using an anatomical imaging modality such as magnetic resonance imaging, wherein one of ordinary skill in the art would recognize that the image would refer to a time series of image frames or a plurality of conventional images, since the term "image" is interpreted as a video stream according to Brynolfsson [0118]); generating three dimensional (3D) area masks of abnormality volumes from the plurality of MR images ([0052]: "the instructions cause the processor to automatically segment the 3D anatomical image, thereby creating the 3D segmentation map"; [0051]: the invention processes the 3D functional (PET) and anatomical (MRI) images to provide "a segmentation map that identifies, within the 3D functional image and/or the 3D anatomical image, one or more volumes of interest (VOIs), each VOI corresponding to a particular target tissue region and/or a particular anatomical region"); segmenting the 3D area masks into one or more individual seed images for each of the abnormality volumes (same citations, [0051]-[0052], as for the generating step above); [...overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes...]; and utilizing the kinetic rate parameters to train a logistic regression engine to predict a target site condition assessment based on a classification of the abnormality volumes ([0036]-[0037]: using a machine learning
module (hotspot classification module) to determine the lesion classification for each hotspot and to select a subset of one or more hotspots having a high likelihood of corresponding to cancerous lesions, based at least on the intensities of the hotspots and the segmentation map, reading on the accumulation rate of the radiopharmaceutical compounds and the anatomical masks discussed above).

Brynolfsson does not specifically teach overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes as in claim 15.

However, Gray teaches, within the same field of endeavor of multimodal FDG-PET and MRI imaging technology (Title, abstract and p.222 col.2 last ¶), the co-registration between MRI and FDG-PET images in native MRI image space, creating MRI-space FDG-PET images (p.223 col.1 2nd ¶ and col.2 2nd ¶), wherein each MRI-space FDG-PET image is overlaid with the corresponding masked anatomical segmentation to define different regions of interest and assess the cerebral metabolic rate of glucose from the time change of the glucose signal (p.224 col.1 last ¶ to col.2 1st ¶, Regional feature extraction and classification), leading to the classification of the regions of interest (p.224 col.2 2nd-3rd ¶). Gray therefore teaches the concept of overlaying anatomical masks from MRI imaging onto co-registered MRI-space FDG-PET images to select masked regions for which the change of the metabolic rate of glucose is assessed for later classification of these regions, therefore teaching overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes as claimed.
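For readers following the mapping, the overlay step the claims recite reduces, once the seed masks and parametric maps share a voxel grid, to boolean indexing. The sketch below is a minimal illustration with synthetic arrays; names such as `k1_map` and `seed_mask` are invented for this example and are not taken from either reference.

```python
import numpy as np

# Hypothetical co-registered parametric FDG PET maps (one rate value per voxel),
# e.g. a K1 (plasma-to-tissue transport) map and a k3 (phosphorylation) map.
rng = np.random.default_rng(0)
shape = (64, 64, 32)
k1_map = rng.uniform(0.05, 0.15, shape)
k3_map = rng.uniform(0.01, 0.10, shape)

# Hypothetical binary seed image for one abnormality volume (a small cube).
seed_mask = np.zeros(shape, dtype=bool)
seed_mask[20:30, 20:30, 10:15] = True

# "Overlaying" the seed image onto the co-registered maps: boolean indexing
# collects each voxelwise rate inside the mask, then summarizes per lesion.
kinetic_params = {
    "K1_mean": float(k1_map[seed_mask].mean()),
    "k3_mean": float(k3_map[seed_mask].mean()),
    "n_voxels": int(seed_mask.sum()),
}
print(kinetic_params)
```

The same per-lesion summary would be repeated for each seed image to build one kinetic-rate feature vector per abnormality volume.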
Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the method of Brynolfsson such that the method further comprises overlaying the one or more individual seed images onto co-registered parametric PET maps to generate kinetic rate parameters for each of the abnormality volumes, since one of ordinary skill in the art would recognize that overlaying anatomical masks of regions of interest from MRI of the subject onto FDG-PET images co-registered with the MRI images of the patient, to assess the metabolic rate of glucose within these regions, was known in the art as taught by Gray. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since both Gray and Brynolfsson teach combined MRI/PET imaging technology that merges functional and anatomical imaging to improve diagnosis. The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions, as suggested by Gray (p.227 col.2 2nd and 4th ¶).

Regarding the dependent claims 16, 18, 20, all the elements of these claims are disclosed or fully envisioned by the combination of Brynolfsson and Gray.

Regarding claim 16, Brynolfsson does not teach that the MR images are T1-weighted, but Gray teaches the MR images are T1-weighted (p.223 col.1 last ¶), as commonly known for acquiring anatomical/structural images.

Regarding claim 18, Brynolfsson also teaches imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the target site condition assessment includes a tumor progression (TPR) assessment or a treatment related necrosis (TRN) assessment as claimed.
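The classification step the claims recite, training a logistic regression engine on per-lesion kinetic rate parameters to predict an assessment such as TPR versus TRN, can be sketched as follows. This is a hedged illustration on synthetic data, not the applicant's actual pipeline; the feature layout [K1, k2, k3] and the assumption that TPR lesions show higher k3 are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic per-lesion kinetic rate features [K1, k2, k3].
# TRN (label 0) and TPR (label 1) lesions differ mainly in k3 here,
# an assumption made purely so the toy problem is learnable.
trn = np.column_stack([rng.normal(0.10, 0.02, 50),
                       rng.normal(0.15, 0.03, 50),
                       rng.normal(0.02, 0.005, 50)])
tpr = np.column_stack([rng.normal(0.10, 0.02, 50),
                       rng.normal(0.15, 0.03, 50),
                       rng.normal(0.08, 0.005, 50)])
X = np.vstack([trn, tpr])
y = np.array([0] * 50 + [1] * 50)

# Supervised training of the logistic regression "engine".
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real study the labels would come from clinical ground truth and the model would be evaluated on held-out lesions rather than on its training set.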
Regarding claim 20, Brynolfsson also teaches the use of a machine learning module for classifying the targets (abstract with AI-based detection and classification, [0009], [0022]) and imaging analysis directed to tumor/cancer detection and prognosis or progression (abstract, [0080], [0125]-[0126]), therefore teaching that the logistic regression engine is subjected to supervised machine learning (ML) to classify the abnormality volumes as claimed.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Gray et al. (2012 NeuroImage 60:221-229; Pub.Date 2012) as applied to claim 15, and further in view of Mikhno et al. (USPN 20170039706 A1; Pub.Date 02/09/2017; Fil.Date 09/02/2016).

Brynolfsson and Gray teach one or more non-transitory computer readable media as set forth above. Brynolfsson and Gray do not specifically teach generating a total blood volume (TBV) parameter for each of the abnormality volumes as in claim 17. However, Mikhno teaches, within the same field of endeavor of image reconstruction and analysis (title and abstract), the analysis of PET regions with the determination of the blood volume ([0348] and [0440] for an estimated total blood volume); therefore, since each volume of interest was already separated with overlaid masks as discussed above, Mikhno teaches generating a total blood volume (TBV) parameter for each of the abnormality volumes as claimed.
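As context for the TBV limitation, one simple way such a per-lesion parameter could be derived is to integrate a voxelwise fractional blood volume map over each abnormality mask. This is a hypothetical construction for illustration only, not Mikhno's specific method; the `vb_map` values and voxel size are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (64, 64, 32)
voxel_volume_ml = 0.2 * 0.2 * 0.2  # hypothetical 2 mm isotropic voxels, in mL

# Hypothetical voxelwise fractional blood volume map (unitless, 0..1),
# e.g. the vB term of a compartment-model fit to dynamic PET data.
vb_map = rng.uniform(0.02, 0.08, shape)

# Binary mask for one abnormality volume.
mask = np.zeros(shape, dtype=bool)
mask[10:20, 10:20, 5:10] = True

# Total blood volume in the lesion: sum of per-voxel blood volumes.
tbv_ml = float(vb_map[mask].sum() * voxel_volume_ml)
print(f"TBV = {tbv_ml:.3f} mL")
```

Computed per mask, this yields one TBV value per abnormality volume that could sit alongside the kinetic rate parameters as a classification feature.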
Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the one or more non-transitory computer readable media of Brynolfsson as modified by Gray such that the method and system further comprise generating a total blood volume (TBV) parameter for each of the abnormality volumes, since one of ordinary skill in the art would recognize that determining the total blood volume within a volume of interest using PET imaging was known in the art as taught by Mikhno. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Gray, Brynolfsson and Mikhno all teach combined MRI/PET imaging technology that merges functional and anatomical imaging to improve diagnosis. The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions as related to the vascularization of the region, as suggested by Mikhno ([0469]).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (USPN 20220005586 A1; Pub.Date 06/06/2022; Fil.Date 08/31/2020) in view of Gray et al. (2012 NeuroImage 60:221-229; Pub.Date 2012) as applied to claim 15, and further in view of Li et al. (2017 IEEE Nuclear Science Symposium and Medical Imaging Conference, IEEE Xplore, 3 pages; Pub.Date 2017).

Brynolfsson and Gray teach one or more non-transitory computer readable media as set forth above. Brynolfsson and Gray do not specifically teach that one or more wavelet transforms are utilized to determine the kinetic rate parameters as in claim 19.
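Before turning to the cited teaching, the claimed feature itself can be illustrated: a wavelet decomposition of a regional FDG time-activity curve separates the smoothed uptake trend from frame-to-frame changes, and kinetic features can be read from the wavelet coefficients. The sketch below is a minimal single-level Haar transform on a synthetic curve, not Li's actual analysis; the curve shape and the energy feature are assumptions for illustration.

```python
import numpy as np

# Synthetic regional FDG time-activity curve (TAC): irreversible uptake
# approximated by a saturating exponential (an illustrative assumption).
t = np.arange(64, dtype=float)   # frame indices (arbitrary time units)
tac = 1.0 - np.exp(-0.05 * t)    # normalized activity per frame

# One level of a Haar discrete wavelet transform: scaled pairwise sums
# (approximation, the smoothed trend) and scaled pairwise differences
# (detail, the frame-to-frame change that tracks the uptake rate).
even, odd = tac[0::2], tac[1::2]
approx = (even + odd) / np.sqrt(2.0)
detail = (even - odd) / np.sqrt(2.0)

# A crude wavelet-domain kinetic feature: large early detail energy
# reflects a fast initial uptake rate that slows as the TAC saturates.
early_detail_energy = float(np.sum(detail[:8] ** 2))
late_detail_energy = float(np.sum(detail[-8:] ** 2))
print(early_detail_energy > late_detail_energy)
```

A multi-level transform (e.g. via a library such as PyWavelets) would expose the same behavior across several time scales; the hand-rolled version is used here only to keep the sketch self-contained.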
However, Li teaches, within the same field of endeavor of imaging and diagnosing tumors using FDG-PET (Title and abstract), the use of wavelet transforms/analysis for analyzing the time course features of the pharmacokinetics of FDG using dynamic PET (abstract, p.1 col.2 last ¶ to p.2 col.1 1st ¶, and Fig.2 with application of a wavelet transform (WT) to FDG uptake from PET images), therefore teaching that one or more wavelet transforms are utilized to determine the kinetic rate parameters as claimed.

Therefore it would have been obvious for a person of ordinary skill in the art before the effective filing date of the invention to have modified the method and system of Brynolfsson as modified by Gray such that one or more wavelet transforms are utilized to determine the kinetic rate parameters, since one of ordinary skill in the art would recognize that using wavelet transformation for analyzing the rate of FDG uptake from PET imaging data was known in the art as taught by Li. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Gray, Brynolfsson and Li all teach combined MRI/PET imaging technology that merges functional and anatomical imaging to improve diagnosis. The motivation would have been to monitor the biological activity of the regions of interest within the subject in order to predict the nature and physiological state of these regions for better classification of the lesions, as suggested by Li (abstract).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK M MEHL whose telephone number is (571) 272-0572. The examiner can normally be reached Monday-Friday 9AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEITH M RAYMOND, can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PATRICK M MEHL/
Examiner, Art Unit 3798

/KEITH M RAYMOND/
Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Feb 19, 2024
Application Filed
Jun 06, 2025
Non-Final Rejection — §103
Oct 14, 2025
Response Filed
Nov 18, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588950
TRAJECTORY PLANNING FOR MINIMALLY INVASIVE THERAPY DELIVERY USING LOCAL MESH GEOMETRY
2y 5m to grant Granted Mar 31, 2026
Patent 12588884
CLOSED-LOOP ELUTION SYSTEM TO EVALUATE PATIENTS WITH SUSPECTED OR EXISTING PERIPHERAL ARTERIAL DISEASE
2y 5m to grant Granted Mar 31, 2026
Patent 12551315
TISSUE MARKING DEVICE AND METHODS OF USE THEREOF
2y 5m to grant Granted Feb 17, 2026
Patent 12551732
METHOD AND APPARATUS FOR REMOVING MICROVESSELS
2y 5m to grant Granted Feb 17, 2026
Patent 12521202
MARKING ELEMENT FOR MARKING TISSUE
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
72%
With Interview (+24.8%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
