Prosecution Insights
Last updated: April 19, 2026
Application No. 18/265,601

LESION DIAGNOSIS METHOD

Non-Final Office Action: §101, §102, §103

Filed: Jun 06, 2023
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vuno Inc.
OA Round: 1 (Non-Final)

Grant Probability: 60% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 6m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs. TC avg)
Interview Lift: +31.8% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 6m average prosecution; 60 applications currently pending
Career History: 367 total applications across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 307 resolved cases.

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 06/06/2023. Claims 1-14 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statements (IDSs) submitted on 06/06/2023, 10/20/2023, 05/08/2024, 08/20/2024, and 11/27/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: the claims are directed to the statutory categories of a process and a machine.

Claims 1 and 14: Step 2A Prong 1, Claims 1 and 14 recite, in part: detect an object region for lesion diagnosis from the medical data, and to detect at least one finding region related to a specific lesion (mental processes: observation/evaluation/judgment of where something is in the data); calculating a volume and a location for at least one finding region included in the object region (mathematical concepts: mathematical calculation).

Step 2A Prong 2: this judicial exception is not integrated into a practical application. The additional elements: a processor including one or more cores; and a memory (mere instructions to apply the exception using a generic computer component);
inputting medical data into a first neural network model and a second neural network model (mere data gathering, recited at a high level of generality, and thus insignificant extra-solution activity); generating result information for the medical data based on the volume and the location for the finding region (mere data gathering and output, recited at a high level of generality, and thus insignificant extra-solution activity).

Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. The additional elements: a processor including one or more cores; and a memory (mere instructions to apply the exception using a generic computer component); inputting medical data into a first neural network model and a second neural network model (mere data gathering, recited at a high level of generality, and thus insignificant extra-solution activity); generating result information for the medical data based on the volume and the location for the finding region (mere data gathering and output, recited at a high level of generality, and thus insignificant extra-solution activity).

Claims 2-10 provide further limitations to the abstract idea (mathematical concepts and/or mental processes) as rejected in claim 1; however, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (data gathering/insignificant extra-solution activity and/or generic computer component).

Claim 11: Step 2A Prong 1, Claim 11 recites, in part: calculating a volume and a location for at least one finding region included in an object region, based on the at least one finding region related to a specific lesion and the object region for lesion diagnosis detected from the medical data (mathematical concepts: mathematical calculations).
Step 2A Prong 2: this judicial exception is not integrated into a practical application. The additional elements: a processor including one or more cores; a memory (mere instructions to apply the exception using a generic computer component); an output unit for providing a user interface, wherein the user interface displays result information for medical data in response to medical data input, and wherein the result information for the medical data is generated based on a result (mere data gathering and output, recited at a high level of generality, and thus insignificant extra-solution activity).

Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. The additional elements: a processor including one or more cores; a memory (mere instructions to apply the exception using a generic computer component); an output unit for providing a user interface, wherein the user interface displays result information for medical data in response to medical data input, and wherein the result information for the medical data is generated based on a result (mere data gathering and output, recited at a high level of generality, and thus insignificant extra-solution activity).

Claims 12-13 provide further limitations to the abstract idea (mathematical concepts) as rejected in claim 11; however, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (data gathering/insignificant extra-solution activity and/or generic computer component).

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-2, 7-9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ghesu et al. (U.S. Patent Application Pub. No. US 20220022818 A1) in view of Taerum et al. (U.S. Patent Application Pub. No. US 20200085382 A1).

Claim 1: Ghesu teaches a lesion diagnosing method (i.e. provide for the assessment of abnormality patterns associated with COVID-19 from x-ray images using machine learning based segmentation networks to segment the lungs and the abnormality patterns from the x-ray images; para. [0029]), comprising: inputting medical data (i.e. At step 202, an input medical image in a first modality is received; para. [0031]) into a first neural network model (i.e. a trained lung segmentation network; the lung segmentation network is an image-to-image CNN (convolutional neural network), however the lung segmentation network may be any suitable machine learning based network; para. [0034, 0035]) and a second neural network model (i.e. a trained abnormality pattern segmentation network; the abnormality pattern segmentation network is an image-to-image CNN, however the abnormality pattern segmentation network may be any suitable machine learning based network; para. [0036, 0037]) to detect an object region for lesion diagnosis from the medical data (i.e.
At step 204, lungs are segmented from the input medical image using a trained lung segmentation network; para. [0034]), and to detect at least one finding region related to a specific lesion (i.e. At step 206, abnormality patterns associated with the disease are segmented from the input medical image using a trained abnormality pattern segmentation network; para. [0036]); these paragraphs describe two trained neural network models operating on the same input medical data, one producing an object region (the lungs) and one producing a findings/lesion region; calculating [a volume] and a location for at least one finding region (i.e. location and spread of lesions; the spatial locations of detections and segmentations, determined according to embodiments described herein, can also be used to track expansion or shrinkage of lesions; para. [0045]) included in the object region (i.e. the quantitative metric is a percentage of affected lung area (POa) calculated as the total percent area of the lungs that is affected by the disease, where the area of the abnormality patterns in the lungs is determined as the area of the segmented abnormality patterns and the area of the lungs is determined as the area of the segmented lungs; para. [0039]); and generating result information for the medical data (i.e. At step 210, the assessment of the disease is output. For example, the assessment of the disease can be output by displaying the assessment of the disease on a display device of a computer system; para. [0040]) based on the [volume] and the location for the finding region (i.e. evolution or progression of the disease may be predicted. Based on the assessment of the disease and, possibly, the detection of the disease (using a detection network) determined at a plurality of points in time, a wide range of measurements may be extracted, such as, e.g., POa, location and spread of lesions; para. [0038, 0039, 0045]).

Ghesu does not explicitly teach calculating a volume.
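As context for the POa metric quoted above (Ghesu, para. [0039], Equation (1)): the percentage of affected lung area is an area ratio of two binary segmentation masks. A minimal sketch, assuming 2D binary masks as described; the function and array names are illustrative, not taken from either reference:

```python
import numpy as np

def percent_affected_area(lung_mask: np.ndarray, abnormality_mask: np.ndarray) -> float:
    """POa-style metric: area of abnormality patterns inside the lungs,
    as a percentage of the total lung area."""
    lung_area = lung_mask.sum()
    if lung_area == 0:
        return 0.0
    affected = np.logical_and(lung_mask, abnormality_mask).sum()
    return 100.0 * affected / lung_area

# Toy 2D binary masks (1 = pixel belongs to the region)
lungs = np.zeros((4, 4), dtype=np.uint8)
lungs[1:3, 1:3] = 1    # 4 lung pixels
lesion = np.zeros((4, 4), dtype=np.uint8)
lesion[1, 1] = 1       # 1 affected pixel inside the lungs
print(percent_affected_area(lungs, lesion))  # 25.0
```

The same intersection-over-organ-area idea extends directly to 3D voxel masks.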
However, Taerum teaches calculating a volume (i.e. The at least one processor may determine the volume of all lesion candidates utilizing the generated segmentations; para. [0033]) and a location (i.e. The relevant lesion information may include a center location for each lesion, and the at least one processor may calculate the center location as the center of mass of the predicted probabilities; para. [0029]) for at least one finding region included in the object region (i.e. cancerous anatomical structures of the lungs do not occur outside of the physical bounds of the lungs; para. [0136]); and generating result information for the medical data based on the volume (i.e. The at least one processor may cause the determined volume of at least one unique cancerous anatomical structure to be displayed on a display; para. [0033]) and the location for the finding region (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. [0300]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum. One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings.

Claim 2: Ghesu and Taerum teach the lesion diagnosing method of claim 1.
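The volume and center-location determinations for which Taerum is cited in the claim 1 analysis (lesion volume from the generated segmentations, para. [0033]; center location as the center of mass of the predicted probabilities, para. [0029]) can be sketched as follows. This is a sketch under stated assumptions, not Taerum's implementation; the voxel-spacing handling and names are illustrative:

```python
import numpy as np

def lesion_volume_mm3(mask: np.ndarray, voxel_mm: tuple) -> float:
    """Volume of a binary lesion segmentation: voxel count x voxel size."""
    return float(mask.sum()) * voxel_mm[0] * voxel_mm[1] * voxel_mm[2]

def center_of_mass(probs: np.ndarray) -> np.ndarray:
    """Center location as the probability-weighted mean voxel coordinate."""
    coords = np.argwhere(probs > 0)
    weights = probs[probs > 0]
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

# Toy 3x3x3 probability map with two equally weighted voxels
probs = np.zeros((3, 3, 3))
probs[1, 1, 0] = 0.5
probs[1, 1, 2] = 0.5
mask = probs > 0.25                               # binarized segmentation
print(lesion_volume_mm3(mask, (1.0, 1.0, 2.0)))   # 4.0 (2 voxels x 2 mm^3 each)
print(center_of_mass(probs))                      # [1. 1. 1.]
```

In practice the voxel spacing would come from the image header (e.g. DICOM pixel spacing and slice thickness), not a hard-coded tuple.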
Ghesu further teaches wherein the detecting of the object region and at least one finding region related to the specific lesion includes: detecting the object region for lesion diagnosis from the medical data by inputting the medical data to the first neural network model (i.e. At step 204, lungs are segmented from the input medical image using a trained lung segmentation network; para. [0034]); and detecting at least one finding region related to the specific lesion from the medical data by inputting the medical data to the second neural network model (i.e. At step 206, abnormality patterns associated with the disease are segmented from the input medical image using a trained abnormality pattern segmentation network; para. [0036]).

Claim 7: Ghesu and Taerum teach the lesion diagnosing method of claim 1.

Ghesu further teaches wherein the generating of result information for the medical data based on the [volume] and the location for the finding region includes: generating result information for the medical data by inputting quantification data corresponding to the [volume] and the location for the finding region (i.e. based on the assessment of the disease and, possibly, the detection of the disease (using a detection network) determined at a plurality of points in time, a wide range of measurements may be extracted, such as, e.g., POa, location and spread of lesions; para. [0045]) to a third neural network model (i.e. a machine learning based detection network may additionally be applied for detecting the disease (e.g., COVID-19) in the input medical image. In one embodiment, the detection may be formulated as a mapping from the feature space of the segmentation networks, as well as the lung and abnormality pattern segmentations, to a disease score or probability measure of the disease using an image-wise disease classifier or detector (e.g., bounding boxes).
In another embodiments, detection may be performed by regressing using extracted quantitative biomarkers (e.g., percentage of opacity). Additional clinical data may also be input into the detection network. The additional clinical data may include patient data (e.g., demographics), clinical data, genetic data, laboratory data, etc; para. [0043, 0045]). Ghesu does not explicitly teach the volume. However, Taerum further teaches wherein the generating of result information for the medical data based on the volume (i.e. The at least one processor may determine the volume of all lesion candidates utilizing the generated segmentations; para. [0033]) and the location (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. [0300]) for the finding region includes (i.e. a CNN can be trained as a binary classifier to classify images of lesions as benign or malignant. The final output of such a network typically has only a single scalar value: the probability that a lesion is malignant, from 0 to 1; para. [0290]): generating result information for the medical data by inputting quantification data corresponding to the volume and the location for the finding region (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. 
[0300]) to a third neural network model (i.e. the trained CNN model 2608 is used along with the lesion data 2612 to calculate the similarity between the query lesion and lesions in the CBIR database lesions at 2618; para. [0297]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum. One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings.

Claim 8: Ghesu and Taerum teach the lesion diagnosing method of claim 1.

Ghesu further teaches wherein the generating of result information for the medical data based on the [volume] and the location for the finding region includes: classifying a class for the medical data by inputting quantification data corresponding to the [volume] and the location for the finding region (i.e. based on the assessment of the disease and, possibly, the detection of the disease (using a detection network) determined at a plurality of points in time, a wide range of measurements may be extracted, such as, e.g., POa, location and spread of lesions; para. [0045]) to a third neural network model (i.e. a machine learning based detection network may additionally be applied for detecting the disease (e.g., COVID-19) in the input medical image. In one embodiment, the detection may be formulated as a mapping from the feature space of the segmentation networks, as well as the lung and abnormality pattern segmentations, to a disease score or probability measure of the disease using an image-wise disease classifier or detector (e.g., bounding boxes). In another embodiments, detection may be performed by regressing using extracted quantitative biomarkers (e.g., percentage of opacity). Additional clinical data may also be input into the detection network.
The additional clinical data may include patient data (e.g., demographics), clinical data, genetic data, laboratory data, etc; para. [0043, 0045]). Ghesu does not explicitly teach the volume.

However, Taerum further teaches wherein the generating of result information for the medical data based on the volume and the location (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. [0300]) for the finding region includes: classifying a class for the medical data by inputting quantification data corresponding to the volume (i.e. The at least one processor may determine the volume of all lesion candidates utilizing the generated segmentations; para. [0033]) and the location for the finding region (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. [0300]) to a third neural network model (i.e. the trained CNN model 2608 is used along with the lesion data 2612 to calculate the similarity between the query lesion and lesions in the CBIR database lesions at 2618; para. [0297]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum.
One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings. Claim 9: Ghesu and Taerum teach the lesion diagnosing method of claim 8. Ghesu further teaches wherein the class represents a class of the medical data related to a respiratory disease, and the class includes at least one of: normal, abnormal, a mild case, a severe case, or a low risk group, a medium risk group, a high risk group corresponding to a treatment prognosis, or a type of a respiratory disease (i.e. At step 202, an input medical image in a first modality is received. The input medical image may be of a chest of a patient suspected of, or confirmed as, having a disease. In one embodiment, the disease is a member of the family of coronaviruses. For example, the disease may be COVID-19. As used herein, COVID-19 includes mutations of the COVID-19 virus (which may be referred to by different terms). However, the disease may include any disease with recognizable abnormality patterns in the lungs, such as, e.g., consolidation, interstitial disease, atelectasis, nodules, masses, decreased density or lucencies, etc. For example, the disease may be other types of viral pneumonia (e.g., influenza, adenovirus, respiratory syncytial virus, SARS (severe acute respiratory syndrome), MERS (Middle East respiratory syndrome), etc.), bacterial pneumonia, fungal pneumonia, mycoplasma pneumonia, or other types of pneumonia or other types of diseases; para. [0002, 0031, 0032, 0045]). Claim 14 is similar in scope to Claim 1 and is rejected under a similar rationale. 7. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ghesu in view of Taerum, and further in view of Christ et al. (Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields, arXiv, published 2016, pages 1-8). 
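The cascaded strategy named in the Christ reference cited above (a first network segments the organ as an ROI, a second network segments lesions only within that ROI) can be sketched roughly as follows. The stand-in "networks" here are toy callables, not trained models, and all names are illustrative:

```python
import numpy as np

def cascaded_segmentation(image, organ_net, lesion_net, threshold=0.5):
    """Two-stage cascade in the style of Christ et al.: the first network
    segments the organ; the second sees only the organ ROI and segments
    lesions, which are then constrained to lie inside the organ mask."""
    organ_prob = organ_net(image)
    organ_mask = organ_prob > threshold            # binary organ ROI
    roi_image = np.where(organ_mask, image, 0.0)   # zero out non-organ pixels
    lesion_prob = lesion_net(roi_image)
    lesion_mask = (lesion_prob > threshold) & organ_mask
    return organ_mask, lesion_mask

# Toy stand-ins for the two trained networks:
image = np.random.default_rng(0).random((8, 8))
organ_net = lambda img: np.pad(np.ones((4, 4)), 2)  # pretends the organ is the central 4x4
lesion_net = lambda img: img                        # pretends bright pixels are lesions
organ_mask, lesion_mask = cascaded_segmentation(image, organ_net, lesion_net)
print(bool(np.any(lesion_mask & ~organ_mask)))      # False: lesions stay inside the ROI
```

The design point is the one the rejection relies on: the second network never has to reason about anatomy outside the organ, which is why constraining lesion predictions to the organ ROI tends to reduce false positives.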
Claim 3: Ghesu and Taerum teach the lesion diagnosing method of claim 1. Ghesu further teaches wherein the detecting of the object region and at least one finding region related to the specific lesion includes: detecting the object region for lesion diagnosis from the medical data by inputting the medical data to the first neural network model (i.e. At step 204, lungs are segmented from the input medical image using a trained lung segmentation network. The lung segmentation network predicts a probability map representing the segmented lungs. The probability map defines a pixel wise probability that each pixel depicts the lungs. The probability map may be represented as a binary mask by comparing the probability for each pixel to a threshold value (e.g., 0.5). The binary mask assigns each pixel a value of, e.g., 0 where the pixel does not depict the lungs and 1 where the pixel depicts the lungs. In one example, the lung segmentation network is lung segmentation network 110 that generates a predicted 2D probability map 112 which is represented as a binary mask 114 in FIG. 1. Exemplary lung segmentations are shown in FIG. 6, described in further detail below; para. [0034]); and detecting at least one finding region related to the specific lesion from the object region by inputting medical data including the object region detected through the first neural network model to the second neural network model (i.e. fig. 2, At step 206, abnormality patterns associated with the disease are segmented from the input medical image using a trained abnormality pattern segmentation network. The abnormality pattern segmentation network predicts a probability map representing the segmented abnormality pattern. The probability map defines a pixel wise probability that each pixel depicts the abnormality pattern. The probability map may be represented as a binary mask by comparing the probability for each pixel to a threshold value (e.g., 0.5). 
The binary mask assigns each pixel a value of, e.g., 0 where the pixel does not depict the abnormality pattern and 1 where the pixel depicts the abnormality pattern. In one example, the abnormality pattern segmentation network is lesion segmentation network 104 that generates a predicted 2D probability map 106 which is represented as a binary mask 108 in FIG. 1; para. [0036]). Ghesu does not explicitly teach inputting medical data including the object region detected through the first neural network model to the second neural network model.

However, Christ teaches inputting medical data including the object region detected through the first neural network model to the second neural network model (i.e. In the first step, we train a FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions from the predicted liver ROIs of step 1; page 1).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ghesu and Taerum to include the feature of Christ. One would have been motivated to make this modification because lesion segmentation is more reliable when constrained to an organ ROI.

8. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ghesu in view of Taerum, and further in view of Oosawa (U.S. Patent Application Pub. No. US 20220222917 A1).

Claim 4: Ghesu and Taerum teach the lesion diagnosing method of claim 1.

Ghesu further teaches wherein the detecting of at least one finding region related to the specific lesion from the medical data includes: detecting a plurality of finding regions for lesions related to a respiratory disease from the medical data (i.e. At step 202, an input medical image in a first modality is received. The input medical image may be of a chest of a patient suspected of, or confirmed as, having a disease. In one embodiment, the disease is a member of the family of coronaviruses.
For example, the disease may be COVID-19. As used herein, COVID-19 includes mutations of the COVID-19 virus (which may be referred to by different terms). However, the disease may include any disease with recognizable abnormality patterns in the lungs, such as, e.g., consolidation, interstitial disease, atelectasis, nodules, masses, decreased density or lucencies, etc. For example, the disease may be other types of viral pneumonia (e.g., influenza, adenovirus, respiratory syncytial virus, SARS (severe acute respiratory syndrome), MERS (Middle East respiratory syndrome), etc.), bacterial pneumonia, fungal pneumonia, mycoplasma pneumonia, or other types of pneumonia or other types of diseases; para. [0031]), and the plurality of finding regions includes: a first finding region corresponding to ground glass opacity (GGO), a second finding region corresponding to consolidation (i.e. the disease is COVID-19 (coronavirus disease 2019) and the abnormality patterns include at least one of GGO (ground glass opacity), consolidation; para. [0006]), a third finding region corresponding to [reticular opacity], a fourth finding region corresponding to pleural effusion (i.e. the abnormality patterns may include opacities such as, e.g., GGO (ground glass opacity), consolidation, crazy-paving pattern, atelectasis, interlobular septal thickening, pleural effusions, bronchiectasis, halo signs, etc; para. [0032]), and a fifth finding region corresponding to [emphysema] (i.e. the disease may include any disease with recognizable abnormality patterns in the lungs, such as, e.g., consolidation, interstitial disease, atelectasis, nodules, masses, decreased density or lucencies; para. [0031]). Ghesu does not explicitly teach reticular opacity and emphysema.

However, Oosawa teaches reticular opacity and emphysema (i.e.
In the present embodiment, the multi-layer neural network 40 learns to classify each pixel of the lung field regions H1 and H2 into any one of 33 types of properties, such as normal lung, GGO mass nodule opacity, mixed mass nodule opacity, solid mass nodule opacity, ground glass opacity, pale ground glass opacity, centrilobular ground glass opacity, consolidation, low density, centrilobular emphysema, panlobular emphysema, normal pulmonary emphysema tendency, cyst, tree-in-bud (TM), small nodule (non-centrilobular), centrilobular small nodule opacity, interlobular septal thickening, bronchial wall thickening, bronchiectasis, bronchioloectasis, air bronchogram, traction bronchiectasis, cavity consolidation, cavernous tumor, reticular opacity, fine reticular opacity, honeycomb lung, pleural effusion, pleural thickening, chest wall, heart, diaphragm, and blood vessel; para. [0055]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ghesu and Taerum to include the feature of Oosawa. One would have been motivated to make this modification because it improves diagnostic reporting and assessment by providing specific, separately identifiable finding regions.

9. Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Ghesu in view of Taerum, and further in view of Sprencz et al. (U.S. Patent Application Pub. No. US 20150063667 A1).

Claim 5: Ghesu and Taerum teach the lesion diagnosing method of claim 1.

Ghesu further teaches wherein the calculating of the [volume] and the location for at least one finding region included in the object region includes: calculating a [volume] for the object region (i.e. where the area of the abnormality patterns in the lungs is determined as the area of the segmented abnormality patterns and the area of the lungs is determined as the area of the segmented lungs; para.
[0020, 0039]); calculating a [volume] and a location for a finding region included in the object region (i.e. where the area of the abnormality patterns in the lungs is determined as the area of the segmented abnormality patterns and the area of the lungs is determined as the area of the segmented lungs; para. [0020, 0038, 0039, 0045]); and calculating a relative [volume ratio] of the finding region to the object region (i.e. the quantitative metric is a percentage of affected lung area (POa) calculated as the total percent area of the lungs that is affected by the disease, as defined in Equation (1); para. [0039]). Ghesu does not explicitly teach calculating a volume and ratio.

However, Taerum further teaches wherein the calculating of the volume and the location for at least one finding region included in the object region includes: calculating a volume for the object region (i.e. The system can automatically measure the volume of the liver, as well as the volume of the lesions that were detected either automatically or manually; para. [0340, 0388]); calculating a volume (i.e. The system can automatically measure the volume of the liver, as well as the volume of the lesions that were detected either automatically or manually; para. [0340, 0388]) and a location for a finding region included in the object region (i.e. The centroid of each connected prediction is defined to be the center of mass of predicted probabilities, the center of the binarized mask, the center of the circumscribing bounding box, or the random location within the segmentation, among other options; para. [0159]); and calculating a relative volume of the finding region to the object region (i.e. The system can automatically measure the volume of the liver, as well as the volume of the lesions that were detected either automatically or manually; para. [0340, 0388]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum. One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings.

However, Sprencz teaches calculating a volume for the object region (i.e. calculating a skeletal volume from the subset of the anatomical image dataset; para. [0010, 00054]); calculating a volume (i.e. The total bone lesion volume is a quantitative value calculated based on a total bone volume of the lesion candidates classified as bone lesions; para. [0054]) and a location for a finding region included in the object region (i.e. identify a location of a lesion candidate; para. [0011, 0058]); and calculating a relative volume ratio of the finding region to the object region (i.e. the bone lesion index is calculated as a ratio of the total bone lesion volume to the total skeletal volume and is represented as a percentage; para. [0054]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ghesu and Taerum to include the feature of Sprencz. One would have been motivated to make this modification because it provides a more accurate and clinically meaningful severity measure than 2D area metrics.

Claim 6: Ghesu, Taerum, and Sprencz teach the lesion diagnosing method of claim 5.

Ghesu further teaches comprising: when there is a plurality of finding regions, calculating a total [volume] for the plurality of finding regions and a relative total [volume ratio] of the plurality of finding regions with respect to the object region (i.e.
the quantitative metric is a percentage of affected lung area (POa) calculated as the total percent area of the lungs that is affected by the disease, as defined in Equation (1); para. [0036, 0039]). Ghesu does not explicitly teach calculating a total volume and total ratio. However, Taerum further teaches when there is a plurality of finding regions, calculating a total volume for the plurality of finding regions and a relative total volume of the plurality of finding regions with respect to the object region (i.e. The at least one processor may determine the volume of all lesion candidates utilizing the generated segmentations; para. [0033, 0363]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum. One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings. However, Sprencz further teaches when there is a plurality of finding regions (i.e. quantitative information regarding the skeletal structure and detected bone lesions may be in regard to individual bone lesions or the total of all bone lesions; para. [0053]), calculating a total volume for the plurality of finding regions (i.e. calculates a patient skeletal metric that represents a total skeletal volume of the patient, a bone lesion metric that represents a total bone lesion volume of the patient; para. [0054]) and a relative total volume ratio of the plurality of finding regions with respect to the object region (i.e. The total bone lesion volume is a quantitative value calculated based on a total bone volume of the lesion candidates classified as bone lesions by lesion detection subroutine 246. 
According to one embodiment, the bone lesion index is calculated as a ratio of the total bone lesion volume to the total skeletal volume and is represented as a percentage; para. [0054]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ghesu and Taerum to include the feature of Sprencz. One would have been motivated to make this modification because it provides a more accurate and clinically meaningful severity measure than 2D area metrics. 10. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ghesu in view of Taerum, and further in view of Maier et al. (U.S. Patent Application Pub. No. US 20160203263 A1). Claim 10: Ghesu and Taerum teach the lesion diagnosing method of claim 1. Ghesu further teaches wherein in the generating of result information for the medical data based on the volume and the location for the finding region includes: calculating a respiratory disease prediction probability score included in the medical data based on a location (i.e. the detection may be formulated as a mapping from the feature space of the segmentation networks, as well as the lung and abnormality pattern segmentations, to a disease score or probability measure of the disease using an image-wise disease classifier or detector (e.g., bounding boxes); para. [0043, 0045]), an absolute volume, and a relative volume of each finding region for a lung volume, and a location, an absolute volume, and a relative volume of each finding region for each lung lobe volume, when there is a plurality of finding regions (i.e. the quantitative metric is a percentage of affected lung area (POa) calculated as the total percent area of the lungs that is affected by the disease, as defined in Equation (1), where the area of the abnormality patterns in the lungs is determined as the area of the segmented abnormality patterns and the area of the lungs is determined as the area of the segmented lungs. 
The quantitative metric may be any other metric suitable for quantifying the disease, such as, e.g., a LSS (lung severity score) calculated, for each lobe of the lungs, as the total percent area of a lobe that is affected by the disease; para. [0036, 0038, 0039, 0042]), and wherein each finding region is any one of: a first finding region corresponding to ground glass opacity (GGO), a second finding region corresponding to consolidation, a third finding region corresponding to reticular opacity, a fourth finding region corresponding to pleural effusion, and a fifth finding region corresponding to emphysema (i.e. the disease is COVID-19 (coronavirus disease 2019) and the abnormality patterns include at least one of GGO (ground glass opacity), consolidation, and crazy-paving pattern; para. [0006]). Ghesu does not explicitly teach the volume, an absolute volume, a relative volume, or a lung lobe volume. However, Taerum further teaches wherein in the generating of result information for the medical data based on the volume and the location for the finding region (i.e. The segmented lesion candidates may be predicted in 2D, and the at least one processor may stack the segmented lesion candidates to create a 3D prediction volume; and combine the segmented lesion candidates in 3D utilizing 6, 18, or 26-connectivity of the 3D prediction volume. The relevant lesion information may include a center location for each lesion, and the at least one processor may calculate the center location as the center of mass of the predicted probabilities; and implement a proposal network that generates the predicted probabilities; para. [0029]) includes: calculating a respiratory disease prediction probability score (i.e. 
The at least one processor may, for each image of the image data, set the class of each pixel to a foreground cancerous anatomical structure class when the cancerous class probability for the pixel is at or above a determined threshold, and set the class of each pixel to a background class when the cancerous class probability for the pixel is below a determined threshold; and store the set classes as a label map in the at least one nontransitory processor-readable storage medium; para. [0032]) included in the medical data based on a location, an absolute volume, and a relative volume of each finding region for a lung volume, and a location, an absolute volume, and a relative volume of each finding region for each lung volume, when there is a plurality of finding regions (i.e. The at least one processor may determine the volume of all lesion candidates utilizing the generated segmentations; para. [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ghesu to include the feature of Taerum. One would have been motivated to make this modification because it provides a predictable improvement in the reported diagnostic results by quantitatively characterizing the detected findings. However, Maier teaches calculating a respiratory disease prediction probability score included in the medical data based on a location, an absolute volume, and a relative volume of each finding region for a lung volume, and a location, an absolute volume, and a relative volume of each finding region for each lung lobe volume, when there is a plurality of finding regions (i.e. One example of such an imaging biomarker could be the relative volume of low-density tissue in the upper lobes of the lungs on a patient's CT images. 
From an analysis of comparison images, such as previously obtained CT images of the lungs from other individuals for whom the corresponding health status and/or outcomes are known, it may be determined with high statistical significance that the prevalence of lung cancer is five times (5×) higher in patients who have more than 10% relative volume of low density tissue in the upper lobes of their lungs compared to patients with no low density tissue (i.e. normal patients); para. [0021, 0031]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ghesu and Taerum to include the feature of Maier. One would have been motivated to make this modification because it provides improved severity scoring. Claim Rejections - 35 USC § 102 11. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 12. Claims 11-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Taerum et al. (U.S. Patent Application Pub. No. US 20200085382 A1). Claim 11: Taerum teaches a user terminal for lesion diagnosis (i.e. 
methods and articles that provide users with a case-specific graphical user interface (GUI) and workflow to assist physicians in screening for, measuring and tracking specific conditions; para. [0356]), comprising: a processor including one or more cores (i.e. The processor-based device 6204 may include one or more processors 6206; para. [0495]); a memory (i.e. the system memory 6208; para. [0495]); and an output unit for providing a user interface (i.e. The at least one processor may cause a display to present the segmentations to a user as a mask or contours; and implement a tool that is controllable via a cursor and at least one button, in operation, the tool edits the segmentations via addition or subtraction; para. [0034, 0500]), wherein the user interface displays result information for medical data in response to medical data input (i.e. the process 400 begins at 402 when a study or multiple studies are uploaded. The process 400 takes a study and generates lesion proposals at 404. From these proposals, lesion candidates are determined at 406 and classified as either a true positive (True) or false positive (False) at 408. Note that (404, 406) is described in further detail in FIG. 11. At 410, the system determines the classification of each module. For each lesion candidate, if the classification determined at 410 to be negative, it is not considered any further at 412. If the classification is positive, the lesion is segmented at 414. If there are further studies that have not been processed, which is determined at 416, steps 402-414 are repeated. If there are not any further studies to be processed, it is assessed whether there are multiple studies at 418. If there are not, the results are displayed at 424 on a display of the system. If there are multiple studies, they are co-registered at 420, and lesion candidates between each scan are longitudinally identified at 422, at which point the results are displayed at 424; para. 
[0111]), and wherein the result information for the medical data is generated (i.e. displays the total volume of the segmented voxels and other measurements of the segmentation's physical extent; para. [0340]) based on a result of calculating a volume (i.e. Volumes may be calculated by counting the voxels within the 3D mask and multiplying by the volume of each voxel in mL or mm; para. [0033, 0139]) and a location for at least one finding region included in an object region (i.e. Those labels may take on many forms, depending on the specific CNN implementation, including but not limited to: Lesion diagnosis (e.g., malignancy, type of malignant lesion, overall type of lesion including benign and malignant lesions); lesion characteristics (e.g., size, shape, margin, opacity, heterogeneity); characteristics of the tissue surrounding the lesion; location of the lesion within the body; para. [0300]), based on the at least one finding region related to a specific lesion and the object region for lesion diagnosis detected from the medical data (i.e. two unique CNNs are joined end to end; the first CNN proposes locations of potential lesions with a focus on high sensitivity, and the second CNN sorts through these proposed lesions and discards results determined to be false positives … This CNN model evaluates image patches centered on the localized lesion locations 5406 and calculates the segmentation of the lesion represented in the image data; para. [0410, 0411]). Claim 12: Taerum teaches the user terminal for lesion diagnosis of claim 11. Taerum further teaches wherein the result information for the medical data includes at least one of: summary information for the object region for lesion diagnosis and the finding region included in the object region, prediction probability information for respiratory disease, and a distribution image of the finding region included in the object region for lesion diagnosis (i.e. 
The system can automatically report findings and their characterizations based on standard reporting templates and inputs created by both automated systems or users … the automatic report can also be a graphical report containing tables and images that describe the evolution of the findings over time. FIG. 53 is a GUI 5300 that shows an excerpt of an automated report that collects all characteristics of each finding; para. [0338, 0344, 0290, 0398, 0399]). Claim 13: Taerum teaches the user terminal for lesion diagnosis of claim 11. Taerum further teaches wherein the user interface displays result information for the medical data in response to a user input (i.e. FIG. 54 is a flow diagram of a process 5400 of operating a processor-based system to store information about a pre-localized region of interest in image data and to reveal such information upon user interaction, according to one illustrated implementation; para. [0410, 0415]), and the result information for the medical data is extracted from a database (i.e. The presence of the lesion in the database is assessed at 5426; para. [0416]) in which result information generated based on the volume and the location for at least one finding region included in the object region is stored (i.e. The segmentations are stored at 5412 in a database at 5420 … This metadata can include, but is not limited to, the features of the lesion, including one or more of size, shape, margin, opacity, or heterogeneity, the location of the lesion within the body … The classifications are stored at 5418 in a database at 5420. In at least one implementation, the metadata arrays are stored with a key that is a concatenation of the series unique identifier and lesion world center location in x, y, and z, but other keys, such as those that also utilize the study unique identifier or lesion position in pixel space, may also be used; para. [0139, 0412-0414]). 
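The volume and centroid computations Taerum is cited for above (counting voxels within a 3D mask and multiplying by the per-voxel volume, para. [0139]; taking the center of mass of predicted probabilities as one centroid option, para. [0159]) can be sketched as follows. This is an illustrative reconstruction, not Taerum's code; the function names and the 1 mm isotropic voxel spacing are assumptions:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary 3D mask: voxel count times per-voxel volume,
    converted from cubic millimeters to milliliters."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

def centroid(prob: np.ndarray) -> np.ndarray:
    """Center of mass of predicted probabilities, in voxel coordinates."""
    coords = np.indices(prob.shape).reshape(prob.ndim, -1)
    return coords @ prob.ravel() / prob.sum()

# Toy lesion: a 10x10x10 block of probability 1.0 inside a 20^3 scan volume.
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 1.0
print(mask_volume_ml(vol))  # 1000 voxels * 1 mm^3 each = 1.0 mL
print(centroid(vol))        # center of the block, (9.5, 9.5, 9.5)
```

With uniform probabilities the center of mass coincides with the geometric center of the binarized mask, which is why Taerum lists the two as interchangeable centroid definitions.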
Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Wang et al. (Pub. No. US 11076824 B1), FIG. 7, at step S710, method 700 may also include providing a diagnostic output based on the processing of the 3D lung image and the determined pixel level lesion mask. In some embodiments, the diagnostic output may further include the output of network 800, such as the 3D pixel level lesion mask, the lesion related information (e.g., location, quantity, and volume/size). In some embodiments, the diagnostic output may additionally include the input of the medical data, such as 3D lung image, the 3D lung ROI, the disease type, and the like. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)). Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303)297-4266. The examiner can normally be reached on Monday - Thursday - 8:00 am - 5:00 pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell can be reached on 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TAN H TRAN/Primary Examiner, Art Unit 2141

Prosecution Timeline

Jun 06, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668: BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Patent 12579420: Analog Hardware Realization of Trained Neural Networks (2y 5m to grant; granted Mar 17, 2026)
Patent 12579421: Analog Hardware Realization of Trained Neural Networks (2y 5m to grant; granted Mar 17, 2026)
Patent 12572850: METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF (2y 5m to grant; granted Mar 10, 2026)
Patent 12572326: DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60%
With Interview: 92% (+31.8%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
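The headline figures above follow from simple ratios over the examiner's record: 184 grants out of 307 resolved cases, with the with-interview number approximated here as the base rate plus the reported lift. That additive step is a simplification for illustration; the dashboard's actual model may differ:

```python
granted, resolved = 184, 307     # examiner's career totals shown above
allow_rate = granted / resolved  # ~0.599, displayed as 60%
interview_lift = 0.318           # the +31.8% lift reported for interviews

print(f"{allow_rate:.1%}")                             # 59.9%
print(f"{min(allow_rate + interview_lift, 1.0):.0%}")  # 92%
```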
