Prosecution Insights
Last updated: April 19, 2026
Application No. 18/011,888

Determining Chest Conditions from Radiograph Data via Machine Learning

Final Rejection §103

Filed: Dec 21, 2022
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 63% (Moderate)
Expected OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% (376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: +23.0% on resolved cases with interview
Avg Prosecution: 2y 9m (35 currently pending)
Total Applications: 635 across all art units
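
The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic follows; note that the case counts for the with-interview subset are not shown on this page, so the stated +23.0% lift is applied directly rather than recomputed from raw counts.

```python
# Career allow rate and interview-adjusted rate, from the figures shown above.
granted = 376
resolved = 600

career_allow_rate = granted / resolved  # 0.6267 -> displayed as 63%

# The with-interview rate is the career rate plus the stated +23.0% lift;
# the underlying with/without interview case counts are not shown here.
interview_lift = 0.23
with_interview = career_allow_rate + interview_lift  # ~0.857 -> displayed as 86%

print(f"{career_allow_rate:.0%}, {with_interview:.0%}")  # 63%, 86%
```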

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 600 resolved cases
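
The per-statute figures are reported as a rate plus a delta against the Tech Center average estimate; subtracting each delta from the examiner's rate recovers the implied baseline. A small sketch, with values transcribed from the table above:

```python
# Examiner's per-statute rates (%) and deltas vs the Tech Center average,
# transcribed from the Statute-Specific Performance table above.
examiner_rate = {"101": 8.8, "103": 55.5, "102": 14.9, "112": 18.1}
delta_vs_tc = {"101": -31.2, "103": 15.5, "102": -25.1, "112": -21.9}

# Implied Tech Center baseline: rate minus delta.
tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four statutes imply the same 40.0% baseline, consistent with a single Tech Center average estimate applied across statutes.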

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see page 8, filed 10/7/2025, with respect to the specification objection have been fully considered and are persuasive. The objection of the specification has been withdrawn. Applicant's arguments, see page 8, filed 10/7/2025, with respect to the claim objection have been fully considered and are persuasive. The objection of the claims has been withdrawn.

Applicant's arguments with respect to claims 1-8 have been considered but are moot because the new ground of rejection does not rely on all references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The arguments state that the applied prior art does not perform the amended features of the independent claim. This deficiency is cured by the Slomka reference, as described below.

Regarding the Slomka reference, the reference teaches separate models used to perform segmentation of the x-ray data and to produce an attention mask from the x-ray data. The different models output different information that is combined into data used for scoring, which indicates the risk of a disease or lesion being present within an output image. These details are taught in ¶ [124], [125], [136] and [141]-[144]. Thus, this reference in combination with the primary reference performs the claimed features of "wherein the machine-learned pathology model generates feature maps based on the segmented portion of the chest x-ray data generated with a segmentation model, wherein the machine-learned pathology model generates an attention mask based on processing the patient data with a detection model, and wherein the risk data is generated based on the feature maps and the attention mask".
Therefore, based on the above, the features of the claims are disclosed below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Qin et al. (IDS document 8, "A New Resource on artificial intelligence powered computer automated detection software products for tuberculosis programmes and implementers", Received Date: 9/2/2020) in view of Slomka (US Pub 20250111516 (Prov. App. Filing Date: 3/19/2021)).

Re claim 1: (Original) Qin et al. discloses a computer-implemented method, the computer-implemented method comprising: receiving, by a computing system comprising one or more processors, patient data comprising a chest x-ray of a patient (e.g. a chest x-ray (CXR) can be input into systems in order to have AI recognize abnormalities within the CXR, which is taught on page 1, col. 2; page 2, col. 1, l. 14; and page 2, col. 2, third paragraph, discussing "Almost all CAD products".); processing, by the computing system, the patient data with a machine-learned pathology model to generate risk data predicting a risk that the patient has a chest condition (e.g. the respective systems output risk data on a scale that predicts the rate of abnormality for TB, which is taught on page 2 in the paragraph discussing the output with the phrase "The output of a CAD". The several different systems use a machine learning model based on deep learning to output the TB risk data, which is taught on page 4, Table 1 (continued).); wherein the machine-learned pathology model is trained to segment chest x-ray data and generate the risk data based at least in part on a segmented portion of the chest x-ray data (e.g. the machine learning model is trained to segment the CXR to indicate a segmented area, or using bounding boxes, to indicate an abnormality. The specific abnormality within the bounding box or specific area is scored for the probability score, which is taught in Table 1 (continued) on page 4 regarding the LUNIT system and other systems.); and providing, by the computing system, the risk data as an output (e.g. the heatmap or bounding box areas surrounding the abnormality areas are output for display, which is discussed on page 2 in the paragraph starting with the phrase "The output of a CAD". Table 1 on page 4 discloses the report and output of the models.).

However, Qin et al. fails to specifically teach the features of wherein the machine-learned pathology model generates feature maps based on the segmented portion of the chest x-ray data generated with a segmentation model, wherein the machine-learned pathology model generates an attention mask based on processing the patient data with a detection model, and wherein the risk data is generated based on the feature maps and the attention mask. However, this is well known in the art as evidenced by Slomka. Similar to the primary reference, Slomka discloses a neural network that identifies a lung lesion (same field of endeavor or reasonably pertinent to the problem).
Slomka discloses wherein the machine-learned pathology model generates feature maps based on the segmented portion of the chest x-ray data generated with a segmentation model, wherein the machine-learned pathology model generates an attention mask based on processing the patient data with a detection model, and wherein the risk data is generated based on the feature maps and the attention mask (e.g. the invention discloses a convolutional LSTM that contains an attention branch that generates an attention mask based on the sequential input of image slices. The attention map or mask is output and is combined with segmentation data from a DenseNet. The combined output can be used to produce a score that indicates the severity of a disease or lesion. This is explained in ¶ [124], [125], [136] and [141]-[144].).

[0124] At block 210, a quantitative score is generated using the plurality of output target masks. Generating a quantitative score can include generating one or more measurements using the plurality of output target masks. For example, measurements that can be generated using one or more of the plurality of output target masks include three-dimensional volumes of the target, two-dimensional areas of the target, one-dimensional lengths of the target, intensities associated with the target (e.g., maximum brightness or average brightness of all voxels within a target region), and other such measurements. These measurements can be compared with other measurements, such as volumes, areas, targets, or intensities of regions that are not the target region.

[0125] In some cases, the plurality of output target masks can be used to generate non-measurement scores, such as a score in a range of zero to five that indicates the severity of a condition associated with the target.

[0136] In some cases, at block 220, a display can be generated using the plurality of output target masks. The display can be any visual indication of information associated with the plurality of output target masks. In an example, the plurality of output target masks from block 208 can be applied to the medical imaging data from block 202 to generate a two-dimensional or three-dimensional image of the subject of the medical imaging data (e.g., a lung or a heart), with the target regions (e.g., lesions) highlighted or otherwise presented in a visually distinguishable manner. Generating the display at block 220 can include generating the display and presenting the display using a display device (e.g., a computer monitor). In some cases, generating the display at block 220 can include printing the display on a medium, such as by creating a two-dimensional or three-dimensional print of the subject with the target regions presented in a distinguishable fashion (e.g., visually distinguishable by color or pattern, or tactilely distinguishable by a surface pattern or material change). In some cases, generating the display at block 220 can further include generating the display using the quantitative score from block 210. For example, the display can include the quantitative score.

[0141] Each segmentation head 312, 320 comprises three blocks: the first two blocks each include a 3×3 convolutional layer followed by a batch normalization layer and a LeakyRelu activation layer, while the final block includes a 1×1 convolutional layer. The attention head 314 is identical to the segmentation head 312 in structure, with the only difference being the last block is followed by an additional Sigmoid layer.

[0142] The main branch 308 receives a single image slice 308 as input. The attention branch 302 receives as input multiple image slices, which can be multiple sequential image slices 306. The multiple sequential image slices 306 received by the attention branch 302 generally include the single image slice 308 and one or more adjacent slices. In some cases, the attention branch 302 makes use of the single image slice 308, an immediately previous image slice, and an immediately subsequent image slice. Thus, the ConvLSTM block 310 processes sequentially a total of three consecutive slices and conveys its output to the attention head 314 and segmentation head 312 of the attention branch 302.

[0143] The output of the attention head 314 can then be multiplied with the respective semantic feature maps of the two segmentation heads 312, 320, and then combined to achieve the output target mask 322. In other words, S_out = α·S_main + (1 − α)·S_attn, where S_out is the output target mask 322, S_main is the output feature map of the segmentation head 320 of the main branch 304, S_attn is the output feature map of the segmentation head 312 of the attention branch 302, and α is the output of the attention head.

[0144] In some cases, the attention head 314 can output an attention map 316.

Therefore, in view of Slomka, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the machine-learned pathology model generates feature maps based on the segmented portion of the chest x-ray data generated with a segmentation model, wherein the machine-learned pathology model generates an attention mask based on processing the patient data with a detection model, and wherein the risk data is generated based on the feature maps and the attention mask, incorporated in the device of Qin et al., in order to segment and quantify medical imaging data using machine learning models, which can save on computational resources while giving faster and more accurate results (as stated in Slomka ¶ [43]).
Re claim 2: (Currently Amended) The computer-implemented method of claim 1, wherein processing, by the computing system, the patient data with the machine-learned pathology model to generate the risk data comprises: processing, by the computing system, the patient data with a segmentation model to generate pixel data descriptive of identified pixels corresponding to a patient's lungs (e.g. a heat map, which is a digital image comprised of pixels, is generated as an output and areas within the heat map can be indicated for abnormality. This is discussed on page 2 in the paragraph starting with the phrase "The output of a CAD" and on page 4 in Table 1 (continued).); processing, by the computing system, the patient data with a detection model to generate detection data descriptive of regions of the chest x-ray with detected features (e.g. the heat maps generated can be further developed to show areas within the CXR that contain abnormality features, which is taught on page 2 in the paragraph starting with the phrase "The output of a CAD" and on page 4 in Table 1 (continued).); and generating, by the computing system, the risk data with a classification model based at least in part on the pixel data and the detection data (e.g. the system utilizes the CXR pixel information along with the detected abnormality of the CXR in order to generate the abnormality scores, which is taught on page 4, Table 1 (continued), in relation to the LUNIT system and others.).

Re claim 5: (Currently Amended) Qin et al. discloses the computer-implemented method of claim 1, wherein the machine-learned pathology model comprises a deep learning system architecture trained on a plurality of training examples from a plurality of patients from a plurality of countries (e.g. the systems use a deep learning architecture with training data from a plurality of countries, which is taught on pages 4 and 5 in Table 1.).
Re claim 6: (Currently Amended) Qin et al. discloses the computer-implemented method of claim 1, further comprising: determining, by the computing system, a follow-up action based at least in part on the risk data (e.g. based on the areas found to be a part of the abnormality areas, clinicians are alerted to the condition or patients are prioritized in the review order, which is taught on page 3, Table 1.).

Re claim 7: (Currently Amended) Qin et al. discloses the computer-implemented method of claim 1, wherein processing, by the computing system, the patient data with the machine-learned pathology model further comprises: determining, by the computing system, attributions in the patient data (e.g. the system determines different areas that show an abnormality in the CXR, which is taught on page 2 in the paragraph starting with the phrase "The output of a CAD" and on page 4 in Table 1 (continued).); and overlaying, by the computing system, the attributions on the chest x-ray of the patient to provide visual cues of suspicious areas (e.g. some systems include bounding boxes around abnormality areas that indicate a high risk of TB, which is taught on page 4, Table 1 (continued).).

Re claim 8: (Currently Amended) Qin et al. discloses the computer-implemented method of claim 1, wherein the risk data is indicative of at least one of positive, negative, or non-tuberculosis abnormality (e.g. some systems include positive indications of TB after analyzing the CXR, which is taught on pages 3 and 4, Table 1.).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Qin et al., as modified by the features of Slomka, as applied to claim 1 above, and further in view of Joo Er et al. ("Attention pooling-based convolutional neural network for sentence modeling", Date: 5/27/2016).

Re claim 3: (Currently Amended) Qin et al. fails to specifically teach the features of the computer-implemented method of claim 1, wherein the machine-learned pathology model comprises an attention pooling sub-block. However, this is well known in the art as evidenced by Joo Er et al. Similar to the primary reference, Joo Er et al. discloses a CNN including attention pooling (same field of endeavor or reasonably pertinent to the problem). Joo Er et al. discloses wherein the machine-learned pathology model comprises an attention pooling sub-block (e.g. the reference discloses including an attention pooling block within the CNN to retain the most significant information in an evaluated image, which is taught on page 1 in the Abstract.). Therefore, in view of Joo Er et al., it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the machine-learned pathology model comprises an attention pooling sub-block, incorporated in the device of Qin et al., in order to include an attention pooling sub-block within a machine learning model, which can retain the most significant part of the image, improving the image evaluation (as stated in the Joo Er et al. Abstract).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Qin et al., as modified by the features of Slomka, as applied to claim 1 above, and further in view of Murphy et al. ("Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system", Date: 3/8/2019).

Re claim 4: (Currently Amended) Qin et al. fails to specifically teach the features of the computer-implemented method of claim 1, wherein the machine-learned pathology model is trained to have at least 90 percent sensitivity and at least 70 percent specificity. However, this is well known in the art as evidenced by Murphy et al.
Similar to the primary reference, Murphy et al. discloses CAD detection of TB (same field of endeavor or reasonably pertinent to the problem). Murphy et al. discloses wherein the machine-learned pathology model is trained to have at least 90 percent sensitivity and at least 70 percent specificity (e.g. using the CAD4TB v6 system, the sensitivity can be set to 90 or more and the specificity is 70 or greater, which is taught in the Abstract on page 1.). Therefore, in view of Murphy et al., it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the machine-learned pathology model is trained to have at least 90 percent sensitivity and at least 70 percent specificity, incorporated in the device of Qin et al., in order to operate with at least a minimum sensitivity and specificity, which allows for a more cost effective and efficient analysis of the CXR (as stated in Murphy et al., page 1, Abstract).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Putha discloses detection of TB.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON, whose telephone number is (571) 270-1351. The examiner can normally be reached Monday-Friday, 10AM-6PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAD DICKERSON/
Primary Examiner, Art Unit 2682
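
The combination rule quoted from Slomka ¶[0143], S_out = α·S_main + (1 − α)·S_attn, is a per-pixel convex blend of the two segmentation-head feature maps weighted by the attention head's Sigmoid output. A minimal sketch follows; the 2×2 maps and weight values are illustrative only, not values from the reference.

```python
def combine(s_main, s_attn, alpha):
    """Blend two feature maps pixel-wise: alpha*main + (1 - alpha)*attn."""
    return [
        [a * m + (1 - a) * t for m, t, a in zip(row_m, row_t, row_a)]
        for row_m, row_t, row_a in zip(s_main, s_attn, alpha)
    ]

# Illustrative 2x2 maps (not from the reference).
s_main = [[1.0, 0.0], [0.5, 0.2]]   # main-branch segmentation head output
s_attn = [[0.0, 1.0], [0.5, 0.8]]   # attention-branch segmentation head output
alpha  = [[0.75, 0.5], [1.0, 0.0]]  # attention head output (Sigmoid, so in [0, 1])

print(combine(s_main, s_attn, alpha))  # [[0.75, 0.5], [0.5, 0.8]]
```

Because α lies in [0, 1], each output pixel always falls between the corresponding main-branch and attention-branch values, which is what makes the blend a convex combination.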

Prosecution Timeline

Dec 21, 2022: Application Filed
Dec 21, 2022: Response after Non-Final Action
Jul 12, 2025: Non-Final Rejection — §103
Sep 10, 2025: Applicant Interview (Telephonic)
Sep 10, 2025: Examiner Interview Summary
Oct 07, 2025: Response Filed
Dec 14, 2025: Final Rejection — §103
Jan 14, 2026: Examiner Interview Summary
Jan 14, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603960
IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579817
Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12522110
APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12519896
IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
Granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63% (86% with interview, +23.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
