Prosecution Insights
Last updated: April 19, 2026
Application No. 18/362,224

EXPLAINABLE CONFIDENCE ESTIMATION FOR LANDMARK LOCALIZATION

Status: Non-Final OA (§103)
Filed: Jul 31, 2023
Examiner: ZUBERI, MOHAMMED H
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 70% — above average (306 granted / 437 resolved; +15.0% vs TC avg)
Interview Lift: strong, +27.8% across resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 23 currently pending
Career History: 460 total applications across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 437 resolved cases
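As a sanity check on the table above, the per-statute deltas are mutually consistent: each row implies the same Tech Center average baseline. A minimal Python check (the 40% baseline is back-computed here from the published rates and deltas; the source never states it explicitly):

```python
# Each statute's allow rate minus its "vs TC avg" delta should recover the
# same Tech Center average estimate if the dashboard used a single baseline.
rows = {
    "101": (11.3, -28.7),
    "102": (20.8, -19.2),
    "103": (53.6, +13.6),
    "112": (12.7, -27.3),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute back-computes to the same 40.0% baseline
```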

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the patent application as filed on 7/31/2023. This action is made Non-Final. Claims 1-20 are pending in the case. Claims 1, 9, and 17 are independent claims.

Drawings

The drawings filed on 7/31/2023 have been accepted by the Examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wels (USPAT 9,113,781 B2) in view of Murthy (USPUB 20180247107 A1) in further view of Contryman (USPAT 12,056,771 B1).
Claim 1: Wels teaches A system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components comprise:

an access component that accesses a three-dimensional voxel array captured by a medical imaging scanner (Col 3 ln 1-26: Embodiments of the present invention work on a database of 3D medical images and allow end users to set up and tune detector models for detecting various landmarks in the 3D medical images);

an execution component that localizes, via execution of a first deep learning neural network, a set of anatomical landmarks depicted in the three-dimensional voxel array (Col 3 ln 1-26: Embodiments of the present invention provide a system which updates current landmark detection models based on user feedback, in the background without being noticed by the user while the system is in use. Embodiments of the present invention employ on-site landmark detection models, for which the associated data is defined locally at the site of the user and machine learning for model generation and updates are performed locally, as well);

a confidence component that generates a multi-tiered confidence score collection based on the set of anatomical landmarks and based on a localization training dataset on which the first deep learning neural network was trained (Col 7 ln 44-60: The machine learning module 204 then updates the corresponding landmark detection model stored in the detector model database 210 based on the new landmark annotation stored in the landmark annotation database 208. The machine learning module can also update a confidence associated with the landmark detection model corresponding to the landmark.
If no detection model exists for the landmark in the detector model database 210, the machine learning module 204 can generate an initial landmark detection model based in part on the new landmark annotation and store the landmark detection model in the detector model database 210. In an advantageous implementation, the above-described operations for updating the landmark detection model corresponding to the landmark are performed by the machine learning module 204 as background operations while the end user can continue to interact with the interactive 3D medical image viewer).

Wels, by itself, does not seem to completely teach and a classifier component that, in response to one or more confidence scores from the multi-tiered confidence score collection failing to satisfy a threshold, generates, via execution of a second deep learning neural network, a classification label for the one or more confidence scores, wherein the classification label indicates an explanatory factor for why the one or more confidence scores failed to satisfy the threshold. The Examiner maintains that these features were previously well-known, as taught by Murthy and Contryman.

Murthy teaches and a classifier component that, in response to one or more confidence scores from the multi-tiered confidence score collection failing to satisfy a threshold, generates, via execution of a second deep learning neural network, a classification label for the one or more confidence scores (0006, Claims 1 and 6: In response to a determination that the confidence score for the endoscopic image is not higher than the learned confidence threshold, the endoscopic image is classified with a first specialized network classifier... in response to a determination that the confidence score for the endoscopic image is not higher than the learned confidence threshold, classifying the endoscopic image with a first specialized network classifier...
in response to classifying the endoscopic image with the first specialized network classifier: comparing a second confidence value determined for the endoscopic image by the first specialized network classifier to a second learned confidence threshold; in response to a determination that the second confidence score for the endoscopic image is higher than the second learned confidence threshold, outputting the classification of the endoscopic image by the first specialized network classifier; and in response to a determination that the second confidence score for the endoscopic image is not higher than the second learned confidence threshold, classifying the endoscopic image with a second specialized network classifier built on a feature space of the first specialized network classifier).

Wels and Murthy are analogous art because they are from the same problem-solving area, classification of objects in medical images. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wels and Murthy before him or her, to combine the teachings of Wels and Murthy. The rationale for doing so would have been to further train the neural network to refine the classification process. Therefore, it would have been obvious to combine Wels and Murthy to obtain the invention as specified in the instant claim(s).

Wels and Murthy do not seem to completely teach wherein the classification label indicates an explanatory factor for why the one or more confidence scores failed to satisfy the threshold. The Examiner maintains that these features were previously well-known, as taught by Contryman.
Contryman teaches wherein the classification label indicates an explanatory factor for why the one or more confidence scores failed to satisfy the threshold (Col 13 ln 24-49: Decision output builder 236 may then generate a decision result, including the confidence of the decision, the verbose decision explanation, and feature relevance based on feature importance scoring, as discussed above. The decision graph builder 236 may then transmit the decision result with additional information, including the original document image 204, to an organization system 250, a document capture system 252, or a combination of systems. The decision includes the explainability data discussed herein (e.g., a decision with associated confidence, feature importance with feature importance scoring indicating the most relevant features to the decision, a verbose explanation, such as a decision explainability graph, indicating how a decision was reached for a given input, and the original document image or a link to the original document image)... as discussed above, for low confidence decisions that do not satisfy a second confidence threshold value, the decision, low confidence result, document image 238, and the explainability factors (e.g., verbose explanation, feature importance, a rendered decision explainability graph, etc.) are transmitted to an underwriter in a message, a graphical user interface, etc. associated with the organization system 250. The underwriter may review the received information and input a decision by responding to the message, inputting the revised decision in the graphical user interface, changing the decision confidence score, etc).

Wels and Contryman are analogous art because they are from the same problem-solving area, automatic analysis of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wels and Contryman before him or her, to combine the teachings of Wels and Contryman. The rationale for doing so would have been to provide details that will allow for further refinement of the classification process. Therefore, it would have been obvious to combine Wels and Contryman to obtain the invention as specified in the instant claim(s).

Claim 9: Claim 9 recites a computer-implemented method comprising: accessing, by a device operatively coupled to a processor, for completing the steps recited in claim 1, and therefore is rejected over Wels, Murthy and Contryman using the same rationale used above in the rejection of claim 1.

Claim 17: Claim 17 recites a computer program product for facilitating explainable confidence estimation for landmark localization, the computer program product comprising a computer readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to complete the steps recited in claim 1. Wels teaches a computer program product (claim 14); therefore, claim 17 is rejected over Wels, Murthy and Contryman using the same rationale used above in the rejection of claim 1.

Allowable Subject Matter

Claims 2-8, 10-16, and 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:

Claim 2 recites a first tier of the multi-tiered confidence score collection comprises landmark-wise confidence scores respectively corresponding to individual ones of the set of anatomical landmarks, wherein a second tier of the multi-tiered confidence score collection comprises pair-wise confidence scores respectively corresponding to anatomically symmetric pairs of the set of anatomical landmarks, wherein a third tier of the multi-tiered confidence score collection comprises group-wise confidence scores respectively corresponding to anthropometric groups of the set of anatomical landmarks, and wherein a fourth tier of the multi-tiered confidence score collection comprises surface-wise confidence scores respectively corresponding to surface-defining groups of the set of anatomical landmarks. Wels, Murthy and Contryman, neither alone nor in combination, teach every feature of claim 2. Dependent claims 3-6 are allowable as they depend on allowable claim 2.

Claim 7 recites the explanatory factor comprises one or more of the following: that an imaging artifact or acquisition artifact is depicted in the three-dimensional voxel array; that a pathology is depicted in the three-dimensional voxel array; that the three-dimensional voxel array exhibits an incorrect field of view; that the three-dimensional voxel array exhibits an incorrect radiation dosage; or that the three-dimensional voxel array depicts an incorrect anatomy. Though Contryman discusses an explanatory factor (Col 13 ln 24-49), Wels, Murthy and Contryman, neither alone nor in combination, teach every feature of claim 7.
Claim 8 recites the classifier component visually renders, on an electronic display, the classification label and an alert indicating that whichever of the set of anatomical landmarks localized by the first deep learning neural network that correspond to the one or more confidence scores should not be relied upon for downstream inferencing tasks. Wels, Murthy and Contryman, neither alone nor in combination, teach every feature of claim 8.

Claims 10-16 and 18-20 are similarly allowable for the same reasons given above for claims 2-8.

Note

The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims above for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI, whose telephone number is (571) 270-7761. The examiner can normally be reached M-Th 8-6, Fri 7-12/OFF. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED H ZUBERI/
Primary Examiner, Art Unit 2178
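The system architecture recited in claims 1, 2, and 7 can be sketched in code. This is a purely illustrative mock-up assembled from the claim language quoted in the office action; every class, function, and threshold value here is invented for illustration, and the "second deep learning neural network" of claim 1 is replaced by a stand-in callable since the actual implementation is not disclosed:

```python
from dataclasses import dataclass, field


@dataclass
class TieredConfidence:
    """Multi-tiered confidence score collection, mirroring claim 2's four tiers."""
    landmark_wise: dict = field(default_factory=dict)  # per individual landmark
    pair_wise: dict = field(default_factory=dict)      # anatomically symmetric pairs
    group_wise: dict = field(default_factory=dict)     # anthropometric groups
    surface_wise: dict = field(default_factory=dict)   # surface-defining groups

    def failing(self, threshold: float) -> list:
        """Collect every (name, score) entry below the threshold, across all tiers."""
        out = []
        for tier in (self.landmark_wise, self.pair_wise,
                     self.group_wise, self.surface_wise):
            out += [(str(k), v) for k, v in tier.items() if v < threshold]
        return out


# The explanatory factors enumerated in claim 7.
EXPLANATORY_FACTORS = [
    "imaging_or_acquisition_artifact",
    "pathology_depicted",
    "incorrect_field_of_view",
    "incorrect_radiation_dosage",
    "incorrect_anatomy",
]


def review_scan(confidences: TieredConfidence, threshold: float, explain) -> list:
    """Claim 1's classifier step: only scores failing the threshold are passed
    to a second model (`explain`) that labels *why* each one failed."""
    return [(name, score, explain(name, score))
            for name, score in confidences.failing(threshold)]


if __name__ == "__main__":
    conf = TieredConfidence(
        landmark_wise={"L_femoral_head": 0.95, "R_femoral_head": 0.41},
        pair_wise={("L_femoral_head", "R_femoral_head"): 0.38},
    )
    # Stand-in for the second deep learning network of claim 1.
    dummy_explain = lambda name, score: EXPLANATORY_FACTORS[0]
    for name, score, label in review_scan(conf, 0.5, dummy_explain):
        print(f"{name}: {score:.2f} -> {label}")
```

Under this reading, the allowable dependent claims hinge on the four-tier structure of `TieredConfidence` and the specific factor vocabulary, not on thresholded confidence scoring per se, which the cited Murthy and Contryman references already cover.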

Prosecution Timeline

Jul 31, 2023: Application Filed
Dec 09, 2025: Non-Final Rejection (§103)
Mar 03, 2026: Interview Requested
Mar 23, 2026: Applicant Interview (Telephonic)
Apr 04, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585923: DESPARSIFIED CONVOLUTION FOR SPARSE ACTIVATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582478: SYSTEMS AND METHODS FOR INTEGRATING INTRAOPERATIVE IMAGE DATA WITH MINIMALLY INVASIVE MEDICAL TECHNIQUES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579650: IMPROVED SPINAL HARDWARE RENDERING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567496: METHOD AND APPARATUS FOR DISPLAYING AND ANALYSING MEDICAL SCAN IMAGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547819: MODULAR SYSTEMS AND METHODS FOR SELECTIVELY ENABLING CLOUD-BASED ASSISTIVE TECHNOLOGIES (granted Feb 10, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 98% (+27.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
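The headline figures in this card can be reproduced from the career counts reported earlier (306 granted of 437 resolved, +27.8% interview lift). A quick check, assuming the dashboard simply rounds the allow rate and adds the lift in percentage points (its exact methodology is not published):

```python
granted, resolved = 306, 437            # from the examiner's career record above
allow_rate_pct = granted / resolved * 100
print(round(allow_rate_pct))            # 70, the "Grant Probability" figure

interview_lift = 27.8                   # percentage points, from "Interview Lift"
print(round(allow_rate_pct + interview_lift))  # 98, the "With Interview" figure
```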
