Prosecution Insights
Last updated: April 19, 2026
Application No. 17/948,454

CORRECTION OF GEOMETRIC MEASUREMENT VALUES FROM 2D PROJECTION IMAGES

Status: Non-Final OA (§103)
Filed: Sep 20, 2022
Examiner: LU, ZHIYU
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Siemens Healthcare GmbH
OA Round: 3 (Non-Final)
Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 49% of resolved cases (374 granted / 759 resolved; -12.7% vs TC avg)
Interview Lift: +13.9% for resolved cases with interview (moderate, ~+14%)
Avg Prosecution: 3y 8m typical timeline (57 applications currently pending)
Total Applications: 816 across all art units (career history)
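As a sanity check, the headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch using only the counts shown on this page (the +13.9% interview lift is taken from the page as given, not recomputed):

```python
# Career allow rate from the counts shown above.
granted, resolved = 374, 759
allow_rate = granted / resolved                 # ≈ 0.493, displayed as 49%

# Grant probability with an interview: the baseline allow rate plus the
# page's reported +13.9% interview lift.
interview_lift = 0.139
with_interview = allow_rate + interview_lift    # ≈ 0.632, displayed as 63%

print(f"allow rate: {allow_rate:.0%}")          # allow rate: 49%
print(f"with interview: {with_interview:.0%}")  # with interview: 63%
```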

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Deltas are versus a Tech Center average estimate. Based on career data from 759 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/15/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karade et al. (US 2021/0007806) in view of Onozuka et al. (US 2024/0362771) and Feilkas (US 2013/0094742).

To claim 1, Karade teach a method for correcting a 2D measurement value (abstract, Fig. 28), the method comprising: receiving 2D image data of an examination object (paragraphs 0075-0076); determining a value of a 2D measurement based on the 2D image data; detecting landmarks in the 2D image data; estimating 2D positions of the landmarks (Fig. 3A; paragraphs 0085-0089); and predicting a corrected 2D measurement value of the examination object using a trained model (Figs. 23, 28; paragraphs 0195-0225), the trained model based on the 2D image data (paragraphs 0168-0169, 0174, template projection contour points may be adapted to the input contour using self-organizing maps technique, wherein the learning process of said adaptability is considered trained), the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (paragraphs 0095, 0109-0121). But, Karade do not expressly disclose wherein the corrected 2D measurement value is a value of the 2D measurement with the examination object in the reference 3D orientation.

Onozuka teach a visual inspection method correcting 2D coordinates of an examination object with reference to 3D data of the examination subject (Figs. 3, 5; paragraphs 0007, 0036-0053). In furthering similar imaging process in medical application, Feilkas teach a method for determining an imaging direction and calibration of an imaging apparatus (abstract), wherein an obtained 2D image is corrected based on said 2D image and the 3D reference data (paragraphs 0103-0112), wherein imaging direction can be defined as an orientation or a spatial angle of the imaging beam with respect to the object (paragraph 0010), and wherein first imaging direction can differ from a second imaging direction (Figs. 2A-3C; paragraphs 0054, 0115, 0019).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teachings of Onozuka and Feilkas into the method of Karade with machine learning based algorithm, in order to improve adaptability on 2D measurement correction.

To claim 12, Karade, Onozuka and Feilkas teach a correction device, comprising: an input interface to receive 2D image data of an examination object; a landmark detection unit to detect landmarks in the 2D image data, and to estimate 2D positions of the landmarks; and a prediction unit to predict a corrected measurement value of the examination object using a trained model, the trained model based on the 2D image data, the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (as explained in response to claim 1 above).

To claim 13, Karade, Onozuka and Feilkas teach a medical imaging system, comprising: an acquisition unit to acquire measuring data from an examination object; a post-processor to generate post-processed 2D image data based on the measuring data; and the correction device according to claim 12 (as explained in response to claim 12 above).

To claim 14, Karade, Onozuka and Feilkas teach a non-transitory computer program product with a computer program, which is loadable into a memory device of a medical imaging system, the computer program including program sections that, when executed by the medical imaging system, cause the medical imaging system to perform the method according to claim 1 (as explained in response to claim 1 above).

To claim 15, Karade, Onozuka and Feilkas teach a non-transitory computer readable medium storing program sections that, when executed by at least one processor of a medical imaging system, cause the medical imaging system to perform the method according to claim 1 (as explained in response to claim 1 above).

To claim 16, Karade, Onozuka and Feilkas teach a correction device, comprising: a memory storing computer-executable instructions; and at least one processor configured to execute the computer-executable instructions to cause the correction device to detect landmarks in 2D image data of an examination object, estimate 2D positions of the landmarks, and predict a corrected measurement value of the examination object using a trained model, the trained model based on the 2D image data, the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (as explained in response to claim 1 above).

To claim 2, Karade, Onozuka and Feilkas teach claim 1. Karade, Onozuka and Feilkas teach wherein the examination object comprises at least one of an organ of a patient, a part of a body of the patient, a limb of the patient, or a chest of the patient (Karade, Fig. 28).

To claim 3, Karade, Onozuka and Feilkas teach claim 1. Karade, Onozuka and Feilkas teach wherein the trained model comprises a first trained model and a second trained model to be carried out one after another (obvious in Karade, page 521, having different trained models for various optimizers and learning parameters).

To claim 4, Karade, Onozuka and Feilkas teach claim 3. Karade, Onozuka and Feilkas teach wherein an input of the first trained model includes the 2D image data, and an output of the first trained model includes estimated 3D orientation parameters (Karade, paragraphs 0152-0153, 0164).

To claim 5, Karade, Onozuka and Feilkas teach claim 4. Though Karade, Onozuka and Feilkas do not expressly disclose wherein an input for the second trained model includes the estimated 3D orientation parameters, the reference parameter of the reference 3D orientation of the examination object, and the value of the 2D measurement to be corrected, the value of the 2D measurement value depending on the 2D positions of the landmarks, and an output of the second trained model includes the corrected 2D measurement value, such feature is well-known practice in the art which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Karade, Onozuka and Feilkas, in order to further implement detail of trained models, hence Official Notice is taken.

To claims 6, 17 and 18, Karade, Onozuka and Feilkas teach claims 3, 4 and 5. Though Karade, Onozuka and Feilkas do not expressly disclose wherein an input of the second trained model includes a difference between an estimated 3D orientation and the reference 3D orientation, such feature is well-known practice in the art which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Karade, Onozuka and Feilkas, in order to further implement detail of trained models, hence Official Notice is taken.

To claim 7, Karade, Onozuka and Feilkas teach claim 3. Karade, Onozuka and Feilkas teach wherein the first trained model is trained based on a multitude of synthetic 2D images of the examination object corresponding to different 3D orientation parameters (Karade, Fig. 19; paragraphs 0005, 0095, 0125-0128, 0152-0153).

To claim 8, Karade, Onozuka and Feilkas teach claim 4. Karade, Onozuka and Feilkas teach further comprising: estimating the estimated 3D orientation parameters, the estimating including segmenting anatomical structures of the 2D image data, localizing the segmented anatomical structures, and predicting 3D orientation parameters based on positions of the localized segmented anatomical structures (Karade, paragraphs 0084, 0096, 0109-0119, 0146-0148).

To claim 9, Karade, Onozuka and Feilkas teach claim 8. Karade, Onozuka and Feilkas teach wherein the first trained model is configured to carry out the segmenting, which is trained by a multitude of synthetic 2D images, and wherein the output of the first trained model includes a label mask with segmentations (as explained in response to claims 7-8 above).

To claim 10, Karade, Onozuka and Feilkas teach claim 9. Karade, Onozuka and Feilkas teach wherein measurement values concerning the positions of the localized segmented anatomical structures are taken from the label mask, and the 3D orientation parameters are predicted based on the measurement values (as explained in response to claims 8-9 above).

To claims 11, 19 and 20, Karade, Onozuka and Feilkas teach claims 4, 5 and 8. Though Karade, Onozuka and Feilkas do not expressly disclose wherein at least one of estimating of the estimated 3D orientation parameters in the 2D image data includes estimating a probability density function of the estimated 3D orientation parameters in the 2D image data, or the predicting of the corrected measurement value of the examination object includes determining a probability density function of the corrected measurement value, feature such as providing probability density function or error estimation is well-known practice in the art, which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate for improving measurement result, hence Official Notice is taken.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571)272-2837. The examiner can normally be reached Weekdays: 8:30AM - 5:00PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol can be reached on (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

ZHIYU LU
Primary Examiner, Art Unit 2669

/ZHIYU LU/
Primary Examiner, Art Unit 2665
January 14, 2026
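As a reading aid only, the claim-1 method as mapped in the rejection (receive 2D image data, detect landmarks and estimate their 2D positions, then predict a corrected measurement from a trained model given the landmark positions and a reference 3D-orientation parameter) can be sketched as below. Every name, the stub model, and its correction formula are hypothetical illustrations; none of it comes from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class StubModel:
    """Hypothetical stand-in for the claimed trained model: a linear
    correction driven by the estimated-vs-reference orientation gap."""
    gain: float = 0.5

    def predict(self, raw_value, estimated_angle, reference_angle):
        # Push the 2D measurement toward its value at the reference
        # 3D orientation (illustrative formula, not from the claims).
        return raw_value * (1 + self.gain * (reference_angle - estimated_angle))

def detect_landmarks(image_2d):
    # Placeholder: a real system would run a landmark detector on the
    # 2D image data and return estimated 2D positions.
    return [(10, 20), (40, 20)]

def measure(landmarks):
    # A 2D measurement derived from landmark positions (here: x-distance).
    (x1, _), (x2, _) = landmarks
    return abs(x2 - x1)

def correct_measurement(image_2d, estimated_angle, reference_angle, model):
    landmarks = detect_landmarks(image_2d)   # detect + estimate 2D positions
    raw = measure(landmarks)                 # value of the 2D measurement
    return model.predict(raw, estimated_angle, reference_angle)

corrected = correct_measurement(image_2d=None, estimated_angle=0.1,
                                reference_angle=0.0, model=StubModel())
print(corrected)  # 28.5 for these toy inputs
```

The two-model structure recited in claims 3-5 would split StubModel into a first model (2D image data in, estimated 3D orientation parameters out) feeding a second model that outputs the corrected value.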

Prosecution Timeline

Sep 20, 2022: Application Filed
Nov 30, 2024: Non-Final Rejection — §103
Feb 14, 2025: Interview Requested
Feb 20, 2025: Examiner Interview Summary
Feb 20, 2025: Applicant Interview (Telephonic)
Mar 04, 2025: Response Filed
Apr 08, 2025: Final Rejection — §103
Jun 26, 2025: Interview Requested
Jul 09, 2025: Response after Non-Final Action
Aug 01, 2025: Applicant Interview (Telephonic)
Aug 01, 2025: Examiner Interview Summary
Sep 15, 2025: Request for Continued Examination
Oct 01, 2025: Response after Non-Final Action
Jan 14, 2026: Non-Final Rejection — §103
Apr 08, 2026: Examiner Interview Summary
Apr 08, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601695: METHOD FOR MEASURING THE DETECTION SENSITIVITY OF AN X-RAY DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597268: METHOD AND DEVICE FOR DETERMINING LANE OF TRAVELING VEHICLE BY USING ARTIFICIAL NEURAL NETWORK, AND NAVIGATION DEVICE INCLUDING SAME (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596187: METHOD, APPARATUS, AND SYSTEM FOR WIRELESS SENSING MEASUREMENT AND REPORTING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592052: INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581142: APPROACHES FOR COMPRESSING AND DISTRIBUTING IMAGE DATA (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 63% (+13.9%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 759 resolved cases by this examiner. Grant probability derived from career allow rate.
