DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/15/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Karade et al. (US2021/0007806) in view of Onozuka et al. (US2024/0362771) and Feilkas (US2013/0094742).
To claim 1, Karade teaches a method for correcting a 2D measurement value (abstract, Fig. 28), the method comprising:
receiving 2D image data of an examination object (paragraphs 0075-0076);
determining a value of a 2D measurement based on the 2D image data;
detecting landmarks in the 2D image data; estimating 2D positions of the landmarks (Fig. 3A; paragraphs 0085-0089); and
predicting a corrected 2D measurement value of the examination object using a trained model (Figs. 23, 28; paragraphs 0195-0225), the trained model based on the 2D image data (paragraphs 0168-0169, 0174: template projection contour points may be adapted to the input contour using a self-organizing-maps technique, wherein the learning process underlying said adaptability is considered a trained model), the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (paragraphs 0095, 0109-0121).
However, Karade does not expressly disclose wherein the corrected 2D measurement value is a value of the 2D measurement with the examination object in the reference 3D orientation.
Onozuka teaches a visual inspection method correcting 2D coordinates of an examination object with reference to 3D data of the examination object (Figs. 3, 5; paragraphs 0007, 0036-0053).
In furtherance of a similar imaging process in a medical application, Feilkas teaches a method for determining an imaging direction and calibrating an imaging apparatus (abstract), wherein an obtained 2D image is corrected based on said 2D image and the 3D reference data (paragraphs 0103-0112), wherein the imaging direction can be defined as an orientation or a spatial angle of the imaging beam with respect to the object (paragraph 0010), and wherein a first imaging direction can differ from a second imaging direction (Figs. 2A-3C; paragraphs 0054, 0115, 0019).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Onozuka and Feilkas into the method of Karade with a machine-learning-based algorithm, in order to improve the adaptability of the 2D measurement correction.
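For illustration only, the following is a minimal sketch of the claim 1 pipeline as mapped above — receiving 2D image data, detecting landmarks and their 2D positions, taking a 2D measurement, and predicting a corrected value against a reference 3D orientation parameter. Every function body, name and the cosine placeholder is a hypothetical stand-in and does not represent the actual disclosure of Karade, Onozuka or Feilkas:

    import numpy as np

    def detect_landmarks(image_2d):
        # Hypothetical landmark detector: the three brightest pixels stand
        # in for anatomical landmarks; a real system would use a trained
        # detector or contour adaptation.
        flat = np.argsort(image_2d, axis=None)[-3:]
        return np.stack(np.unravel_index(flat, image_2d.shape), axis=1).astype(float)

    def measure_2d(landmarks):
        # Example 2D measurement: distance between the first two landmarks.
        return float(np.linalg.norm(landmarks[0] - landmarks[1]))

    def predict_corrected(image_2d, landmarks, raw_value, reference_orientation):
        # Stand-in for the trained model: a real model would map the image,
        # the landmark positions and the reference 3D orientation parameter
        # to the measurement value as if the object were in the reference
        # orientation. A foreshortening-style cosine correction is used
        # here purely as a placeholder.
        tilt = reference_orientation.get("tilt_deg", 0.0)
        return raw_value / max(np.cos(np.deg2rad(tilt)), 1e-6)

    image = np.random.rand(64, 64)        # received 2D image data
    lm = detect_landmarks(image)          # landmarks and their 2D positions
    raw = measure_2d(lm)                  # value of the 2D measurement
    corrected = predict_corrected(image, lm, raw, {"tilt_deg": 15.0})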
To claim 12, Karade, Onozuka and Feilkas teach a correction device, comprising:
an input interface to receive 2D image data of an examination object; a landmark detection unit to detect landmarks in the 2D image data, and to estimate 2D positions of the landmarks; and a prediction unit to predict a corrected measurement value of the examination object using a trained model, the trained model based on the 2D image data, the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (as explained in response to claim 1 above).
To claim 13, Karade, Onozuka and Feilkas teach a medical imaging system, comprising:
an acquisition unit to acquire measuring data from an examination object; a post-processor to generate post-processed 2D image data based on the measuring data; and
the correction device according to claim 12 (as explained in response to claim 12 above).
To claim 14, Karade, Onozuka and Feilkas teach a non-transitory computer program product with a computer program, which is loadable into a memory device of a medical imaging system, the computer program including program sections that, when executed by the medical imaging system, cause the medical imaging system to perform the method according to claim 1 (as explained in response to claim 1 above).
To claim 15, Karade, Onozuka and Feilkas teach a non-transitory computer readable medium storing program sections that, when executed by at least one processor of a medical imaging system, cause the medical imaging system to perform the method according to claim 1 (as explained in response to claim 1 above).
To claim 16, Karade, Onozuka and Feilkas teach a correction device, comprising: a memory storing computer-executable instructions; and at least one processor configured to execute the computer-executable instructions to cause the correction device to detect landmarks in 2D image data of an examination object, estimate 2D positions of the landmarks, and predict a corrected measurement value of the examination object using a trained model, the trained model based on the 2D image data, the 2D positions of the landmarks and a reference parameter of a reference 3D orientation of the examination object (as explained in response to claim 1 above).
To claim 2, Karade, Onozuka and Feilkas teach claim 1.
Karade, Onozuka and Feilkas teach wherein the examination object comprises at least one of an organ of a patient, a part of a body of the patient, a limb of the patient, or a chest of the patient (Karade, Fig. 28).
To claim 3, Karade, Onozuka and Feilkas teach claim 1.
Karade, Onozuka and Feilkas teach wherein the trained model comprises a first trained model and a second trained model to be carried out one after another (obvious in Karade, page 521, having different trained models for various optimizers and learning parameters).
To claim 4, Karade, Onozuka and Feilkas teach claim 3.
Karade, Onozuka and Feilkas teach wherein an input of the first trained model includes the 2D image data, and an output of the first trained model includes estimated 3D orientation parameters (Karade, paragraphs 0152-0153, 0164).
To claim 5, Karade, Onozuka and Feilkas teach claim 4.
Though Karade, Onozuka and Feilkas do not expressly disclose wherein an input for the second trained model includes the estimated 3D orientation parameters, the reference parameter of the reference 3D orientation of the examination object, and the value of the 2D measurement to be corrected, the value of the 2D measurement depending on the 2D positions of the landmarks, and an output of the second trained model includes the corrected 2D measurement value, such a feature is a well-known practice in the art that would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Karade, Onozuka and Feilkas, in order to further implement the details of the trained models; hence, Official Notice is taken.
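For illustration only, a minimal sketch of the two-model arrangement addressed in claims 3-5, carried out one after another; the random (untrained) weights are placeholders for trained models, and the input/output layout merely follows the claim language:

    import numpy as np

    rng = np.random.default_rng(0)

    # First model (placeholder weights): flattened 2D image data in,
    # estimated 3D orientation parameters (e.g., three angles) out.
    W1 = rng.normal(size=(3, 64 * 64)) * 1e-3

    def first_model(image_2d):
        return W1 @ image_2d.ravel()

    # Second model (placeholder weights): estimated 3D orientation
    # parameters, reference orientation parameter and the uncorrected
    # 2D measurement value in, corrected 2D measurement value out.
    w2 = rng.normal(size=7) * 0.1

    def second_model(est_params, ref_params, raw_value):
        x = np.concatenate([est_params, ref_params, [raw_value]])
        return float(w2 @ x)

    image = rng.random((64, 64))
    est = first_model(image)                          # model 1, then
    corrected = second_model(est, np.zeros(3), 42.0)  # model 2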
To claims 6, 17 and 18, Karade, Onozuka and Feilkas teach claims 3, 4 and 5.
Though Karade, Onozuka and Feilkas do not expressly disclose wherein an input of the second trained model includes a difference between an estimated 3D orientation and the reference 3D orientation, such a feature is a well-known practice in the art that would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Karade, Onozuka and Feilkas, in order to further implement the details of the trained models; hence, Official Notice is taken.
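A minimal sketch of this input variation, reusing the hypothetical second model above: the difference between the two orientation vectors, rather than the pair of vectors, is fed to the model:

    import numpy as np

    def second_model_input(est_params, ref_params, raw_value):
        # Feed the orientation difference instead of both orientations.
        delta = np.asarray(est_params) - np.asarray(ref_params)
        return np.concatenate([delta, [raw_value]])

    x = second_model_input([10.0, 5.0, 0.0], [0.0, 0.0, 0.0], 42.0)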
To claim 7, Karade, Onozuka and Feilkas teach claim 3.
Karade, Onozuka and Feilkas teach wherein the first trained model is trained based on a multitude of synthetic 2D images of the examination object corresponding to different 3D orientation parameters (Karade, Fig. 19; paragraphs 0005, 0095, 0125-0128, 0152-0153).
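For illustration only, a sketch of building a multitude of synthetic 2D images paired with known 3D orientation parameters, in the manner of training data for the first model; the point-cloud renderer is a crude hypothetical stand-in for a synthetic projection such as a DRR, and only one of the three angles is applied for brevity:

    import numpy as np

    rng = np.random.default_rng(1)
    points_3d = rng.normal(size=(50, 3))  # toy stand-in for anatomy

    def render_synthetic(angles_deg, size=32):
        # Rotate the 3D points by the first orientation angle and splat
        # them onto a 2D grid.
        t = np.deg2rad(angles_deg[0])
        R = np.array([[np.cos(t), -np.sin(t), 0.0],
                      [np.sin(t),  np.cos(t), 0.0],
                      [0.0,        0.0,       1.0]])
        xy = (points_3d @ R.T)[:, :2]
        img = np.zeros((size, size))
        idx = np.clip(((xy + 3.0) / 6.0 * size).astype(int), 0, size - 1)
        img[idx[:, 1], idx[:, 0]] = 1.0
        return img

    # (synthetic image, 3D orientation parameters) training pairs
    dataset = [(render_synthetic(a), a)
               for a in rng.uniform(-30, 30, size=(100, 3))]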
To claim 8, Karade, Onozuka and Feilkas teach claim 4.
Karade, Onozuka and Feilkas teach the method further comprising: estimating the estimated 3D orientation parameters, the estimating including segmenting anatomical structures of the 2D image data, localizing the segmented anatomical structures, and predicting 3D orientation parameters based on positions of the localized segmented anatomical structures (Karade, paragraphs 0084, 0096, 0109-0119, 0146-0148).
To claim 9, Karade, Onozuka and Feilkas teach claim 8.
Karade, Onozuka and Feilkas teach wherein the first trained model is configured to carry out the segmenting and is trained by a multitude of synthetic 2D images, and wherein the output of the first trained model includes a label mask with segmentations (as explained in response to claims 7-8 above).
To claim 10, Karade, Onozuka and Feilkas teach claim 9.
Karade, Onozuka and Feilkas teach wherein measurement values concerning the positions of the localized segmented anatomical structures are taken from the label mask, and the 3D orientation parameters are predicted based on the measurement values (as explained in response to claims 8-9 above).
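For illustration only, a minimal sketch of the segment-localize-predict chain addressed in claims 8-10; the thresholding segmenter and the fixed linear regressor are hypothetical placeholders for trained models:

    import numpy as np

    def segment(image_2d):
        # Placeholder segmentation: thresholding stands in for a trained
        # model; the output is a label mask with segmentations.
        return (image_2d > image_2d.mean()).astype(int)

    def localize(label_mask):
        # Measurement values taken from the label mask: here, the centroid
        # of the segmented structure.
        ys, xs = np.nonzero(label_mask)
        return np.array([ys.mean(), xs.mean()])

    def predict_orientation(positions):
        # Placeholder regressor: positions of the localized structures in,
        # 3D orientation parameters out.
        W = np.array([[0.10, 0.00], [0.00, 0.10], [0.05, 0.05]])
        return W @ positions

    image = np.random.rand(64, 64)
    params = predict_orientation(localize(segment(image)))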
To claims 11, 19 and 20, Karade, Onozuka and Feilkas teach claims 4, 5 and 8.
Though Karade, Onozuka and Feilkas do not expressly disclose wherein at least one of the estimating of the estimated 3D orientation parameters in the 2D image data includes estimating a probability density function of the estimated 3D orientation parameters in the 2D image data, or the predicting of the corrected measurement value of the examination object includes determining a probability density function of the corrected measurement value, a feature such as providing a probability density function or an error estimation is a well-known practice in the art that would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate in order to improve the measurement result; hence, Official Notice is taken.
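For illustration only, a minimal sketch of a predictor that outputs a probability density function over the corrected measurement value rather than a point estimate; the Gaussian parameterization and the placeholder weights are assumptions, not the disclosure of any cited reference:

    import numpy as np

    def predict_with_pdf(features):
        # Placeholder model emitting a mean and a variance that
        # parameterize a Gaussian density over the corrected value.
        mean = float(np.dot(features, np.full(len(features), 0.1)))
        var = 1.0 + 0.05 * abs(mean)  # crude heteroscedastic variance
        def pdf(v):
            return np.exp(-(v - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        return mean, var, pdf

    mean, var, pdf = predict_with_pdf(np.array([42.0, 1.5, -0.3]))
    density_at_mean = pdf(mean)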
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571) 272-2837. The examiner can normally be reached on weekdays, 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached on (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 January 14, 2026