Prosecution Insights
Last updated: April 19, 2026
Application No. 18/184,889

Whole-Body Anatomical Digital Twin from Partial Medical Images

Non-Final Office Action (§102, §103)

Filed: Mar 16, 2023
Examiner: BEG, SAMAH A
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Siemens Healthcare
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (249 granted / 317 resolved), +16.5% vs Tech Center average (above average)
Interview Lift: +29.9% higher allow rate among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 4m average prosecution; 16 applications currently pending
Career History: 333 total applications across all art units
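For reference, the headline statistics above can be reproduced from the raw counts shown. A minimal sketch in Python; the Tech Center average is inferred from the displayed delta, and the interview split rates are hypothetical placeholders (only the resulting lift is reported on this page):

```python
# Minimal sketch reproducing the examiner statistics above from raw counts.
# tc_avg is implied by the displayed "+16.5% vs TC avg"; the interview split
# values are hypothetical -- only the ~+30% lift is shown on the page.
granted, resolved = 249, 317
allow_rate = granted / resolved            # 0.785 -> displayed as 78%

tc_avg = allow_rate - 0.165                # implied TC 2600 average, ~62%

# "Interview lift" compares allow rates among resolved cases with and
# without an examiner interview.
with_iv, without_iv = 0.92, 0.62           # hypothetical split
lift = with_iv - without_iv                # -> +0.30, cf. "+29.9%" above

print(f"allow rate {allow_rate:.1%}, implied TC avg {tc_avg:.1%}, lift {lift:+.1%}")
```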

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)

Tech Center averages are estimates; rates are based on career data from 317 resolved cases.

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Group I, including claims 1-15 and 20-21, in the reply filed on September 8, 2025 is acknowledged. Claims 16-19 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on September 8, 2025.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5, 7 and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “OSSO: Obtaining Skeletal Shape from Outside” (hereinafter “Keller”; published 2022).

Regarding claim 1, Keller discloses a method for generating a whole-body representation of a patient, the method comprising (Keller, Fig. 3, Sections 3-4, p.20495-20498): acquiring at least one medical image representing only part of a patient (Keller, Fig. 3, Sections 3-4, p.20495-20498; DXA image pairs representing soft tissue and skeletal information are acquired and used as input, the soft tissue image being used to obtain a skin mask corresponding to body surface data, which is subsequently used as input to the learned regressor for skeleton inference); and generating a representation of a whole body of the patient from the at least one medical image representing only part of the patient, the representation of the whole body representing interior and exterior anatomy of the patient, the representation generated, at least in part, by a machine-learned model (Keller, Figs. 3-5, Section 4-4.3, p.20497-20498; “at test time, given an arbitrary STAR body shape in an arbitrary pose, we predict a skeleton mesh (Sec. 4.2) and then repose it to match the input skin pose (Sec. 4.3). See Fig. 3”).

Regarding claim 2, claim 1 is incorporated, and Keller further discloses wherein acquiring comprises acquiring just the at least one medical image representing only part of the patient and wherein generating comprises generating from just the at least one medical image (Keller, Figs. 3-5, Section 4-4.3, p.20497-20498; “at test time, given an arbitrary STAR body shape in an arbitrary pose, we predict a skeleton mesh (Sec. 4.2) and then repose it to match the input skin pose (Sec. 4.3). See Fig. 3”).

Regarding claim 3, claim 1 is incorporated, and Keller further discloses wherein generating comprises generating where a part of the representation of the whole body of the patient is not represented in any information used to generate the representation of the whole body, the at least one medical image not representing the part of the representation of the whole body (Keller, Figs. 3-5, Section 4-4.3, p.20497-20498; “at test time, given an arbitrary STAR body shape in an arbitrary pose, we predict a skeleton mesh (Sec. 4.2) and then repose it to match the input skin pose (Sec. 4.3). See Fig. 3”; the skeleton is not represented in the input body shape image).

Regarding claim 5, claim 1 is incorporated, and Keller further discloses wherein generating comprises segmenting anatomy of the part of the patient represented in the at least one medical image, the segmented anatomy used in optimization with the machine-learned model, the machine-learned model outputting information used to form part of the representation of the whole body not represented in the input (Keller, Sections 3-4, p.20495-20498; “A key step is to estimate the 3D body shape of a subject from their 2D DXA segmentation MS”, wherein the skeleton shape is subsequently predicted from the input body surface shape determined from the segmentation).

Regarding claim 7, claim 1 is incorporated, and Keller further teaches wherein generating comprises generating with the machine-learned model comprising a per-organ group implicit generative shape model (Keller, Conclusion, p.20499; “We use STAR [23] to represent the skin surface, and use a novel method to learn a parametric shape model of the anatomical skeleton using thousands of DXA scans. We learn a mapping from the external body shape to the skeleton and can repose the skeleton inside the body subject to various constraints”).

Regarding claim 14, claim 1 is incorporated, and Keller further discloses wherein generating comprises generating by the machine-learned model outputting the representation of the whole body as a single output (Keller, Figs. 3-5, Section 4-4.3, p.20497-20498; “at test time, given an arbitrary STAR body shape in an arbitrary pose, we predict a skeleton mesh (Sec. 4.2) and then repose it to match the input skin pose (Sec. 4.3). See Fig. 3”).

Regarding claim 15, claim 1 is incorporated, and Keller further discloses wherein generating comprises reconstructing skin as the exterior anatomy of the whole body and enforcing consistency with the skin for generating of the interior anatomy not represented in the at least one medical image (Keller, Figs. 3-5, Section 4-4.3, p.20497-20498; “at test time, given an arbitrary STAR body shape in an arbitrary pose, we predict a skeleton mesh (Sec. 4.2) and then repose it to match the input skin pose (Sec. 4.3). See Fig. 3”).

Claims 1, 8-9 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US PG PUB. 2018/0228460 A1 (hereinafter “Singh”).

Regarding claim 1, Singh discloses a method for generating a whole-body representation of a patient, the method comprising (Singh, Fig. 8, ¶0059-0088): acquiring at least one medical image representing only part of a patient (Singh, ¶0064; “an outer body surface test image 102 of a subject (patient) is input to the first encoder 201 e”); and generating a representation of a whole body of the patient from the at least one medical image representing only part of the patient, the representation of the whole body representing interior and exterior anatomy of the patient, the representation generated, at least in part, by a machine-learned model (Singh, Figs. 8-9C, ¶0059-0088; “FIG. 9C is a predicted volumetric CT image generated by the prediction block 119 during the test phase. A 2.5D or 3D depth image is taken to collect body surface data 102 from the patient. The body surface data 102 are input to encoder 201 e. Based on the body surface data 102, the prediction block 119 predicted the surface of the lungs as shown in FIG. 9C.”).

Regarding claim 8, claim 1 is incorporated, and Singh further discloses wherein generating comprises generating with the machine-learned model comprising an autodecoder (Singh, Figs. 7-8, ¶0067; “At step 816, the second decoder 202 d of the second autoencoder 202 decodes the internal organ surface latent variables or principal components 104′ to obtain internal organ surface data 104.”).

Regarding claim 9, claim 1 is incorporated, and Singh further discloses wherein generating comprises optimizing a latent vector by sampling in three-dimensional space in a trained manifold (Singh, ¶0042; “After sufficient training samples, the hidden layer Z of autoencoder 202 provides the values of the latent variables or principal components 104′ representing the organ surface, and the decoder 202 d of the second autoencoder is configured to predict the organ surface 104 based on the values of the latent variables or principal components 104′. Note that the manifold of the organ generally differs from the manifold of the body surface.”).

Regarding claim 20, Singh discloses a medical imaging system (Singh, Figs. 3A-3B, 5, and 8) comprising: a medical imager configured to scan only part of a patient (Singh, Fig. 3A, 3D camera, ¶0064; “an outer body surface test image 102 of a subject (patient) is input to the first encoder 201 e”); an image processor configured to form a whole-body avatar from the scan of only part of the patient, the whole-body avatar formed by a first machine-learned implicit generative shape model optimization of a latent vector to segmentations from the scan (Singh, Figs. 5-9C, ¶0039-0040, 0059-0088; “FIG. 9C is a predicted volumetric CT image generated by the prediction block 119 during the test phase. A 2.5D or 3D depth image is taken to collect body surface data 102 from the patient. The body surface data 102 are input to encoder 201 e. Based on the body surface data 102, the prediction block 119 predicted the surface of the lungs as shown in FIG. 9C”, wherein “The result of the prediction is a set 104′ of principal coordinates or latent variables representing the surface of the internal organ 104” and wherein “embodiments perform regression on a manifold in which every point represents a valid shape in a metric space, thus enforcing the shape constraints.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Keller, as applied to claim 1 above, in view of US PG PUB. 2021/0358595 A1 (hereinafter “Tamersoy”).

Regarding claim 12, claim 1 is incorporated, and Keller does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Tamersoy does as follows. Tamersoy teaches wherein generating comprises generating a shape of a same organ multiple times as part of recursive operation and forming a representation for the same organ from the shapes of the multiple times based on recursion depth of the recursive operation (Tamersoy, ¶0066; “an averaging may be applied such that where medical imaging data sets having similar landmarks representing the same point on the predefined pose which are not aligned substantially perfectly, then the landmark positions may be averaged.”). Tamersoy is considered analogous art because it pertains to generating a body representation of a patient based on medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Keller to include averaging multiple positions of similar landmarks representing a same point in the predefined pose, as taught by Tamersoy, in order to reduce the error between the pose and the detected landmarks (Tamersoy, ¶0066).

Regarding claim 13, claim 1 is incorporated, and Keller does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Tamersoy does as follows. Tamersoy teaches wherein generating comprises generating signed distance functions for the interior anatomy by the machine-learned model (Tamersoy, ¶0070-0073; “the machine learning representation uses the sampled points and randomized body representation to determine a signed distance and refines the body representation based on the signed distance output by the machine learning system. If a signed distance calculated based on the sampled point and medical imaging is substantially similar to a signed distance determined by the machine learning system, then the current body representation input to the machine learning algorithm is selected. Else, the body representation is adjusted, and the process is repeated until the output of the machine learning system is substantially similar to the determined signed distance. As such, this process may iterate until the best body representation is selected”). Tamersoy is considered analogous art because it pertains to generating a body representation of a patient based on medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Keller to include determining signed distances of sampled points relative to the body surface, as taught by Tamersoy, in order to obtain the most accurate body representation (Tamersoy, ¶0072).

Allowable Subject Matter

Claims 4, 6, 10-11 and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the closest prior art of record, either alone or in combination, does not expressly teach or render obvious the additional limitations of each of the abovementioned dependent claims.

Regarding claim 4, none of Keller, Tamersoy, or Singh expressly teaches or suggests “acquiring the at least one medical image as representing one or more first organs, a part of one or more second organs, and skin of the patient, the at least one medical image not representing one or more third organs, and wherein generating the representation of the whole body comprises generating the representation as including the skin, the first organs, the second organs, and the third organs.” At most, Singh discloses generating a whole-body representation of the skin and only one organ type.

Regarding claim 6, none of Keller, Tamersoy, or Singh expressly teaches or suggests “wherein segmenting comprises segmenting the anatomy as a first organ, part of a second organ, and skin, and wherein the machine-learned model outputs the information for the entire second organ in response to input of the first organ, part of the second organ, and the skin.” At most, Singh teaches segmenting a body surface from a CT image and outputting a predicted shape of the first organ (lung) in response to input of the lung. Similarly, Keller teaches obtaining a body surface from a DXA image to infer skeleton shape, but does not teach the use of a first organ, part of a second organ, and the skin in determining information for the entire second organ, as claimed.

Regarding claim 10, none of the cited references expressly teaches or suggests the limitations of “wherein generating comprises estimating shapes of surrounding organs to first shapes represented in the at least one medical image, the estimating occurring recursively using the machine-learned model for a first organ group and a second machine-learned model for another organ group, the shapes of the surrounding organs and the first shapes included in the representation of the whole body of the patient.” Claim 11 is objected to based on its dependency from claim 10.

Regarding claim 21, none of the cited references expressly teaches or suggests “wherein the whole-body avatar is formed by multiple machine-learned implicit generative shape models including the first machine-learned implicit generative shape model, different ones of the multiple machine-learned implicit generative shape models forming different organs of the whole-body avatar in a hierarchy of related organs.” At most, both Keller and Singh teach generating a whole-body representation including just one organ of interest based on a body surface, but do not expressly teach or suggest a hierarchy of related organs being formed by multiple implicit generative shape models, as claimed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The additionally cited art pertains generally to generating whole-body representations from medical images, generating internal organ surfaces from medical data, and/or generative shape modeling.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMAH A BEG, whose telephone number is (571) 270-7912. The examiner can normally be reached M-F, 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMAH A BEG/
Primary Examiner, Art Unit 2676
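The Singh rejection above turns on a two-autoencoder architecture: a body-surface image is encoded to latent variables, a "prediction block" regresses internal-organ latent variables from those, and a second decoder reconstructs the organ surface. For readers mapping the claim language to the cited art, here is a minimal PyTorch-style sketch of that kind of pipeline; the class names, layer sizes, and point counts are illustrative assumptions, not Singh's (or the applicant's) actual implementation.

```python
# Hypothetical sketch of the latent-space organ-surface prediction the Office
# Action attributes to Singh (US 2018/0228460 A1). All sizes and names are
# illustrative assumptions only.
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    """Encode a flattened point set to a low-dimensional latent and back."""
    def __init__(self, n_points: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(3 * n_points, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 3 * n_points))

surface_ae = ShapeAutoencoder(n_points=1024, latent_dim=32)  # body surface (cf. encoder "201 e")
organ_ae = ShapeAutoencoder(n_points=512, latent_dim=16)     # organ surface (cf. decoder "202 d")

# The "prediction block" regresses organ latents from surface latents, i.e.
# regression on the learned shape manifold rather than on raw voxels.
predictor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def predict_organ_surface(surface_points: torch.Tensor) -> torch.Tensor:
    """Partial-body surface scan in, predicted internal-organ surface out."""
    z_surface = surface_ae.encoder(surface_points.flatten(1))
    z_organ = predictor(z_surface)            # latent variables (cf. "104'")
    return organ_ae.decoder(z_organ)          # decoded organ surface (cf. "104")

depth_scan = torch.randn(1, 1024, 3)          # stand-in for a 2.5D/3D depth image
organ_surface = predict_organ_surface(depth_scan)  # shape: (1, 3 * 512)
```

Regressing in latent space keeps every predicted shape on the learned manifold, which is the property the claim 9 citation ("optimizing a latent vector ... in a trained manifold") is being mapped onto.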

Prosecution Timeline

Mar 16, 2023: Application Filed
Sep 30, 2025: Non-Final Rejection (§102, §103)
Nov 19, 2025: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599284: ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597142: SYSTEMS AND METHODS FOR PREPROCESSING IMMUNOCYTOCHEMISTRY IMAGES FOR MACHINE LEARNING IMAGE-TO-IMAGE TRANSLATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12573015: METHOD FOR CAPTURING IMAGE MATERIAL FOR MONITORING IMAGE-ANALYSING SYSTEMS, DEVICE AND VEHICLE FOR USE IN THE METHOD AND COMPUTER PROGRAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561806: COMPUTE SYSTEM WITH EXPLAINABLE AI FOR SKIN LESIONS ANALYSIS MECHANISM AND METHOD OF OPERATION THEREOF (granted Feb 24, 2026; 2y 5m to grant)
Patent 12536618: ARTIFICIAL-INTELLIGENCE-BASED IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (granted Jan 27, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner; the list reflects the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 99% (+29.9% lift)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 317 resolved cases by this examiner. Grant probability derived from career allow rate.
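The page does not state how the with-interview projection is computed; one reading consistent with the displayed figures is the career allow rate plus the observed interview lift, capped at 99%. A minimal sketch under that assumption:

```python
# A minimal sketch, assuming the with-interview projection adds the observed
# lift to the career allow rate and caps the result; the tool's actual model
# is not disclosed on this page.
base = 249 / 317                          # career allow rate, ~78.5% (shown as 78%)
lift = 0.299                              # "+29.9%" interview lift
with_interview = min(base + lift, 0.99)   # 1.084 capped -> 0.99, i.e. "99%"
print(f"base {base:.1%}, with interview {with_interview:.0%}")
```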
