Prosecution Insights
Last updated: April 18, 2026
Application No. 17/932,311

MEDICAL IMAGING DATA NORMALIZATION FOR ANIMAL STUDIES

Status: Non-Final OA (§103)
Filed: Sep 15, 2022
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Siemens Healthineers AG
OA Round: 2 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% — above average (804 granted / 984 resolved; +19.7% vs TC avg)
Interview Lift: +10.9% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline; 31 currently pending)
Total Applications: 1,015 across all art units (career history)
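The page does not say how the 93% with-interview figure is derived. A plausible reading (an assumption, not stated on the page) is that the +10.9% interview lift is added in percentage points to the career allow rate, which itself is 804/984 rounded:

```python
# Sanity check of the examiner stats shown above, using the page's own figures.
granted, resolved = 804, 984
allow_rate = granted / resolved                  # career allow rate
print(f"{allow_rate:.1%}")                       # 81.7%, displayed rounded as 82%

interview_lift = 0.109                           # +10.9 percentage points
with_interview = allow_rate + interview_lift
print(f"{with_interview:.0%}")                   # 93%, matching "With Interview"
```

The 984 resolved plus 31 pending also matches the 1,015 total applications, so the page's counts are internally consistent.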

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Based on career data from 984 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of claims: Claims 1-20 are examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/7/2024 was filed and considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. An updated search found that Valadez et al. (US 2015/0023575) in view of DHARMAKUMAR et al. (US 2022/0117508) addresses the Remarks and the Claims. Please see the Office Action below for details.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Valadez et al. (US 2015/0023575) in view of DHARMAKUMAR et al. (US 2022/0117508).
Claim 1: Valadez et al. (US 2015/0023575) teaches the following subject matter: A system (figure 1 and 0003) for medical imaging data normalization for animal studies, the system comprising:

- a medical imaging device configured to acquire image data (figure 1 and 0005 detail image data from a medical imaging device such as a CAT scanner, MRI, etc., with image data such as 2D and voxel/3D);
- a memory configured to store a standard model and a machine trained network for segmentation of image data (0027-0029 detail information storage for imaging data and an atlas (standard model); 0033-0035 detail a storage system for collection of digital data and advanced processing and operation; figure 2 and 0044 further detail the atlas as a model/standard model that is aligned and then segmented; figure 5 and 0054 detail the atlas (standard model) and learning classification (machine trained network)); and
- a processor configured to register the image data to the standard model (figure 5, parts 506-510, teaches registration to a region of interest (ROI) of the atlas) and segment the registered image data using the machine trained network (figure 5, parts 512-514, details the next process of segmenting, where paragraphs 0053-0054 further detail segmentation with local classification (or learning/machine learning)), the processor further configured to warp the segmented registered image data to the image data and generate an output image for the subject based on the warped segmented registered image data (FIG. 5, at 512, image segmentation unit 107 inversely maps the registered or warped first ROI to the space of the subject's image data to generate a segmentation mask of the anatomical region).
Valadez et al. teaches all the subject matter above, but not the following: for a non-human subject.

DHARMAKUMAR et al. (US 2022/0117508) teaches: for a non-human subject (0016-0018 teach medical imaging with PET and MRI of a subject of interest; 0200 details PET scanning of a dog, where 0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103).

Valadez et al. and DHARMAKUMAR et al. are both in the field of image analysis, especially normalizing/standardizing medical imaging (DHARMAKUMAR et al. paragraph 0262), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Valadez et al. by DHARMAKUMAR et al. to include a non-human subject, because expanding this approach to pixel-wise assessment of myocardial oxygenation would open the door to testing novel physiological hypotheses in coronary artery disease, could provide new insights that improve our understanding of how angina develops in patients with microvascular disease, and would allow evaluating therapies to alleviate microvascular impairments in oxygenation. Studies of this nature are likely to demand more advanced segmentation and registration approaches so that pixel-wise analysis can be accurately performed, as disclosed by DHARMAKUMAR et al. in 0236.

Claim 2: Valadez et al. teaches: The system of claim 1, wherein the medical imaging device comprises an MRI device, a CT device, a cone-beam CT, an X-ray device, or a PET device (figure 1 and 0032 detail a radiology scanner such as a magnetic resonance (MR) scanner, PET/MR, X-ray, or a CT scanner).

Claim 3: DHARMAKUMAR et al. teaches: The system of claim 1, wherein the non-human subject comprises a canine (0016-0018 detail medical imaging with PET and MRI of a subject of interest; 0200 details PET scanning of a dog, where 0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103).
Claim 4: DHARMAKUMAR et al. teaches: The system of claim 3, wherein the standard model comprises an average anatomical model for a plurality of different breeds of canines (0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103).

Claim 5: Valadez et al. teaches: The system of claim 1, wherein the processor is configured to register the image data by deforming a scale and a location of the image data to match one or more landmarks shared by the image data and standard model (above details deform/warp, and figure 5 and 0046 detail regions corresponding (matching) with landmarks).

Claim 6: Valadez et al. teaches: The system of claim 1, wherein the processor is configured to segment the image data into one or more regions of interest, wherein each region of interest is registered to a respective standard model of a respective region (figure 5 and 0051-0053 detail, at 510, image segmentation unit 107 non-rigidly registering a first region of interest (ROI)).

Claim 8: DHARMAKUMAR et al. teaches: The system of claim 1, further comprising: a display configured to display the output image for the non-human subject (0077 details a display map with confidence of BOLD-CMR images, each set of MR data over time for parts of the body such as the heart).
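For readers following the technology rather than the claim mapping, the register → segment → warp-back pipeline recited in claim 1 can be illustrated with a minimal sketch. Everything below is a hypothetical toy illustration (a normalization stand-in for registration, a threshold stand-in for the machine trained network, an identity stand-in for the inverse warp); none of it is taken from Valadez or DHARMAKUMAR.

```python
import numpy as np

def register(image, atlas):
    """Stand-in registration: align intensity statistics of image to atlas."""
    return (image - image.mean()) * (atlas.std() / image.std()) + atlas.mean()

def segment(registered):
    """Stand-in 'machine trained network': threshold in atlas space."""
    return (registered > registered.mean()).astype(np.uint8)

def warp_back(mask, original_shape):
    """Stand-in inverse warp: in practice the inverse deformation is applied."""
    return mask.reshape(original_shape)

# Toy image and atlas (standard model) in place of acquired scan data.
image = np.random.default_rng(0).random((8, 8))
atlas = np.random.default_rng(1).random((8, 8))

# Claim 1 flow: register to the model, segment, warp back to subject space.
mask = warp_back(segment(register(image, atlas)), image.shape)
print(mask.shape)  # (8, 8) — a binary segmentation mask in subject space
```

The point of the structure is only that segmentation happens in the normalized (atlas) space and the result is mapped back to the subject's own space, which is what the claim recites.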
Claim 9: Valadez et al. (US 2015/0023575) teaches the following subject matter: A computer-implemented method (0028) comprising:

- acquiring image data (figure 1 and 0005 detail image data from a medical imaging device such as a CAT scanner, MRI, etc., with image data such as 2D and voxel);
- registering the image data to a standardized model (figure 5, parts 506-510, teaches registration to a region of interest (ROI) of the atlas (standard model));
- identifying one or more features in the registered image data using a machine learned model (0054-0056 detail local classification (or learning/machine learning) with registration to local regions); and
- providing the one or more features to an operator (0036 details a user interface that allows a radiologist or any other skilled user (e.g., physician, technician, operator, scientist, etc.) to manipulate the image data).

Valadez et al. teaches all the subject matter above, but not the following: for a non-human subject.

DHARMAKUMAR et al. (US 2022/0117508) teaches: for a non-human subject (0016-0018 detail medical imaging with PET and MRI of a subject of interest; 0200 details PET scanning of a dog, where 0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103).

Valadez et al. and DHARMAKUMAR et al. are both in the field of image analysis, especially normalizing/standardizing medical imaging (DHARMAKUMAR et al. paragraph 0262), such that the combined outcome is predictable.
Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Valadez et al. by DHARMAKUMAR et al. to include a non-human subject, because expanding this approach to pixel-wise assessment of myocardial oxygenation would open the door to testing novel physiological hypotheses in coronary artery disease, could provide new insights that improve our understanding of how angina develops in patients with microvascular disease, and would allow evaluating therapies to alleviate microvascular impairments in oxygenation. Studies of this nature are likely to demand more advanced segmentation and registration approaches so that pixel-wise analysis can be accurately performed, as disclosed by DHARMAKUMAR et al. in 0236.

Claim 10: DHARMAKUMAR et al. teaches: The computer implemented method of claim 9, wherein the non-human subject is a dog (0018 details medical imaging with PET and MRI of a subject of interest; 0200 details PET scanning of a dog, where 0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103).

Claim 11: DHARMAKUMAR et al. teaches: The computer implemented method of claim 10, wherein the standardized model is generated from a plurality of breeds of dogs including multiple size variations (0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103; 0103 details reference images for canines, where different canines are different sizes).

Claim 12: Valadez et al. teaches: The computer implemented method of claim 9, wherein the image data is acquired by an MRI device, a CT device, a cone-beam CT, an X-ray device, or a PET device (figure 1 and 0032 detail a radiology scanner such as a magnetic resonance (MR) scanner, PET/MR, X-ray, or a CT scanner).

Claim 13: Valadez et al. teaches: The computer implemented method of claim 9, wherein registering the image data comprises deformable registration (above details deform/warp, and figure 5 and 0046 detail regions corresponding (matching) with landmarks).
Claim 14: DHARMAKUMAR et al. teaches: The computer implemented method of claim 9, wherein the one or more features comprise a location and classification of an organ of the non-human subject (above details canines (non-human), where 0029 details an organ such as the heart as a whole, a portion, and a section).

Claim 17: Valadez et al. (US 2015/0023575) teaches the following subject matter: A method (figures 2 and 5 detail a flowchart (method)) for generating training data (figure 8 and 0052; claim 8 teaches training a local predictor), the method comprising:

- acquiring image data (figure 1 and 0005 detail image data from a medical imaging device such as a CAT scanner, MRI, etc., with image data such as 2D and voxel);
- deforming the image data to fit a model (FIG. 5, at 512, image segmentation unit 107 inversely maps the registered or warped first ROI to the space of the subject's image data to generate a segmentation mask of the anatomical region);
- training a network to generate one or more predictions when input new image data, wherein the training uses the image data, the deformed image data, and respective annotations as training data (0031 details a local predictor that learns and refines registration to shape and generates segmentation, where patient-specific and local data serve as training data; figures 10-11 and 0055-0057); and
- storing the network for use in feature detection for a medical imaging procedure of a subject (0027-0029 detail information storage for imaging data and an atlas (standard model); 0033-0035 detail a storage system for collection of digital data and advanced processing and operation; figure 2 and 0044 further detail the atlas as a model/standard model, where 0031 further details data that is patient-specific (of a subject)).
Valadez et al. teaches all the subject matter above, but not the following: for a non-human subject; a second non-human subject.

DHARMAKUMAR et al. (US 2022/0117508) teaches: for a non-human subject (0016-0018 detail medical imaging with PET and MRI of a subject of interest; 0200 details PET scanning of a dog, where 0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103); of a second non-human subject (0077 details an animal model (second non-human subject/model/reference/template for registering with) of the same parts with fMRI; 0225 teaches a healthy dog model (6 segments)).

Valadez et al. and DHARMAKUMAR et al. are both in the field of image analysis, especially normalizing/standardizing medical imaging (DHARMAKUMAR et al. paragraph 0262), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Valadez et al. by DHARMAKUMAR et al. to include a non-human subject, because expanding this approach to pixel-wise assessment of myocardial oxygenation would open the door to testing novel physiological hypotheses in coronary artery disease, could provide new insights that improve our understanding of how angina develops in patients with microvascular disease, and would allow evaluating therapies to alleviate microvascular impairments in oxygenation. Studies of this nature are likely to demand more advanced segmentation and registration approaches so that pixel-wise analysis can be accurately performed, as disclosed by DHARMAKUMAR et al. in 0236.

Claim 18: DHARMAKUMAR et al. teaches: The method of claim 17, wherein the first non-human subject and the second non-human subject are different breeds of a same species (0062 details further species of dogs such as the wolf, as well as other canines in paragraph 0103; 0103 details reference images for canines).
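The training-data flow recited in claim 17 (acquire image data, deform it to fit a model, then train on the original data, the deformed data, and their annotations) can be sketched in a few lines. This is a hypothetical toy illustration: the flip-as-deformation and threshold-as-annotation stand-ins are mine, not from either reference or the application.

```python
import numpy as np

rng = np.random.default_rng(42)
images = rng.random((4, 8, 8))                 # acquired image data (4 toy scans)
deformed = np.flip(images, axis=2)             # stand-in for "deform to fit a model"
annotations = (images > 0.5).astype(np.uint8)  # stand-in per-pixel labels

# Per the claim, the training set combines the original image data, the
# deformed image data, and the respective annotations for each.
X = np.concatenate([images, deformed])
y = np.concatenate([annotations, np.flip(annotations, axis=2)])
print(X.shape, y.shape)  # (8, 8, 8) (8, 8, 8)
```

The deformation acts as a model-guided augmentation: each acquired image contributes two training pairs, one in its native geometry and one normalized toward the standard model.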
Claim 19: Valadez et al. teaches: The method of claim 17, wherein deforming the image data comprises scaling or warping the image data so that one or more shared landmarks in the image data and model are aligned (above details deform/warp, and figure 5 and 0046 detail regions corresponding (matching) with landmarks).

Claim 20: DHARMAKUMAR et al. teaches: The method of claim 17, wherein the image data is acquired using an MRI system (above and 0103 detail MRI imaging of a canine) and the medical imaging procedure is for radiation treatment of the non-human subject (0077 details radiation treatment for a human subject/patient, as well as subjects mentioned above such as dogs/canines).

Claims 7 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Valadez et al. (US 2015/0023575) in view of DHARMAKUMAR et al. (US 2022/0117508) as applied to claims 1 and 9, respectively, above, and further in view of Nguyen et al. (US 2020/0075148).

Claim 7: Valadez et al. and DHARMAKUMAR et al. teach all the subject matter above, but not the following: The system of claim 1, wherein the output image of the machine trained network is used to adjust a dose of radiation for radiation treatment.

Nguyen et al. (US 2020/0075148) teaches the following subject matter: The system of claim 1, wherein the output image of the machine trained network is used to adjust a dose of radiation for radiation treatment (0203 details use of a deep neural network for treatment planning by means of dose generation and a treatment planner).

Valadez et al., DHARMAKUMAR et al., and Nguyen et al. are all in the field of image analysis, especially normalizing an image/volume of image data (Nguyen et al. paragraphs 0118-0120 teach target volume and dimension normalized with a neural network in relation to dose) for studies or analysis, such that the combined outcome is predictable.
Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Valadez et al. and DHARMAKUMAR et al. by Nguyen et al. such that a physician can view the dose distribution pushed toward the desired various critical structures in real time, as disclosed by Nguyen et al. in 0203.

Claim 15: Valadez et al. and DHARMAKUMAR et al. teach all the subject matter above, but not the following: The computer implemented method of claim 9, wherein the one or more features are used to generate a plan for radiation treatment.

Nguyen et al. (US 2020/0075148) teaches the following subject matter: The computer implemented method of claim 9, wherein the one or more features are used to generate a plan for radiation treatment (0203 details use of a system using a deep neural network for treatment planning by means of dose generation and a treatment planner).

Valadez et al., DHARMAKUMAR et al., and Nguyen et al. are all in the field of image analysis, especially normalizing an image/volume of image data (Nguyen et al. paragraphs 0118-0120 teach target volume and dimension normalized with a neural network in relation to dose) for studies or analysis, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Valadez et al. and DHARMAKUMAR et al. by Nguyen et al. such that a physician can view the dose distribution pushed toward the desired various critical structures in real time, as disclosed by Nguyen et al. in 0203.

Claim 16: Nguyen et al. teaches: The computer implemented method of claim 15, further comprising: implementing the plan (0203 details use of the system for treatment by means of dose generation and a treatment planner).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
LIU et al. (US 2021/0366137) teaches DEVICE AND METHOD FOR ALIGNMENT OF MULTI-MODAL CLINICAL IMAGES USING JOINT SYNTHESIS, SEGMENTATION, AND REGISTRATION. Figure 3 and 0056 detail that the generator model generates a synthesized image from the moving image conditioned on the fixed image; the register model estimates the spatial transformation to align the synthesized image to the fixed image; and the segmentor model estimates segmentation maps of the moving image, the fixed image, and the synthesized image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TSUNG YIN TSAI/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Sep 15, 2022: Application Filed
Dec 08, 2025: Non-Final Rejection (§103)
Mar 03, 2026: Response Filed
Apr 02, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118: IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597237: INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579797: VIDEO PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573029: IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567235: Visual Explanation of Classification (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
