Prosecution Insights
Last updated: April 19, 2026
Application No. 18/699,133

IMAGE DIAGNOSIS APPARATUS, METHOD FOR OPERATING IMAGE DIAGNOSIS APPARATUS, AND PROGRAM

Status: Non-Final OA (§102)
Filed: Apr 05, 2024
Examiner: HUYNH, VAN D
Art Unit: 2665
Tech Center: 2600 (Communications)
Assignee: Sapporo Medical University
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 630 granted / 721 resolved; +25.4% vs TC avg)
Interview Lift: +13.4% (moderate), comparing resolved cases with vs. without an examiner interview
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 746 total applications across all art units
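On the usual definition, the interview-lift figure is simply the gap in allow rates between resolved cases that had an examiner interview and those that did not. Here is a minimal Python sketch of that computation; only the career totals (630 granted / 721 resolved) come from this report, and the with/without subgroup counts are hypothetical placeholders chosen to be consistent with those totals:

```python
# Sketch of an interview-lift computation. Only the career totals
# (630 granted / 721 resolved) come from the report; the subgroup
# split below is a hypothetical placeholder for illustration.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that ended in a grant."""
    return granted / resolved

career = allow_rate(630, 721)                 # ~0.874 -> shown as 87%

with_interview = allow_rate(195, 201)         # hypothetical: ~0.970
without_interview = allow_rate(435, 520)      # hypothetical: ~0.837

lift = with_interview - without_interview     # ~0.134

print(f"career allow rate: {career:.1%}")     # 87.4%
print(f"interview lift:    {lift:+.1%}")      # +13.4%
```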

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 30.9% (-9.1% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

TC averages are estimates. Based on career data from 721 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 7, and 10-13 are objected to because of the following informalities:

Claim 1 recites “a tomographic image” in lines 4 and 11; Examiner suggests replacing “a tomographic image” with --the tomographic image--.
Claim 7 recites “a tomographic image” in lines 4 and 12; Examiner suggests replacing “a tomographic image” with --the tomographic image--.
Claim 10 recites “a tomographic image” in lines 5 and 12; Examiner suggests replacing “a tomographic image” with --the tomographic image--.
Claim 11 recites “a tomographic image” in lines 5 and 13; Examiner suggests replacing “a tomographic image” with --the tomographic image--.
Claim 12 recites “a tomographic image” in lines 5 and 12; Examiner suggests replacing “a tomographic image” with --the tomographic image--.
Claim 13 recites “a tomographic image” in lines 5 and 13; Examiner suggests replacing “a tomographic image” with --the tomographic image--.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-4, and 6-13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hirsch et al., “Segmentation of MRI head anatomy using deep volumetric networks and multiple spatial priors”.

Regarding claim 1, Hirsch discloses an image diagnosis apparatus, comprising:

an acquirer to acquire a tomographic image including a region to be diagnosed of a subject (Section 1 Introduction, first paragraph, and Section 2.8 Training and Testing Data, first paragraph: Clinical and basic research require segmentation of magnetic resonance images (MRIs) of human heads, including abnormal anatomies such as tumors or lesions; The training data consist of T1-weighted MRI scans from 4 healthy subjects and 43 individuals who suffered a stroke. The strokes occurred at least 6 months prior to the MRI scan, at which point the lesion is largely replaced by CSF. MRI scans from normal subjects were obtained on a 3T Siemens Trio scanner (Erlangen, Germany). The stroke scans were collected at Georgetown University and the University of North Carolina, Chapel Hill, also on a 3T Siemens Trio scanner. The trained network was also applied to MRI images of 47 patients with disorders of consciousness collected at the Pitié-Salpêtrière University Hospital in Paris, on a 3T General Electric Signa system (Milwaukee, Wisconsin)); and

a drawer to draw, based on a tomographic image acquired by the acquirer, a labeled image including the region to be diagnosed, the labeled image being partitioned according to classes each of which indicates one of a lesion area, a cavity area, a soft tissue area, a bone area, and a background area, the labeled image having a unique pixel value with respect to each class (Section 2.8 Training and Testing Data, second paragraph, and Figure 5: Specifically, the 43 stroke heads are first segmented automatically and then manually corrected for errors in particular around the stroke lesions and boundaries between CSF, gray matter, and skull, resulting in seven classes (background, air cavities, skin, bone, CSF, white matter, and gray matter); The manual segmentation is on the first column, followed by the T1-weighted MRI that is used as an input for the network; next are segmentations from (a), (b) the detail + context network and (c) SPM8 compared to the segmentation from the Multiprior. Each color represents one of the seven tissue classes used for classification: black, background; brown, skin; yellow, bone/skull; green, air/sinus cavities; light blue, CSF; white, white matter; and gray, gray matter. Notice the large CSF-filled lesion in (a)),

wherein the drawer estimates, based on a model that is generated by machine learning and that, with respect to input of a pixel value of each pixel in a tomographic image, outputs a pixel value of each pixel in a labeled image, a pixel value of each pixel in a labeled image from a tomographic image acquired by the acquirer (Section 2.1 Detail CNN and Figure 1: Multiprior network structure. The detail network (black path) consists of a 3D CNN with eight layers. During training, this network takes as input a patch of 25³ voxels around a target patch of 9³ to be classified (green cube). The size of the convolutional kernels mapping between layers is indicated by numbers to the right. For instance, 3³ × 50 × 30 indicates a 3D convolution kernel of size 3³ transforming 30 features to 50 features. The “context” network (red path) is identical in structure to the “detail” network, except that it processes a downsampled version of a larger FOV of 57³ voxels during training. It includes an upsampling layer at the end to merge features at the same scale as the detail network. Prior probabilities for the target patch are extracted from a TPM and added as input to the final classification (blue arrow). The “classification” network (purple) takes the concatenated output of all three pathways as input and classifies the target patch with three fully connected layers and no additional spatial mixing (kernel of size 1³). After the entire image has been segmented, a 3D CRF processes the resulting output segmentation while taking the original input image into account (green arrow). Arrows indicate copying).

Regarding claim 3, the image diagnosis apparatus according to claim 1, Hirsch further discloses wherein the lesion area is a tumor area in a brain, the tomographic image is an image of a cross section of a brain of a subject, the cross section being obtained by slicing the brain in a transverse plane direction at a plurality of points, and the drawer draws a labeled image corresponding to each tomographic image acquired by the acquirer (Figures 3, 5-7, and 11-13).

Regarding claim 4, the image diagnosis apparatus according to claim 3, Hirsch further discloses wherein the tumor area is an area where a metastatic brain tumor has developed (Figure 5, last sentence).

Regarding claim 6, the image diagnosis apparatus according to claim 1, Hirsch further discloses wherein the image diagnosis apparatus further includes a trainer to generate the model by machine learning (Section 1 Introduction, last paragraph; Section 2.3 Classification Network; and Section 2.8 Training and Testing Data).

Regarding claim 7, Hirsch discloses an image diagnosis apparatus, comprising:

an acquirer to acquire a tomographic image including a region to be diagnosed of a subject (Section 1 Introduction, first paragraph, and Section 2.8 Training and Testing Data, first paragraph, quoted in the rejection of claim 1 above);

a drawer to draw, based on a tomographic image acquired by the acquirer, a labeled image including the region to be diagnosed, the labeled image being partitioned according to classes each of which indicates one of at least a lesion area, a normal tissue, and a background area, the labeled image having a unique pixel value with respect to each class (Section 2.8 Training and Testing Data, second paragraph, and Figure 5, quoted in the rejection of claim 1 above); and

a trainer to generate a model that is generated by machine learning and that, with respect to input of a pixel value of each pixel in a tomographic image, outputs a pixel value of each pixel in a labeled image,

wherein the drawer estimates, based on the model generated by the trainer, a pixel value of each pixel in a labeled image from a pixel value of each pixel in a tomographic image acquired by the acquirer (Section 2.1 Detail CNN and Figure 1, quoted in the rejection of claim 1 above), and

the trainer generates the model, using teacher data that include a pixel value of each pixel in a tomographic image as input data and a pixel value of each pixel in a labeled image as output data, the labeled image being generated based on the tomographic image and having a weight of each class adjusted based on a number of counted pixels of the class (Section 2.7 Cost Function and Section 2.8 Training and Testing Data, third paragraph: Training was set to reduce the generalized Dice loss between the predicted segmentation by the network and the ground-truth provided by the manual segmentations…The inner product · sums over all elements of the 3D volume. C is the total number of classes (C = 7 in this case); During network training, four heads were kept out for validation purposes, measuring generalization performance during training epochs and used to define the stopping point, i.e., the epoch with maximum Dice score on the validation set. This procedure was used for training all convolutional neural networks (CNNs) (Multiprior, DeepMedic, and U-Net variants)).

Regarding claim 8, the image diagnosis apparatus according to claim 6, Hirsch further discloses wherein the trainer generates the model, using teacher data generated based on a plurality of tomographic images that has cross sections obtained by slicing a brain of each of a plurality of subjects in a transverse plane direction at a plurality of points (Section 1 Introduction, last paragraph; Section 2.3 Classification Network; and Section 2.8 Training and Testing Data).

Regarding claim 9, the image diagnosis apparatus according to claim 1, Hirsch further discloses wherein the image diagnosis apparatus further includes an outputter to color-code a labeled image with respect to each class, the labeled image being drawn by the drawer (Figures 5-7 and 11-13).

Regarding claim 10, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Regarding claim 11, this claim recites substantially the same limitations as claim 7 above and is rejected for the same reasons.

Regarding claim 12, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Regarding claim 13, this claim recites substantially the same limitations as claim 7 above and is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Ferl et al., US 2023/0005140, discloses methods and systems for processing images to estimate whether at least part of a tumor is represented in the images.
Schmidt et al., US 2008/0292194, discloses a method and system for segmenting an object represented in one or more input images, each of the one or more input images comprising a plurality of pixels.
Wang et al., US 2022/0229140, discloses quantitatively mapping material intrinsic physical properties using signals collected in magnetic resonance imaging.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH, whose telephone number is (571) 270-1937. The examiner can normally be reached 8AM-6PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN D HUYNH/
Primary Examiner, Art Unit 2665
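For orientation, the Figure 1 caption quoted in the claim 1 rejection describes a dual-pathway patch CNN. The PyTorch sketch below is a loose illustration of that structure, not Hirsch's code: channel widths, the x3 downsampling factor, and layer details are placeholders, and the 3D CRF post-processing stage is omitted.

```python
# Loose sketch of a dual-pathway patch CNN in the spirit of the Figure 1
# caption above. All widths and factors are illustrative placeholders.
import torch
import torch.nn as nn

def conv_stack(in_ch: int, out_ch: int, n_layers: int) -> nn.Sequential:
    """Stack of unpadded 3x3x3 convolutions; each layer shrinks the patch by 2."""
    layers, ch = [], in_ch
    for _ in range(n_layers):
        layers += [nn.Conv3d(ch, out_ch, kernel_size=3), nn.ReLU(inplace=True)]
        ch = out_ch
    return nn.Sequential(*layers)

class MultiPriorSketch(nn.Module):
    def __init__(self, n_classes: int = 7, width: int = 30):
        super().__init__()
        self.detail = conv_stack(1, width, n_layers=8)   # 25^3 patch -> 9^3 features
        self.context = conv_stack(1, width, n_layers=8)  # downsampled 57^3 FOV (19^3 in)
        self.upsample = nn.Upsample(scale_factor=3)      # merge back at the detail scale
        # "Classification" pathway: 1^3 kernels act as fully connected layers,
        # i.e., no additional spatial mixing.
        self.classify = nn.Sequential(
            nn.Conv3d(2 * width + n_classes, 60, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv3d(60, 60, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv3d(60, n_classes, kernel_size=1),
        )

    def forward(self, detail_patch, context_patch, prior_probs):
        d = self.detail(detail_patch)                    # (B, width, 9, 9, 9)
        c = self.upsample(self.context(context_patch))   # (B, width, 9, 9, 9)
        x = torch.cat([d, c, prior_probs], dim=1)        # append TPM prior probabilities
        return self.classify(x)                          # per-voxel class scores

net = MultiPriorSketch()
scores = net(torch.randn(2, 1, 25, 25, 25),  # detail patch, 25^3
             torch.randn(2, 1, 19, 19, 19),  # context FOV, downsampled x3 from 57^3
             torch.rand(2, 7, 9, 9, 9))      # prior probabilities from a TPM
print(scores.shape)                          # torch.Size([2, 7, 9, 9, 9])
```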
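Likewise, the generalized Dice loss cited against claim 7 is, in its usual formulation, a class-weighted Dice score whose per-class weight shrinks with the squared voxel count of that class, which is exactly the pixel-count-based class weighting the rejection maps to the claim language. A minimal NumPy sketch, assuming that standard inverse-square-frequency formulation (it is an illustration of the technique, not Hirsch's code):

```python
import numpy as np

def generalized_dice_loss(pred: np.ndarray, target: np.ndarray,
                          eps: float = 1e-6) -> float:
    """Generalized Dice loss over a 3D volume.

    pred   -- predicted class probabilities, shape (C, D, H, W)
    target -- one-hot ground-truth labels,   shape (C, D, H, W)

    Each class c gets weight w_c = 1 / (its ground-truth voxel count)^2,
    so a small lesion counts as much as a large background region.
    """
    c = pred.shape[0]
    p = pred.reshape(c, -1)                       # flatten spatial dims
    r = target.reshape(c, -1)

    w = 1.0 / (r.sum(axis=1) ** 2 + eps)          # inverse-square class frequency
    intersect = (w * (p * r).sum(axis=1)).sum()   # weighted overlap
    union = (w * (p + r).sum(axis=1)).sum()       # weighted total mass
    return 1.0 - 2.0 * intersect / (union + eps)

# Toy example: C = 7 classes on a tiny 8^3 volume.
rng = np.random.default_rng(0)
labels = rng.integers(0, 7, size=(8, 8, 8))
target = np.eye(7)[labels].transpose(3, 0, 1, 2)                   # one-hot (7, 8, 8, 8)
logits = rng.normal(size=(7, 8, 8, 8))
pred = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)  # softmax over classes

print(f"generalized Dice loss: {generalized_dice_loss(pred, target):.3f}")
```

Under these weights, a 100-voxel lesion contributes to the loss on the same footing as a 100,000-voxel background, which is how the loss realizes the claimed "weight of each class adjusted based on a number of counted pixels of the class."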

Prosecution Timeline

Apr 05, 2024: Application Filed
Jan 21, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this examiner for similar technology

Patent 12602798
METHOD AND APPARATUS FOR GENERATING SUBJECT-SPECIFIC MAGNETIC RESONANCE ANGIOGRAPHY IMAGES FROM OTHER MULTI-CONTRAST MAGNETIC RESONANCE IMAGES
2y 5m to grant; granted Apr 14, 2026

Patent 12602784
MEDICAL DEVICE FOR TRANSCRIPTION OF APPEARANCES IN AN IMAGE TO TEXT WITH MACHINE LEARNING
2y 5m to grant; granted Apr 14, 2026

Patent 12594046
METHOD AND APPARATUS FOR ASSISTING DIAGNOSIS OF CARDIOEMBOLIC STROKE BY USING CHEST RADIOGRAPHIC IMAGES
2y 5m to grant; granted Apr 07, 2026

Patent 12586186
JAUNDICE ANALYSIS SYSTEM AND METHOD THEREOF
2y 5m to grant; granted Mar 24, 2026

Patent 12582345
Systems and Methods for Identifying Progression of Hypoxic-Ischemic Brain Injury
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+13.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 721 resolved cases by this examiner. Grant probability is derived from the career allow rate.
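As the note above says, the headline figure is simply the examiner's career allow rate. One plausible reading of the "with interview" number, consistent with the +13.4% lift shown earlier, is the base rate plus the lift, capped at 99%; the capping rule is an assumption, not something the report documents:

```python
# Hedged reconstruction of the projection card. The 99% cap is an assumption;
# the report does not document how the "with interview" figure is formed.
granted, resolved = 630, 721
interview_lift = 0.134

base = granted / resolved                         # 0.8738 -> shown as 87%
with_interview = min(base + interview_lift, 0.99)

print(f"grant probability: {base:.0%}")           # 87%
print(f"with interview:    {with_interview:.0%}") # 99%
```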
