Prosecution Insights
Last updated: April 19, 2026
Application No. 18/537,688

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM

Status: Non-Final OA, §102
Filed: Dec 12, 2023
Examiner: LAM, ANDREW H
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 1y 11m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 84% (457 granted / 542 resolved; +22.3% vs TC avg), above average
Interview Lift: +6.8% on resolved cases with interview (moderate lift)
Avg Prosecution: 1y 11m (fast prosecutor; 9 currently pending)
Total Applications: 551 across all art units (career history)

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      11.4%   -28.6%
§103      56.5%   +16.5%
§102      20.7%   -19.3%
§112      10.4%   -29.6%

Based on career data from 542 resolved cases; TC averages are estimates.
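One sanity check worth noting: the listed deltas are internally consistent. Subtracting each delta from the examiner's rate yields the same implied Tech Center average for every statute. A quick check (variable names are illustrative, values copied from the table above):

```python
# Consistency check on the statute-specific table: examiner rate and the
# listed difference vs the Tech Center (TC) average, per statute.
rates = {"101": (11.4, -28.6), "103": (56.5, +16.5),
         "102": (20.7, -19.3), "112": (10.4, -29.6)}

# Implied TC average per statute: examiner rate minus the delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied)  # every statute implies the same TC average: 40.0
```

All four statutes imply a 40.0% Tech Center average, which suggests the tool benchmarks against a single estimated TC baseline rather than per-statute averages.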

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

This action is responsive to the following communication: an application filed on 12/12/2023, where claims 1-9 are currently pending.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-9 and 11-12 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lerousseau et al. (US 2024/0153085, hereinafter Lerousseau).

Regarding claim 1, Lerousseau teaches: An image processing apparatus (fig. 1, computer device 1) comprising: at least one processor (fig. 1, processor 4), wherein the processor derives a degree of attention for each organ based on a content of a medical document ([0073], The training set comprises labelled medical images. [0075], each image is associated with a label or annotation comprising quantitative information about the number of pixels of the image that belong, for each of labelled classes. Said information may be expressed as a percentage or to the proportion within the image, or as a number indicating an absolute value.), sets an execution condition of abnormality detection processing on a medical image according to the degree of attention for each organ ([0076], Classes may indicate the presence or not of an apparent tumoural tissue for the concerned pixels. For example, a label value of 0,6 for the class “tumour” may indicate that 60% percent of the pixels represents tumour tissue on that image. As mentioned before, such label may be given by a specialized physician and considered as a ground-truth label.), and executes the abnormality detection processing in accordance with the set execution condition ([0083], In a third step S3, the processor computes a prediction, using a decision system, for each pixel of at least part of said sub-images or instances, the probability that said pixel or said feature belongs to each of the above classes, the prediction from the decision system being in the form of a prediction tensor. [0116], Additionally, a visual approximation of the percentage of tumour tissue, relative to the whole tissue extent, was computed by pathologists on TCGA. For instance, a slide with no apparent tumour tissue was assigned a percentage of 0%, a slide with only tumour tissue was assigned a percentage of 100%, and a whole slide image with half tumour tissue and half non tumour tissue was assigned a percentage of 50%. These labels are publicly and freely available in TCGA, denoted by the identifier “percent_tumour_cells”.).

Regarding claim 2, Lerousseau teaches: The image processing apparatus according to claim 1, wherein the processor derives the degree of attention based on a sentence described in at least one of a finding or a diagnosis result in the medical document ([0075], Each image is associated with a label or annotation comprising quantitative information about the number of pixels of the image that belong, for each of labelled classes. [0076], For example, a label value of 0,6 for the class “tumour” may indicate that 60% percent of the pixels represents tumour tissue on that image.).

Regarding claim 3, Lerousseau teaches: The image processing apparatus according to claim 2, wherein the processor derives the degree of attention based on at least one of an appearance frequency of a relevant word for each organ, an appearance frequency of a relevant sentence for each organ, the number of characters in the relevant sentence for each organ, or a degree of complexity of the relevant sentence for each organ ([0076], As mentioned before, such label may be given by a specialized physician and considered as a ground-truth label. Other label value may concern other classes, for example classes associated to necrotic tissue or healthy tissue. In the case of percentage, the sum of the label value may or not be equal to 1, i.e. equal to 100%.).

Regarding claim 4, Lerousseau teaches: The image processing apparatus according to claim 2, wherein the processor derives the degree of attention by inputting at least a part of the medical document corresponding to a medical image of a diagnosis target to a trained model that receives at least a part of the medical document as an input and outputs the degree of attention, the trained model being trained using at least a part of a plurality of sets of the medical documents and the degree of attention, as learning data ([0076-78], Classes may indicate the presence or not of an apparent tumoural tissue for the concerned pixels. For example, a label value of 0,6 for the class “tumour” may indicate that 60% percent of the pixels represents tumour tissue on that image. As mentioned before, such label may be given by a specialized physician and considered as a ground-truth label. Other label value may concern other classes, for example classes associated to necrotic tissue or healthy tissue. In the case of percentage, the sum of the label value may or not be equal to 1, i.e. equal to 100%.).

Regarding claim 5, Lerousseau teaches: The image processing apparatus according to claim 1, wherein the processor executes the abnormality detection processing by inputting a medical image of a diagnosis target to a plurality of trained models that receive the medical image as an input and output region information representing an abnormal region of the medical image (see fig. 2, [0072], In a first step S1, an image from a training set is selected. Said training set is obtained through the input interface 2. The training set comprises labelled medical images.) and a degree of certainty that the abnormal region is abnormal, the plurality of trained models being trained for each organ using a plurality of sets of the medical images, the region information, and the degree of certainty, as learning data, and the execution condition includes execution necessity of the abnormality detection processing for each organ, and a detection threshold value used for comparison with the degree of certainty (fig. 2, and fig. 3, [0076], Classes may indicate the presence or not of an apparent tumoural tissue for the concerned pixels. For example, a label value of 0,6 for the class “tumour” may indicate that 60% percent of the pixels represents tumour tissue on that image. As mentioned before, such label may be given by a specialized physician and considered as a ground-truth label. Other label value may concern other classes, for example classes associated to necrotic tissue or healthy tissue. In the case of percentage, the sum of the label value may or not be equal to 1, i.e. equal to 100%.).

Regarding claim 6, Lerousseau teaches: The image processing apparatus according to claim 5, wherein the processor sets the detection threshold value used for comparison with the degree of certainty output from the trained model corresponding to the organ to a larger value, as the degree of attention of the organ is higher ([0101], For instance, for an input microscopic image with provided quantitative label of 40%, 40% of the outputs with highest values are assigned a value of 1 for error computation for a set of input instances extracted from the input microscopic image, while the 60% remainder of instances outputs are assigned a value of 0. In other words, in such case, all pixels of 40% of the sub-images may be assigned the probability of 1, and the pixels of the other sub-images may be assigned the probability of 0. [0102], In a more general embodiment, the number n of predicted classes may be greater than 1. In this context, the label of the corresponding image is a vector or a set of n values. Each value may represent the percentage of pixels of said image that belong to the corresponding class.).

Regarding claim 7, Lerousseau teaches: The image processing apparatus according to claim 5, wherein the processor does not execute the abnormality detection processing on an organ of which the degree of attention is equal to or more than a threshold value, and executes the abnormality detection processing on an organ of which the degree of attention is less than the threshold value ([0105], In this case, only a subset of said pixels can be assigned a value, while the remainder pixels are not assigned any pseudo ground-truth value. These pixels and their corresponding decision system outputs are masked in the next step when computing a loss or cost function and are thus not used to update the parameters of the decision system as described below. The percentage of pixels to be discarded can be pre-defined, or randomly sampled for each training image or for a plurality of training images.).

Claims 8 and 9 are rejected for reasons similar to claim 1 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW H LAM, whose telephone number is (571) 270-7969 and fax number is (571) 270-8969. The examiner can normally be reached 9AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu, can be reached at (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW H LAM/
Primary Examiner, Art Unit 2682
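The pseudo-labeling mechanism the examiner cites against claim 6 ([0101] of Lerousseau) is straightforward to sketch: given an image-level percentage label, the top fraction of per-instance prediction scores is assigned 1 and the remainder 0. The function and variable names below are illustrative, not from the reference:

```python
# Sketch of the pseudo ground-truth assignment described in Lerousseau [0101]:
# for an image labeled e.g. 40% tumour, the 40% of instance predictions with
# the highest scores are assigned label 1; the remaining 60% are assigned 0.
def assign_pseudo_labels(scores, tumour_fraction):
    """scores: per-instance prediction values; tumour_fraction: e.g. 0.4 for 40%."""
    n_positive = round(len(scores) * tumour_fraction)
    # Rank instance indices by score, highest first.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    labels = [0] * len(scores)
    for i in ranked[:n_positive]:
        labels[i] = 1
    return labels

print(assign_pseudo_labels([0.9, 0.1, 0.7, 0.3, 0.5], 0.4))  # [1, 0, 1, 0, 0]
```

Note the mismatch this makes visible: the reference's threshold is driven by a pathologist-supplied tumour percentage during training, whereas claim 6 sets a detection threshold from a document-derived degree of attention at inference time, which is a natural angle for the §102 response.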

Prosecution Timeline

Dec 12, 2023: Application Filed
Mar 07, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602791: VISUAL SEGMENTATION OF DOCUMENTS CONTAINED IN FILES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593000: IMAGE-FORMING SYSTEM, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586404: METHOD AND SYSTEM FOR RELEVANT DATA EXTRACTION FROM A DOCUMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12575887: SURGICAL SYSTEMS, ANATOMICAL MODELS AND ASSOCIATED METHODS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12581018: INFORMATION PROCESSING SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 91% (+6.8%)
Median Time to Grant: 1y 11m
PTA Risk: Low

Based on 542 resolved cases by this examiner. Grant probability derived from career allow rate.
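The headline numbers above can be reproduced from the examiner's career counts. This is a sketch of the apparent arithmetic, not the tool's published formula:

```python
# Reproducing the projections from the examiner's career data:
# 457 grants out of 542 resolved cases, plus the reported +6.8% interview lift.
granted, resolved = 457, 542
base = granted / resolved                     # career allow rate ~0.843
interview_lift = 0.068                        # reported +6.8% lift

print(round(base * 100))                      # 84 -> "Grant Probability: 84%"
print(round((base + interview_lift) * 100))   # 91 -> "With Interview: 91%"
```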
