Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,487

ENDOSCOPIC IMAGE PROCESSING DEVICE, ENDOSCOPIC IMAGE PROCESSING METHOD, AND ENDOSCOPE SYSTEM

Status: Non-Final OA (§102)
Filed: Apr 11, 2024
Examiner: LU, TOM Y
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 88%, above average (826 granted / 941 resolved; +25.8% vs TC avg)
Interview Lift: +3.0% across resolved cases with interview (minimal)
Typical Timeline: 2y 8m avg prosecution; 23 applications currently pending
Career History: 964 total applications across all art units
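The headline figures above reconcile as simple ratios. A minimal sketch follows; this is a hypothetical reconstruction (the dashboard does not publish its formula), assuming Grant Probability is just the career allow rate, rounded, with the interview lift added on top:

```python
# Hypothetical reconstruction of the headline figures; the dashboard's
# actual model is not published. Assumes Grant Probability == career
# allow rate (granted / resolved), and "With Interview" adds the lift.
granted, resolved = 826, 941
interview_lift = 3.0  # percentage points

allow_rate = 100 * granted / resolved                       # 87.78...
grant_probability = round(allow_rate)                       # 88
with_interview = round(grant_probability + interview_lift)  # 91

print(f"Career allow rate: {allow_rate:.1f}%")  # 87.8%
print(f"Grant probability: {grant_probability}%")
print(f"With interview:    {with_interview}%")
```

Under the same assumption, the quoted "+25.8% vs TC avg" would imply a Tech Center average near 62%, though that figure is never shown directly.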

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 28.7% (-11.3% vs TC avg)
§102: 37.2% (-2.8% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 941 resolved cases
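Notably, every "vs TC avg" delta above is consistent with a single flat Tech Center average of 40.0% (presumably the black reference line). A quick sanity check, with the 40.0% figure inferred from the deltas rather than stated anywhere in the report:

```python
# Sanity check (hypothetical): all four deltas reconcile against one
# flat Tech Center average estimate of 40.0%. That value is inferred
# from the deltas; the report never states it numerically.
TC_AVG = 40.0

rates = {"§101": 12.6, "§103": 28.7, "§102": 37.2, "§112": 11.6}
for statute, rate in rates.items():
    delta = round(rate - TC_AVG, 1)  # e.g. 12.6 - 40.0 -> -27.4
    print(f"{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```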

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 07/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-12, 14-16 and 18-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Usuda (U.S. Publication No. 2020/0294227 A1).
As per claim 1, Usuda discloses an endoscopic image processing device (figure 1) that processes an endoscopic image, the endoscopic image processing device comprising: a processor (figure 1, processor 12), wherein the processor acquires the endoscopic image (figure 2, video images 38), recognizes a state of an organ (paragraph [0071]: the state of an organ may be inflammation) as an examination target from the acquired endoscopic image, sets a detection criterion for a region of interest according to a recognition result of the state of the organ (paragraphs [0072]-[0073]: feature quantities are analyzed for detecting a region of interest, and the detection criterion may be feature quantities in colors, gradient of pixel values, a shape or a size), and detects the region of interest from the endoscopic image on the basis of the set detection criterion (paragraph [0074]: the combination of detection unit 41 and acquisition unit 42 detects a region of interest based on the feature quantities and region-of-interest information, such as coordinates and presence of the ROI).

As per claim 2, Usuda discloses wherein the processor recognizes the state of the organ from the endoscopic image in which a specific region of the organ is imaged (a region of interest is imaged to recognize inflammation of an organ).

As per claim 3, Usuda discloses wherein the processor recognizes the state of the organ from a plurality of endoscopic images in which different regions of the organ are imaged (paragraphs [0010]-[0011] & [0057]: a plurality of endoscopic images are captured to detect a region of interest during the movement of the endoscope).

As per claim 4, Usuda discloses wherein the endoscopic image used for recognizing the state of the organ is the endoscopic image in which a relatively wider range than the endoscopic image used for detecting the region of interest is imaged (see figures 4-5).
As per claim 5, Usuda discloses wherein the processor recognizes the state of the organ by recognizing a state regarding histological abnormalities in a mucous membrane from the endoscopic image (paragraph [0071]: "an endoscopic mucosal resection (EMR) scar, an endoscopic submucosal dissection (ESD) scar").

As per claim 6, Usuda discloses wherein the processor acquires a plurality of endoscopic images captured in chronological order, recognizes the state of the organ from a first endoscopic image among the plurality of endoscopic images, and detects the region of interest from a second endoscopic image different from the first endoscopic image, among the plurality of endoscopic images (the image acquisition unit 40 is a time-series image acquisition unit 40).

As per claim 8, Usuda discloses wherein the second endoscopic image is the endoscopic image captured temporally later than the first endoscopic image (as explained above, the endoscopic images are captured in a time series).

As per claim 9, Usuda discloses wherein the processor displays information regarding the endoscopic image and the state of the organ recognized from the endoscopic image on a display device (see figures 1-2 for display section 16 and figures 4-5 for displayed endoscopic images).

As per claim 10, Usuda discloses wherein the processor notifies of a detection result of the region of interest in a different mode according to setting of the detection criterion (the detection result is notified on a display screen).

As per claim 11, Usuda discloses wherein the processor notifies of the endoscopic image to be displayed on a display device by surrounding the detected region of interest with a frame, and displays the frame in a display aspect according to the setting of the detection criterion (see figures 4-5).
As per claim 12, Usuda discloses wherein the processor detects the region of interest from the endoscopic image using a trained model, and sets the trained model to be used for detecting the region of interest, according to the recognition result of the state of the organ (the convolutional neural network in paragraph [0070] is the claimed "trained model").

As per claim 14, Usuda discloses wherein the processor recognizes the state of the organ by recognizing a state regarding inflammation and/or atrophy of the mucous membrane as the state regarding the histological abnormalities in the mucous membrane (paragraph [0071]).

As per claim 15, Usuda discloses wherein the processor recognizes a state regarding pylori infection of a stomach (Usuda teaches the state may be an inflammation of an organ; although Usuda does not explicitly teach the organ is a stomach, it is understood the inflammation in Usuda can occur in any organ, including a pylori infection of a stomach).

As per claim 16, Usuda discloses wherein the processor recognizes uninfected, currently infected, and eradicated states as the state regarding the pylori infection of the stomach (Usuda teaches the endoscopic image is capable of detecting healthy tissue, inflamed tissue and scars).

As per claim 18, Usuda discloses wherein the processor recognizes a state regarding Barrett's esophagus of an esophagus (as explained above, Usuda in paragraph [0071] teaches "Examples of a region of interest includes a polyp, a cancer, the colonic diverticula, an inflammation, an endoscopic mucosal resection (EMR) scar, an endoscopic submucosal dissection (ESD) scar, a clipped portion, a bleeding point, a perforation, blood vessel heteromorphism, a treatment tool, and the like". The examiner notes the detection of a region of interest can be intended for different illnesses, such as "Barrett's esophagus").
As per claim 19, Usuda discloses wherein the processor recognizes a state regarding an inflammatory bowel disease of a large intestine (paragraph [0071]: "colonic diverticula").

As per claim 20, Usuda discloses wherein the processor recognizes the state of the organ by dividing the state into three or more states, and sets the detection criterion according to the recognized state of the organ (paragraph [0072]: "tumor", "non-tumor" and "others").

As per claim 21, Usuda discloses wherein the processor acquires information on a light source type, and sets the detection criterion according to the recognition result of the state of the organ and the light source type (paragraph [0061]: "white light" or "special light"; the examiner notes the feature quantities of the images, i.e., normal light image and special light image, are based on the light source types).

As per claim 22, Usuda discloses an endoscopic image processing method of performing processing of detecting a region of interest from an endoscopic image using a trained model, the endoscopic image processing method comprising: acquiring information on a state of an organ as an examination target; and setting the trained model to be used according to the state of the organ (see explanation in claim 1; the examiner notes a CNN model in paragraph [0070] is the claimed "trained model" for setting the feature quantities to detect a state of the organ, such as an inflammation in paragraph [0071]).

As per claim 23, Usuda discloses an endoscopic image processing method of performing processing of detecting region-of-interest candidates from an endoscopic image by calculating a confidence level indicating probability, and detecting the region-of-interest candidate of which the confidence level is equal to or greater than a threshold value, from among the detected region-of-interest candidates, as a region of interest (the preamble of the claim is not considered a limitation and is of no significance to claim construction. See Pitney Bowes, Inc. v. Hewlett-Packard Co., 182 F.3d 1298, 1305, 51 USPQ2d 1161, 1165 (Fed. Cir. 1999). See MPEP § 2111.02), the endoscopic image processing method comprising: acquiring information on a state of an organ as an examination target; and setting the threshold value according to the state of the organ (see explanation in claim 1; the examiner notes the "size" or "shape" is the claimed "threshold value" for feature quantities).

As per claim 24, see figure 1 for the claimed endoscope system.

Allowable Subject Matter

Claims 7, 13 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOM Y LU whose telephone number is (571) 272-7393. The examiner can normally be reached Monday - Friday, 9AM - 5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TOM Y LU/
Primary Examiner, Art Unit 2667

Prosecution Timeline

Apr 11, 2024: Application Filed
Feb 04, 2026: Non-Final Rejection (§102)
Apr 01, 2026: Examiner Interview Summary
Apr 01, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597133: TRAINING END-TO-END WEAKLY SUPERVISED NETWORKS AT THE SPECIMEN (SUPRA-IMAGE) LEVEL
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591967: DISPLACEMENT ESTIMATION OF INTERVENTIONAL DEVICES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591296: REDUCING POWER CONSUMPTION OF EXTENDED REALITY DEVICES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12573037: LEARNING APPARATUS, LEARNING METHOD, TRAINED MODEL, AND PROGRAM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12564867: METHOD AND DEVICE FOR DETECTING CONTAINERS WHICH HAVE FALLEN OVER AND/OR ARE DAMAGED IN A CONTAINER MASS FLOW
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88% (91% with interview, +3.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 941 resolved cases by this examiner. Grant probability derived from career allow rate.
