Prosecution Insights
Last updated: April 19, 2026
Application No. 19/102,589

ENDOSCOPE DIAGNOSIS PROGRAM, ENDOSCOPE DIAGNOSIS DEVICE, CONTROL METHOD FOR ENDOSCOPE DIAGNOSIS DEVICE, AND PROGRAM FOR GENERATING ENDOSCOPE DIAGNOSIS TRAINED MODEL

Non-Final OA: §101, §103, §112
Filed
Feb 10, 2025
Examiner
YANG, YI-SHAN
Art Unit
3798
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Suncreer Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% — above average (262 granted / 380 resolved; -1.1% vs TC avg)
Interview Lift: strong, +57.2% on resolved cases with interview vs without
Typical Timeline: 3y 5m average prosecution; 42 applications currently pending
Career History: 422 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 32.8% (-7.2% vs TC avg)

Deltas are measured against a Tech Center average estimate • Based on career data from 380 resolved cases
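The headline figures above can be reproduced from the raw counts shown; the following is a minimal sketch of that arithmetic (the variable names are mine, not from the report):

```python
# Career allow rate from the resolved-case counts shown above.
granted, resolved = 262, 380
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 68.9%, displayed as 69%

# Each statute-specific rate is reported with a delta vs. the Tech Center
# average, so the implied TC average can be backed out as rate - delta.
# Notably, all four deltas imply the same TC average estimate of 40%.
statute_stats = {
    "101": (10.5, -29.5),
    "103": (37.3, -2.7),
    "102": (12.9, -27.1),
    "112": (32.8, -7.2),
}
for statute, (rate, delta) in statute_stats.items():
    print(f"§{statute}: examiner {rate}%, implied TC avg {rate - delta:.1f}%")
```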

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 10, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings filed on February 10, 2025 are accepted.

Claim Rejections - 35 USC § 101

Claims 10-14 and 16-18 are rejected under 35 U.S.C. 101 as the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter: Claims 10-13 are directed to an endoscope diagnosis program; Claim 14 is directed to an endoscope diagnosis device comprising a number of units without associating the device and the units with any physical structure, hence the units are considered merely algorithms or programs; and Claims 16-18 are directed to a trained model generation program. A “program” is non-statutory subject matter. A program or an algorithm is known and defined in the art as software, that is, a collection of instructions that performs a specific task when executed by a computer. Therefore, the program is software that is data per se, which does not fall within one of the four statutory categories. Alternatively, the broadest reasonable interpretation of the phrase “program” may include non-transitory embodiments, such as a processor, memory elements (ROM, RAM) and memory media (CDs), as well as transitory embodiments, such as computer program steps or software algorithms. However, the program may include transitory forms that are not statutory (In re Nuijten, 84 USPQ2d 1495).
A claim that covers both statutory and non-statutory embodiments embraces subject matter that is improperly directed to non-statutory subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 10 and 14 recite “a lesion presence/absence diagnosis unit”. Claims 11-13 recite “the lesion presence/absence diagnosis unit”. Claim 15 recites “a lesion presence/absence diagnosis step”. The term “presence/absence” renders the scope of the claims indefinite: it is unclear whether it refers to “presence or absence” or “presence and absence”. Clarification is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 10-18 are rejected under 35 U.S.C. 103 as being unpatentable over Oosake et al., JP2022162028A, hereinafter Oosake, in view of Park et al., “Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm”, Front. Med. 2022, 9:1036974, hereinafter Park.

Claims 10, 14 and 15.

Oosake teaches in FIGS. 1-3 & 6 an endoscope diagnosis program, device and method for diagnosing presence or absence of a lesion based on an endoscope image captured with an endoscope device, the endoscope diagnosis program causing a computer to function ([0029]: an endoscopic image processing program that causes a computer to perform the…functions) as: a diagnostic-use image acquisition unit (the image sensor 17, the endoscope control device 13 and the processor device 49) that acquires a diagnostic-use image of a diagnosis target site captured with an endoscope device (10, 11) ([0009]: an endoscopic image processing device comprising an endoscopic image acquisition unit that acquires endoscopic images captured by an endoscopic device); a color tone correction unit (65, 67) that acquires a reference color tone obtained by analyzing an image of a diagnosis target site captured with an endoscope device in advance and that performs color tone correction on the diagnostic-use image in accordance with the
acquired reference color tone ([0009]: an image conversion processing unit that performs processing to convert the endoscopic images into images of standard image quality; [0017]: the image conversion processing unit converts the endoscopic image into an image of standard image quality based on a color tone adjustment value set in the endoscopic device; [0069]: the color tone is corrected by the image processing unit 67 based on instructions from the input device 23 under the control of the endoscope control unit 65; [0133]: when adjusting for changes in image quality due to changes in the amount of light, image processing may be performed using a preset image processing parameter for adjusting changes in image quality…When adjusting the color tone to suit the surgeon’s preference, the color tone may be corrected by performing image processing using image processing parameters for adjusting the color tone; [0135]: by acquiring information on the image quality adjustment value set in the endoscope device, the image can be easily converted into an image of standard image quality) – the “[preset] image processing parameter” is considered the “reference color tone” as claimed; and a lesion presence/absence diagnosis unit (65, 67) that inputs the corrected diagnostic-use image into a trained model and that performs diagnosis processing of the presence or absence of a lesion based on a resulting output, the trained model having been generated through machine training of a plurality of training images of a diagnosis target site and subjected to color tone correction in accordance with the reference color tone ([0023]: the image recognition unit is composed of a convolutional neural network trained on images converted to a reference image quality; [0025]: the image recognition performed by the image recognition unit includes a process of detecting an area of interest and/or a process of classifying the recognition target; [0026]: a process of detecting an area of
interest such as a lesion and/or a process of classifying a recognition target such as a lesion is performed by image recognition).

Oosake does not teach that the plurality of training images of a diagnosis target site are captured with a plurality of types of endoscope devices. However, in an analogous endoscope image-based AI model training field of endeavor, Park teaches that the plurality of training images of a diagnosis target site are captured with a plurality of types of endoscope devices (p.2, Col. Right, 2.1 Collecting endoscopic images: ¶-1: The procedures were pictured using three endoscopic video processors, each equipped with an exclusive endoscope; ¶-2: the images stored in the PACS were extracted in PNG format, which supports full-color lossless data compression for AI training; p.3, FIG.2: flowchart of the study; and the content below FIG.2: among the images of the lower esophagus, those expressing the Z-line of the epithelial squamocolumnar junction were labeled as EGJ images) – all the images are of the lower esophagus, hence captured at a diagnosis target site. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the training images of Oosake employ such a feature of being captured with a plurality of types of endoscope devices as taught in Park, for the advantage of “developing practical AI that can be applied generally rather than limiting on a particular endoscope model”, as suggested in Park, p.2, Col. Left, 1. Introduction.

Claim 11.
Oosake further teaches that the color tone correction unit performs the color tone correction and image quality enhancement processing on the diagnostic-use image ([0009]: an image conversion processing unit that performs processing to convert the endoscopic images into images of standard image quality; [0017]: the image conversion processing unit converts the endoscopic image into an image of standard image quality based on a color tone adjustment value set in the endoscopic device; [0069]: the color tone is corrected by the image processing unit 67 based on instructions from the input device 23 under the control of the endoscope control unit 65; [0133]: when adjusting for changes in image quality due to changes in the amount of light, image processing may be performed using a preset image processing parameter for adjusting changes in image quality…When adjusting the color tone to suit the surgeon’s preference, the color tone may be corrected by performing image processing using image processing parameters for adjusting the color tone; [0135]: by acquiring information on the image quality adjustment value set in the endoscope device, the image can be easily converted into an image of standard image quality), and the lesion presence/absence diagnosis unit performs diagnosis processing of the presence or absence of a lesion by the trained model that has been generated through machine training of the training images that have undergone the same color correction and image quality enhancement processing as performed by the color tone correction unit ([0025]: the image recognition performed by the image recognition unit includes a process of detecting an area of interest and/or a process of classifying the recognition target; [0026]: a process of detecting an area of interest such as a lesion and/or a process of classifying a recognition target such as a lesion is performed by image recognition; [0140]: During machine learning, learning is performed using a group of
images converted to the standard image quality…The “process of converting to standard image quality” can also be said to be a process of converting to the image quality of the group of images used for learning when the image recognition unit was generated).

Claim 12.

Oosake further teaches that the lesion presence/absence diagnosis unit performs, as an image abnormality detection algorithm, inputting the corrected diagnostic-use image into a trained model that has been generated through machine training of a plurality of the training images corrected in accordance with the reference color tone, and diagnosing the presence of a lesion when an abnormality in the diagnostic-use image is detected as a resulting output ([0023]: the image recognition unit is composed of a convolutional neural network trained on images converted to a reference image quality; [0025]: the image recognition performed by the image recognition unit includes a process of detecting an area of interest and/or a process of classifying the recognition target; [0026]: a process of detecting an area of interest such as a lesion and/or a process of classifying a recognition target such as a lesion is performed by image recognition). Park further teaches that the training images have images in which no lesion is captured (FIG.1: flowchart of the present study: in each of the training datasets and the validation datasets, there are EGJ images and other images. Those “other images” are images in which no lesion is captured).

Claim 13.
Oosake further teaches that the lesion presence/absence diagnosis unit performs, as an algorithm for detecting an image feature amount, inputting the corrected diagnostic-use image into a trained model that has been generated through machine training of a plurality of the training images corrected in accordance with the reference color tone and in which lesions are captured, and diagnosing the presence of a lesion when the feature amount of the diagnostic-use image is determined to be a predetermined value or more as a resulting output, and diagnosing the absence of a lesion when the feature amount of the diagnostic-use image is determined to be less than the predetermined value as a resulting output ([0023]: the image recognition unit is composed of a convolutional neural network trained on images converted to a reference image quality; [0025]: the image recognition performed by the image recognition unit includes a process of detecting an area of interest and/or a process of classifying the recognition target; [0026]: a process of detecting an area of interest such as a lesion and/or a process of classifying a recognition target such as a lesion is performed by image recognition). Park further teaches that the training images have images in which no lesion is captured (FIG.1: flowchart of the present study: in each of the training datasets and the validation datasets, there are EGJ images and other images. The EGJ images are the images in which lesions are captured. Those “other images” are images in which no lesion is captured).

Claim 16.
Oosake teaches a trained model generation program for endoscope diagnosis for generating a trained model used in endoscope diagnosis for diagnosing presence or absence of a lesion based on an endoscope image captured with an endoscope device ([0139]: the image recognition unit is configured with a convolutional neural network trained by machine learning…the image recognition unit can be configured with a trained model generated by machine learning), the trained model generation program causing a computer to function as: a training image acquisition unit that acquires a plurality of training images of a diagnosis target site ([0140]: during machine learning, learning is performed using a group of images converted to the standard image quality; [0141]: the endoscopic image to be recognized is configured to be acquired directly from the endoscopic device, but the source from which the endoscopic image is acquired is not limited to this. For example, endoscopic images stored in other storage devices…recorded on a server, etc.) – since the images come from other storage devices or a server, this indicates that the plurality of training images are acquired by other devices; a color tone correction unit that acquires a reference color tone obtained by analyzing an image of a diagnosis target site captured with an endoscope device in advance and that performs color tone correction on each of the training images in accordance with the acquired reference color tone ([0140]: the “reference image quality” can also be said to be the image quality of the group of images used for learning when the image recognition unit is generated.
The “process of converting to standard image quality” can also be said to be a process of converting to the image quality of the group of images used for learning when the image recognition unit was generated; [0009]: an image conversion processing unit that performs processing to convert the endoscopic images into images of standard image quality; [0017]: the image conversion processing unit converts the endoscopic image into an image of standard image quality based on a color tone adjustment value set in the endoscopic device; [0069]: the color tone is corrected by the image processing unit 67 based on instructions from the input device 23 under the control of the endoscope control unit 65; [0133]: when adjusting for changes in image quality due to changes in the amount of light, image processing may be performed using a preset image processing parameter for adjusting changes in image quality…When adjusting the color tone to suit the surgeon’s preference, the color tone may be corrected by performing image processing using image processing parameters for adjusting the color tone; [0135]: by acquiring information on the image quality adjustment value set in the endoscope device, the image can be easily converted into an image of standard image quality) – the “[preset] image processing parameter” is considered the “reference color tone” as claimed; a trained model generation unit that generates a plurality of trained models by causing machine training to be performed using a machine training algorithm on each of the training images whose color tone has been corrected by the color tone correction unit ([0139]: the image recognition unit is configured with a convolutional neural network trained by machine learning…the image recognition unit can be configured with a trained model generated by machine learning; [0140]: during machine learning, learning is performed using a group of images converted to the standard image quality).
Oosake does not teach that (1) the plurality of training images of a diagnosis target site are captured with a plurality of types of endoscope devices, and (2) a trained model determination unit that inputs a validation image whose presence or absence of a lesion has been validated to each of the generated trained models and that determines the trained model having a highest correct answer rate as a trained model to be used for endoscope diagnosis. However, in an analogous endoscope image-based AI model training field of endeavor, Park teaches that (1) the plurality of training images of a diagnosis target site are captured with a plurality of types of endoscope devices (p.2, Col. Right, 2.1 Collecting endoscopic images: ¶-1: The procedures were pictured using three endoscopic video processors, each equipped with an exclusive endoscope; ¶-2: the images stored in the PACS were extracted in PNG format, which supports full-color lossless data compression for AI training; p.3, FIG.2: flowchart of the study; and the content below FIG.2: among the images of the lower esophagus, those expressing the Z-line of the epithelial squamocolumnar junction were labeled as EGJ images) – all the images are of the lower esophagus, hence captured at a diagnosis target site; and (2) a trained model determination unit that inputs a validation image whose presence or absence of a lesion has been validated to each of the generated trained models and that determines the trained model having a highest correct answer rate as a trained model to be used for endoscope diagnosis (p.4: After the standardization process, the images were randomly extracted by case to include a similar number of EGJ images when classified by each endoscope model.
The images were distributed in an approximately 8:2 ratio such that no cases intersected with one another and were classified into training and validation datasets, respectively…The best model to show the highest accuracy for the validation dataset was selected as the final model during 200 epochs; p.5, Col. Left, ¶-2: AI distinguished the endoscope model, in which a picture was validated with the highest softmax value for top 1 accuracy). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the training images of Oosake employ such features of (1) the plurality of training images of a diagnosis target site being captured with a plurality of types of endoscope devices, and (2) a trained model determination unit that inputs a validation image whose presence or absence of a lesion has been validated to each of the generated trained models and that determines the trained model having a highest correct answer rate as a trained model to be used for endoscope diagnosis, as taught in Park, for the advantage of “developing practical AI that can be applied generally rather than limiting on a particular endoscope model”, as suggested in Park, p.2, Col. Left, 1. Introduction.

Claim 17.

In regard to the color tone correction, Oosake further teaches in [0077]: the color tone is what suits the surgeon’s preference; [0076]: the color tone correction is based on the adjustment value (step S3); [0073]: the endoscope control unit acquires information on the color tone adjustment value via the input device 23.
Hence, how to set the color tone, i.e., “in a case where a lesion is captured in the training image, the color tone correction unit executes color tone correction in accordance with the reference color tone in an area other than a lesion existing area”, or any other setting, is considered a design choice. It is well-known in the art that color tone correction serves to normalize the image pixel intensities to a standard for interpretation. One of ordinary skill in the art would select a region as the reference color tone that is suitable for the particular utility of the data processing. Normalization may be done by choosing either the minimum or the maximum as the reference point. Either configuration would reasonably allow the color correction to be performed properly where needed, and involves only routine skill in the art to achieve with a reasonable expectation of success.

Claim 18.

Oosake further teaches that the trained model generation program causes a computer to function as an affine transformation unit that performs an arbitrary affine transformation on the training image corrected by the color tone correction unit, and the trained model generation unit causes machine training to be performed on the training image transformed by the affine transformation unit together with the training image corrected by the color tone correction unit to generate a trained model ([0140]: the “reference image quality” can also be said to be the image quality of the group of images used for learning when the image recognition unit is generated.
The “process of converting to standard image quality” can also be said to be a process of converting to the image quality of the group of images used for learning when the image recognition unit was generated; [0070]: the endoscope control unit sets a correction matrix for performing matrix correction on each intensity value (brightness value) of R, G, and B based on the color tone adjustment value input from the input device 23; [0072]: before correction R0, G0, B0, the correction matrix A, after matrix correction R1, G1, B1; [0073]: the coefficients aij of the correction matrix A are set based on the adjustment value of the color tone) – converting R0, G0, B0 to R1, G1, B1 for each pixel intensity via a matrix correction is considered the “affine transformation” as claimed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI-SHAN YANG, whose telephone number is (408) 918-7628. The examiner can normally be reached Monday-Friday, 8am-4pm PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal M. Bui-Pho, can be reached at 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YI-SHAN YANG/
Primary Examiner, Art Unit 3798
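The per-pixel matrix correction that the rejection maps onto the claimed affine transformation (Oosake [0070]-[0073]: before-correction values R0, G0, B0 multiplied by a correction matrix A to give R1, G1, B1) can be sketched as follows; the matrix values and the helper name are illustrative assumptions, not taken from the reference:

```python
# Per-pixel color tone correction as described in Oosake [0070]-[0073]:
# [R1, G1, B1] = A · [R0, G0, B0] (+ optional offset), where the
# coefficients of the correction matrix A are set from the color tone
# adjustment value. A linear map of this kind, optionally with an added
# offset vector, is an affine transformation, which is the examiner's
# mapping onto the claim term.

def correct_pixel(pixel, A, offset=(0, 0, 0)):
    """Apply a 3x3 correction matrix A (row-major) to one RGB pixel."""
    r0, g0, b0 = pixel
    return tuple(
        A[i][0] * r0 + A[i][1] * g0 + A[i][2] * b0 + offset[i]
        for i in range(3)
    )

# Illustrative matrix: boost red, leave green unchanged, damp blue.
A = [[1.25, 0.00, 0.00],
     [0.00, 1.00, 0.00],
     [0.00, 0.00, 0.75]]

print(correct_pixel((100, 100, 100), A))  # (125.0, 100.0, 75.0)
```

With a zero offset this reduces to the pure matrix correction quoted from [0072]; a non-zero offset shows why the operation is affine rather than merely linear.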

Prosecution Timeline

Feb 10, 2025
Application Filed
Jan 31, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594043
METHODS AND SYSTEMS FOR FAST FILTER CHANGE
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12594003
DEVICE, SYSTEM AND METHOD FOR DETERMINING RESPIRATORY INFORMATION OF A SUBJECT
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12594063
TISSUE IMAGING IN PRESENCE OF FLUID DURING BIOPSY PROCEDURE
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12592318
Neuronal Activity Mapping Using Phase-Based Susceptibility-Enhanced Functional Magnetic Resonance Imaging
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12575805
ULTRASOUND PROBE WITH AN INTEGRATED NEEDLE ASSEMBLY AND A COMPUTER PROGRAM PRODUCT, A METHOD AND A SYSTEM FOR PROVIDING A PATH FOR INSERTING A NEEDLE OF THE ULTRASOUND PROBE
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+57.2%)
Median Time to Grant: 3y 5m
PTA Risk: Low

Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
