Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,942

LEARNING SUPPORT DEVICE, ENDOSCOPE SYSTEM, METHOD FOR SUPPORTING LEARNING, AND RECORDING MEDIUM

Non-Final OA §103
Filed: Mar 25, 2024
Examiner: HERNANDEZ, ALEJANDRO
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Olympus Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (28 granted / 37 resolved) — above average, +13.7% vs TC avg
Interview Lift: +29.7% across resolved cases with interview (strong)
Typical Timeline: 2y 11m avg prosecution; 18 currently pending
Career History: 55 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 37 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 6, 9, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Usuda; Toshihiro et al. (US 20200285876 A1; hereinafter simply referred to as Usuda) in view of Takemura; Tomoaki et al.
(US 20150051617 A1; hereinafter simply referred to as Takemura).

Regarding independent claim 1, Usuda teaches: A learning support device that supports formation of a learning model that recognizes a treatment instrument within an endoscopic image, the learning support device comprising a processor (See ¶ 41-44, 225-227, Figures 1 and 13, wherein an endoscopic image learning device, ‘10’ in Figure 1, is disclosed that supports a learning model, for recognizing a treatment instrument/tool within an endoscopic image, wherein the device comprises a processor, ‘116’ in Figure 13); generate a foreground image including at least one treatment instrument by placing an image of the at least one treatment instrument within an image region based on placement data (See ¶ 88, 93, 129, wherein the foreground image, ‘30A’ in Figure 7, is generated by placing the treatment instrument/tool within an image region based on placement data (positional/distance relationships)); and form a training image by superimposing the foreground image on a background image (See ¶ 88-90, Figures 7 and 4, wherein a superimposed image (training image), ‘38A’ in Figure 7, is created by the image generation unit, ‘38’ in Figure 4, wherein the foreground image, ‘30A’ in Figure 7, is the treatment instrument/tool which is superimposed on the background endoscopic image (background image), ‘36A’ in Figure 7).

Usuda does not explicitly disclose the placement data showing a three-dimensional placement of the at least one treatment instrument as viewed through an endoscope. However, Takemura discloses the placement data being data showing a three-dimensional placement of the at least one treatment instrument as viewed through an endoscope (See ¶ 104, 93, 34, Figures 6A, 6B, and 25A, wherein the placement data is data showing a three-dimensional image/coordinates/placement of the treatment/surgical instrument as viewed through an endoscope).
As taught by Takemura, the three-dimensional data allows for the position of the endoscope to be displayed more accurately (See ¶ 26, wherein the three-dimensional coordinates allow for a better display of the endoscope within the displayed images). As both the teachings of Usuda and Takemura deal with the technical field of image processing using endoscopes, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Usuda with Takemura to teach the placement data showing a three-dimensional placement of the at least one treatment instrument as viewed through an endoscope, in order for the position of the endoscope to be displayed more accurately.

Regarding dependent claim 2, Usuda in view of Takemura teaches: The placement data includes a three-dimensional position and orientation of each of the at least one treatment instrument as viewed through the endoscope (See Takemura ¶ 93 and 104, wherein the three-dimensional orientation and position (three-dimensional coordinates) of the treatment/surgical instrument is obtained as viewed through the endoscope).
Regarding dependent claim 3, Usuda in view of Takemura teaches: The placement data includes distance information relating to at least one distance between the endoscope and each of the at least one treatment instrument (See Takemura ¶ 93 and 104, wherein the distance information relating to at least one distance between the endoscope and each of the at least one instrument is represented by the coordinate data of each of the instrument and endoscope, necessarily providing the distance information between them); adjust the brightness of each of the at least one treatment instrument within the foreground image based on the distance information (See Usuda ¶ 62, 64, and 88, wherein the brightness of the treatment instrument/tool is adjusted via the use of color conversion performed on the foreground image, ‘30A’ in Figure 7, necessarily based on the distance information as the foreground image is acquired based on the distance information, wherein the distance information is taught by Takemura ¶ 93 and 104 as explained above); and generate the training image by superimposing the foreground image, which is adjusted, on the background image (See Usuda ¶ 88-90, 62, 64, Figures 7 and 4, wherein a superimposed image (training image), ‘38A’ in Figure 7, is created by the image generation unit, ‘38’ in Figure 4, wherein the color-converted foreground image, ‘30A’ in Figure 7, is the treatment instrument/tool which is superimposed on the background endoscopic image (background image), ‘36A’ in Figure 7).

Regarding dependent claim 5, Usuda in view of Takemura teaches: Adjust brightness of each of the at least one treatment instrument based on a brightness distribution of the background image (See Usuda ¶ 64, wherein the brightness of the treatment instrument (foreground image comprising treatment instrument ‘30A’ in Figure 7) is adjusted (color conversion) based on a brightness distribution of the background image (color of background endoscopic image ‘36A’ in Figure 7)).
Regarding dependent claim 6, Usuda in view of Takemura teaches: A storage unit configured to store a learning-use model, wherein the processor is further configured to cause the learning-use model to learn the training image to generate a learning model that recognizes the treatment instrument within the endoscopic image (See Usuda ¶ 57, 71, 54, Figures 1, 4, and 5, wherein a learning model (learning-use model) is stored in a storage unit, ROM ‘26’ in Figure 1, executed by the CPU, ‘22’ in Figure 1, causing the learning model to learn the training image (superimposed image) and recognize the treatment instrument/tool within the endoscopic image).

Regarding dependent claim 9, Usuda in view of Takemura teaches: An endoscope configured to acquire the endoscopic image (See Usuda ¶ 41-44, 225-227, Figures 1 and 13, wherein an endoscope is configured to acquire an endoscopic image); and an image processing apparatus including a processor and a storage unit configured to store the learning model (See Usuda ¶ 54, 71, and Figures 1, 4, and 5, wherein a learning model (learning-use model) is stored in a storage unit, ROM ‘26’ in Figure 1, executed by the CPU, ‘22’ in Figure 1, as part of an image processing apparatus, ‘10’ in Figures 1 and 4); input the endoscopic image to the learning model; and obtain, from the learning model, a recognition result with respect to the treatment instrument within the endoscopic image (See Usuda ¶ 57, 71, wherein the learning model learns the training image (superimposed image) and recognizes the treatment instrument/tool within the endoscopic image).

Regarding dependent claim 11, Usuda in view of Takemura teaches: A display device, wherein the processor of the image processing apparatus is further configured to display the recognition result on the display device (See Usuda ¶ 145, 225, wherein the display control unit, ‘158’ in Figure 15, is configured to display the recognition result on the display unit, ‘118’ in Figure 15).
Regarding independent claim 12, claim 12 is a method claim corresponding to claim 1. Please see the discussion of claim 1 above.

Regarding dependent claim 13, Usuda in view of Takemura teaches: A computer readable non-transitory recording medium that stores a learning support program that causes a computer to perform the method for supporting learning according to claim 12 (See Usuda ¶ 42, 54, and claim 13, wherein a non-transitory recording medium in the form of ROM, ‘26’ in Figure 1, stores the learning support program, in the form of the endoscopic image learning program, that causes a computer, CPU ‘22’ in Figure 1, to perform the method of claim 12. Furthermore, see the discussion of claim 12 above).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Usuda; Toshihiro et al. (US 20200285876 A1; hereinafter simply referred to as Usuda) in view of Takemura; Tomoaki et al. (US 20150051617 A1; hereinafter simply referred to as Takemura) and further in view of Komukai; Makito et al. (JP 2012120702 A; hereinafter simply referred to as Komukai).

Regarding dependent claim 4, Usuda in view of Takemura teaches: Correct brightness of each of the at least one treatment instrument within the foreground image (See Usuda ¶ 62, 64, and 88, wherein the brightness of the treatment instrument/tool is adjusted via the use of color conversion performed on the foreground image, ‘30A’ in Figure 7, necessarily based on the distance information as the foreground image is acquired based on the distance information, wherein the distance information is taught by Takemura ¶ 93 and 104 as explained above). Usuda does not explicitly disclose the correction of the brightness being based on spatial distribution of the luminance of illumination light of the endoscope.
However, Komukai teaches the brightness correction being based on spatial distribution of the luminance of illumination light of the endoscope (See ¶ 47, 50, wherein a close-up observation mode is selected, correcting the brightness of the observation image based on the spatial distribution of the luminance of illumination light of the endoscope, represented by the close-up observation mode being selected when the endoscope light is not fully illuminating the observation field).

As taught by Komukai, the correction of the brightness being based on spatial distribution of the luminance of illumination light of the endoscope allows an observation image that is suitable for diagnosis to be obtained (See ¶ 50, wherein an observation image suitable for diagnosis is obtained due to the brightness correction being based on spatial distribution of the luminance of illumination light of the endoscope). As both the teachings of Usuda in view of Takemura and Komukai deal with the technical field of image processing using endoscopes, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Usuda in view of Takemura with Komukai to teach the correction of the brightness being based on spatial distribution of the luminance of illumination light of the endoscope, in order to obtain an observation image that is suitable for diagnosis.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Usuda; Toshihiro et al. (US 20200285876 A1; hereinafter simply referred to as Usuda) in view of Takemura; Tomoaki et al. (US 20150051617 A1; hereinafter simply referred to as Takemura) and further in view of Jin; Yueming et al. ("Incorporating Temporal Prior from Motion Flow for Instrument Segmentation in Minimally Invasive Surgery Video"; hereinafter simply referred to as Jin).
Regarding dependent claim 8, Usuda in view of Takemura does not explicitly disclose: Generate a mask image obtained by extracting only a region of the at least one treatment instrument within the foreground image; and annotate the region of the at least one treatment instrument within the training image based on the mask image.

However, Jin teaches generating a mask image obtained by extracting only a region of the at least one treatment instrument within the foreground image, and annotating the region of the at least one treatment instrument within the training image based on the mask image (See Jin Page 2 Paragraph 3, Page 3 Final Paragraph, and Page 5 Paragraphs 2-4, wherein a segmentation mask is used to obtain only the instrument (treatment instrument) within the foreground image (prior image) and is used to annotate the region of the instrument in the training image (current image)). As taught by Jin, generating a mask image obtained by extracting only a region of the at least one treatment instrument within the foreground image, and annotating the region of the at least one treatment instrument within the training image based on the mask image, allows for a less time-consuming method of image segmentation that requires fewer annotations (See Page 5 Paragraph 2, wherein the proposed semi-supervised learning method allows for a less time-consuming and laborious medical instrument segmentation method).
As both the teachings of Usuda in view of Takemura and Jin deal with the technical field of image processing of medical/treatment instruments, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Usuda in view of Takemura with Jin to teach generating a mask image obtained by extracting only a region of the at least one treatment instrument within the foreground image, and annotating the region of the at least one treatment instrument within the training image based on the mask image, in order for a less time-consuming and laborious medical instrument segmentation method to be conducted.

Allowable Subject Matter

Claims 7 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indications of allowable subject matter:

Regarding claim 7, the reason for the allowable subject matter is that the prior art fails to teach or reasonably suggest the limitations of claim 6 further comprising wherein a plurality of training images includes the training image, the processor is configured to: generate the plurality of training images that differ from each other in brightness; and cause the learning-use model to learn the plurality of training images to generate a plurality of learning models that correspond to different brightness of the endoscopic image.
Regarding claim 10, the reason for the allowable subject matter is that the prior art fails to teach or reasonably suggest the limitations of claim 9 further comprising correct at least one of hue, saturation, or a rotation angle of the endoscopic image based on the training image used for formation of the learning model; and input the endoscopic image, which is corrected, to the learning model to recognize the treatment instrument within the endoscopic image.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEJANDRO HERNANDEZ whose telephone number is (703) 756-1876. The examiner can normally be reached M-F 8 am - 5 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEJANDRO HERNANDEZ/
Examiner, Art Unit

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Mar 25, 2024
Application Filed
Mar 31, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597147
IMAGE CORRELATION PROCESSING BY ADDITION OF REAGGREGATION
2y 5m to grant • Granted Apr 07, 2026

Patent 12573013
REGION-OF-INTEREST (ROI)-BASED IMAGE ENHANCEMENT USING A RESIDUAL NETWORK
2y 5m to grant • Granted Mar 10, 2026

Patent 12573169
COMMON VIEW REGION IDENTIFICATION AND SCALE ALIGNMENT FOR FEATURE MATCHING IN IMAGE PAIRS
2y 5m to grant • Granted Mar 10, 2026

Patent 12567268
AUTOMATED NANOSCOPY SYSTEM HAVING INTEGRATED ARTIFACT MINIMIZATION MODULES, INCLUDING EMBEDDED NANOMETER POSITION TRACKING BASED ON PHASOR ANALYSIS
2y 5m to grant • Granted Mar 03, 2026

Patent 12555389
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR ESTIMATING ROAD EDGE
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview (+29.7%): 99%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 37 resolved cases by this examiner. Grant probability derived from career allow rate.
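The headline figures above follow directly from the examiner's career record: 28 allowances out of 37 resolved cases, plus 18 pending. A minimal sketch of that arithmetic (variable names are illustrative, not from the dashboard; note that the +29.7% interview lift is reported against the no-interview baseline, so the 99% with-interview figure is not simply 76% plus 29.7 points):

```python
# Figures taken from the dashboard above.
granted, resolved, pending = 28, 37, 18

# Career allow rate, used directly as the grant probability.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 75.7%, reported as 76%

# Resolved plus currently pending matches the career-history total.
total_applications = resolved + pending
print(f"Total applications: {total_applications}")  # 55
```

This is only a consistency check on the reported numbers; the underlying interview-lift and statute-specific breakdowns depend on per-case data not shown here.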
