Prosecution Insights
Last updated: April 19, 2026
Application No. 18/385,532

IMAGE PROCESSING APPARATUS, ENDOSCOPE APPARATUS, AND IMAGE PROCESSING METHOD

Status: Final Rejection (§103)
Filed: Oct 31, 2023
Examiner: LE, JOHNNY TRAN
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Olympus Medical Systems Corp.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 67% (2 granted / 3 resolved; +4.7% vs TC avg). This examiner grants above average.
Interview Lift: -66.7% (0% grant rate with an interview vs. 67% without, among resolved cases with an interview)
Typical Timeline: 2y 9m average prosecution
Career History: 35 total applications across all art units, 32 currently pending
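The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be reproduced from the counts shown above (2 granted of 3 resolved; 0% grant rate among resolved cases with an interview); the function names are illustrative, not part of any real analytics API:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Change in grant rate (as a fraction) for resolved cases
    where an examiner interview was held."""
    return rate_with - rate_without

career = allow_rate(2, 3)            # 2 granted / 3 resolved
lift = interview_lift(0.0, career)   # 0% with interview vs ~67% without

print(f"Career allow rate: {career:.1%}")  # 66.7%
print(f"Interview lift: {lift:+.1%}")      # -66.7%
```

With only 3 resolved cases (and fewer with an interview), these ratios carry very wide error bars; the dashboard's own caveat below makes the same point.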

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Allowance rates by rejection statute. Tech Center average values are estimates; based on career data from 3 resolved cases.
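The "vs TC avg" figures are plain differences between the examiner's statute-specific rate and a Tech Center baseline. Back-solving from the numbers above, all four deltas are consistent with a single estimated TC average of 40.0%; that baseline is an inference from this table, not a published figure. A hedged sketch:

```python
# Examiner's allowance rate after each rejection type, from the table above.
examiner = {"§101": 6.1, "§103": 65.9, "§102": 16.7, "§112": 8.3}

# Assumed Tech Center baseline, back-solved from the displayed deltas.
TC_AVG = 40.0

for statute, rate in examiner.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```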

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

1. This action is in response to the amendment filed on 11/07/2025. Claims 1-5 and 8-12 were amended. The specification was also amended to overcome an objection. Claims 1-12 remain rejected.

Response to Arguments

2. Applicant’s arguments filed on 11/07/2025 with respect to claims 1, 11, and 12 assert that the amendments overcome the 35 U.S.C. § 103 rejection because the previous prior art does not disclose, among other limitations, “detect a position and a posture of the image sensor using the examination images acquired from the image sensor at the fixed cycle; based on the detected position and posture of the image sensor, obtain a position and a posture of a distal end portion of the endoscope;”. These arguments have been considered but are moot in view of the extension of the previous rejection set forth below.

3. Regarding the arguments for claims 2-10: these claims depend directly or indirectly on independent claim 1, and Applicant does not argue anything beyond independent claims 1, 11, and 12. The limitations of those claims, as addressed by the combination, were for the most part previously established as explained, with a few minor changes.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-8 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Akimoto et al. (US 20160000307 A1) in view of Sakamoto et al. (US 20200301126 A1).

7. Regarding claim 1, Akimoto teaches an image processing apparatus, comprising a processor ([0044] reciting “FIG. 1 is a configuration diagram showing a configuration of an endoscope system according to the present embodiment. FIG. 2 is a block diagram showing a configuration of an endoscope system 1. The endoscope system 1 includes an endoscope 2, a recording apparatus 3, a light source apparatus 4, a processor 5, a monitor 6, and a magnetic-field generating apparatus”), wherein the processor comprises hardware ([0059] reciting “Further, the processor 5 is also connected to the monitor 6. The monitor 6 has a PinP (picture in picture) function and can display, together with the organ model image pasted with the endoscopic image by the CPU 21, a live endoscopic image obtained by picking up an image with the image pickup device 11 of the endoscope 2.”), the processor being configured to:

acquire examination images from an image sensor of an endoscope at a fixed cycle during observation of an inside of a subject ([Abstract] reciting “An endoscope system includes an image pickup device that is inserted into a subject and picks up an image inside the subject and a memory that records an intra-subject image acquired by the image pickup device and position information of the image pickup device in association with each other.”; [0005] reciting “The surgeon can perform an endoscopic examination viewing the displayed endoscopic image.”; [0060] reciting “The image capturing section 24 is a processing section that captures, at a fixed cycle, an image obtained in the processor 5.”; [0198] reciting “The operation signal of the changeover switch 5a is supplied from the processor 5 to the CPU 21 via the image capturing section 24. Such a changeover signal for the observation mode in picking up an image of an inside of a subject configures the predetermined trigger signal.”);

generate an organ model from the examination images acquired from the image sensor at the fixed cycle ([0089] reciting “That is, the processing in S7 configures a part of an image generating section that generates, as an intra-subject pasted image, an image obtained by pasting an intra-subject image on a model image of a predetermined organ in which the position of the objective optical window 11a and the position in the coordinate system of the 3D model image are associated by S1 to S3 that configure the aligning section.”; [0060] reciting “The image capturing section 24 is a processing section that captures, at a fixed cycle, an image obtained in the processor 5.”; [0198] reciting “The operation signal of the changeover switch 5a is supplied from the processor 5 to the CPU 21 via the image capturing section 24. Such a changeover signal for the observation mode in picking up an image of an inside of a subject configures the predetermined trigger signal.”);

detect a position of the image sensor using the examination images acquired from the image sensor at the fixed cycle ([0061] reciting “The position/direction detecting section 25 controls the driving circuit 26, which drives the magnetic-field generating apparatus 7, causes the magnetic-field generating apparatus 7 to generate a predetermined magnetic field…”; [0060] reciting “The image capturing section 24 is a processing section that captures, at a fixed cycle, an image obtained in the processor 5.”; [0198] reciting “The operation signal of the changeover switch 5a is supplied from the processor 5 to the CPU 21 via the image capturing section 24. Such a changeover signal for the observation mode in picking up an image of an inside of a subject configures the predetermined trigger signal.”);

based on the detected position of the image sensor, obtain a position of a distal end portion of the endoscope ([0063] reciting “Further, the CPU 21 has a stereo measurement function and has a function of measuring, from two frame images obtained by performing image pickup, distances to respective parts of a target region in the frame images. More specifically, the CPU 21 can acquire image pickup position information of the objective optical window 11a on the basis of position/direction information from the position/direction detecting section 25 at a time when the two frame images are picked up and calculate…”; [0092] reciting “Therefore, the CPU 21 derives position/direction information of the distal end portion 2d in a two-dimensional coordinate system from the reference information obtained in S3 and calculates…”);

estimate a top, bottom and an azimuth of an image pickup visual field of the endoscope with respect to the organ model ([0077] reciting “As shown in FIG. 5, the image pickup device 11 provided at the distal end portion 2d of the insertion section 2b picks up an endoscopic image at an angular field of view θ inside the bladder B.”; [0079] reciting “The 3D bladder model M1 is formed with an X₂ axis set in an axis extending in a direction from a right wall to a left wall passing a center O of a sphere, a Y₂ axis set in an axis extending in a direction from a neck to a top passing the center O of the sphere, and a Z₂ axis set in an axis in a direction from a rear wall to a front wall passing the center O of the sphere.”);

set a display direction of the organ model ([0106] reciting “The 2D-model-image display section 31 shown in FIG. 9 displays an image at a time when an endoscopic image picked up first when the distal end portion 2d enters inside the bladder B and faces a top direction is pasted onto the 2D model image 31a.”) based on the top, the bottom, and the azimuth of the image pickup visual field ([0077] reciting “As shown in FIG. 5, the image pickup device 11 provided at the distal end portion 2d of the insertion section 2b picks up an endoscopic image at an angular field of view θ inside the bladder B.”; [0079] reciting “The 3D bladder model M1 is formed with an X₂ axis set in an axis extending in a direction from a right wall to a left wall passing a center O of a sphere, a Y₂ axis set in an axis extending in a direction from a neck to a top passing the center O of the sphere, and a Z₂ axis set in an axis in a direction from a rear wall to a front wall passing the center O of the sphere.”);

and output the organ model to a monitor ([0049] reciting “The generated endoscopic image is outputted from the processor 5 to the monitor 6. A live endoscopic image is displayed on the monitor 6.”).

8. Akimoto does not explicitly teach: identify an unobserved region that is not observed by the endoscope in the organ model; detect a position and a posture of the image sensor using the examination images acquired from the image sensor at the fixed cycle; based on the detected position and posture of the image sensor, obtain a position and a posture of a distal end portion of the endoscope; … and output the organ model to a monitor, the organ model being associated with the unobserved region identified.

9. Sakamoto teaches to identify an unobserved region that is not observed by the endoscope in the organ model; detect a position and a posture of the image sensor using the examination images acquired from the image sensor at the fixed cycle; based on the detected position and posture of the image sensor, obtain a position and a posture of a distal end portion of the endoscope; … and output the organ model to a monitor, the organ model being associated with the unobserved region identified ([Abstract] reciting “The position is identified on the basis of a position of an imaging device and a posture of the imaging device.”; [0188] reciting “When it is determined that a region of interest or a general region has not been observed, the endoscope device 1 notifies a user of a cause of the determination that the region has not been observed (first function).”; [0189] reciting “When it is determined that a position on a 3D model has not been observed and the position is behind a visual field of the imaging device 28, the endoscope device 1 notifies a user that there is a region of interest or a general region that has not been observed (second function).”).

10. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Akimoto to incorporate the teachings of Sakamoto, to provide a method that can determine a posture alongside a position, as well as an unobserved region, for a 3D model observed with an endoscope; that 3D model in this case can be the organ model taught by Akimoto, who also teaches an endoscope that can move in any direction (left, right, up, or toward the top) and can operate at a fixed cycle. Doing so would allow the endoscope to automatically record an inspection image, as stated by Sakamoto (see [0189]).

11. Regarding claim 2, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to match a top and bottom direction of the organ model with a top and bottom direction of an observation image of the endoscope (Akimoto; [0015] reciting “FIG. 3 is a flowchart for explaining an example of a flow of processing for pasting an endoscopic image to a bladder model image during observation inside a bladder according to the first embodiment of the present invention;”; [0073] reciting “An examination of the bladder B is performed in a state in which the patient lies on his or her back and a state in which the bladder B is filled with predetermined liquid (e.g., saline). For example, if the patient is an adult, there is no large difference in a size of the bladder B if any. The bladder B can be modeled in a spherical shape having a substantially same size.”).

12. Regarding claim 3, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to perform viewpoint direction control that rotates the organ model that is currently displayed in matching with a viewpoint of the endoscope (Akimoto; [0121] reciting “When vectors in a position and a direction of the position/direction detecting section 25 at a time when the insertion of the distal end portion 2d of the endoscope into the bladder B is detected are represented as P′θ and V′θ, the translation matrix M₀₁ is calculated by the following Equation (6).”; [0122] reciting “The rotation matrix R₀₁ is calculated to satisfy conditions explained below. FIG. 12 is a diagram for explaining direction vectors projected on the intermediate coordinate system (X₁Y₁Z₁). The conditions satisfied by the rotation matrix R₀₁ are that Z₁ is parallel to a gravity direction and V′θ is projected on an X₁Y₁ plane perpendicular to Z₁, the projected vector direction is represented as Y₁, and a vector perpendicular to a Y₁Z₁ plane is represented as X₁.”).

13. Regarding claim 4, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to display a photographing region on the organ model (Akimoto; [0104] reciting “The 2D-model-image display section 31 is a region where the 2D model image corresponding to the 2D model shown in FIG. 7 is displayed.”; [0105] reciting “The 3D-model-image display section 32 is a region where the 3D model image corresponding to the 3D model shown in FIG. 6 is displayed.”; [0112] reciting “As shown in FIG. 10, the plurality of endoscopic images 31b are included in the 2D-model-image display section 31. A region where the plurality of endoscopic images are pasted is a region observed by the examiner. Therefore, the examiner can easily distinguish the region observed by the endoscope simply by glancing at the image shown in FIG. 10.”; [0200] reciting “However, the endoscope system 1 may have a PDD mode and the like for a photodynamic diagnostic method (PDD).”).

14. Regarding claim 5, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to display information on positional relation between the unobserved region and the endoscope (Sakamoto; [0053] reciting “The endoscope device 1 shown in FIG. 1 includes the insertion unit 2, a main body unit 3, an operation unit 4, and a display unit 5.”; [0101] reciting “The index A represents the distance from a subject to the tip end of the endoscope.”; [0194] The display control unit 182 may display information that represents a specific method of changing an imaging condition such that it is determined that the position on the 3D model has been observed on the display unit 5. For example, when the object distance is large and it is determined that the position on the 3D model has not been observed, the display control unit 182 may display a message “The present object distance is OO mm. Please perform observation such that the object distance becomes XX mm.” on the display unit 5. When the type of an optical adaptor is the cause of the determination that the position on the 3D model has not been observed, the display control unit 182 may display a message “Please change the type of the optical adaptor from direct view to side view.” on the display unit 5).

15. Regarding claim 6, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 5 (see claims 1 and 5 rejections above), wherein the information on positional relation is information indicating a route from a distal end position of the endoscope to the unobserved region (Sakamoto; [0053] reciting “The endoscope device 1 shown in FIG. 1 includes the insertion unit 2, a main body unit 3, an operation unit 4, and a display unit 5.”; [0101] reciting “The index A represents the distance from a subject to the tip end of the endoscope.”; [0194] The display control unit 182 may display information that represents a specific method of changing an imaging condition such that it is determined that the position on the 3D model has been observed on the display unit 5. For example, when the object distance is large and it is determined that the position on the 3D model has not been observed, the display control unit 182 may display a message “The present object distance is OO mm. Please perform observation such that the object distance becomes XX mm.” on the display unit 5. When the type of an optical adaptor is the cause of the determination that the position on the 3D model has not been observed, the display control unit 182 may display a message “Please change the type of the optical adaptor from direct view to side view.” on the display unit 5).

16. Regarding claim 7, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 5 (see claims 1 and 5 rejections above), wherein the information on positional relation is information indicating a distance from a distal end position of the endoscope to the unobserved region (Sakamoto; [0053] reciting “The endoscope device 1 shown in FIG. 1 includes the insertion unit 2, a main body unit 3, an operation unit 4, and a display unit 5.”; [0101] reciting “The index A represents the distance from a subject to the tip end of the endoscope.”; [0194] The display control unit 182 may display information that represents a specific method of changing an imaging condition such that it is determined that the position on the 3D model has been observed on the display unit 5. For example, when the object distance is large and it is determined that the position on the 3D model has not been observed, the display control unit 182 may display a message “The present object distance is OO mm. Please perform observation such that the object distance becomes XX mm.” on the display unit 5. When the type of an optical adaptor is the cause of the determination that the position on the 3D model has not been observed, the display control unit 182 may display a message “Please change the type of the optical adaptor from direct view to side view.” on the display unit 5).

17. Regarding claim 8, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to output, to a notification unit, endoscope operation information for a user to move the endoscope from a current position to the unobserved region (Sakamoto; [0099] reciting “In the first embodiment, a user designates a region of interest on a 3D model while the 3D model is displayed on the display unit 45. A user does not need to directly designate a region on a 3D model as a region of interest. A user may designate a region on a two-dimensional subject seen in an inspection moving image as a region of interest. Specifically, in Step S104, an inspection moving image is displayed on the display unit 45 instead of a 3D model. A user determines a region to be designated as a region of interest in a two-dimensional subject and inputs information that represents the region to the operation unit 44.”; [0142] reciting “When the determination unit 186 determines that the position identified in Step S206 has been observed, Step S213 is executed. When the determination unit 186 determines that the position identified in Step S206 has not been observed, the determination unit 186 reads an index and a threshold value at the position from the RAM 14 (Step S208). For example, when the position PO13 is identified in Step S206, the determination unit 186 reads an index and a threshold value that has been set to the position PO13 from the RAM 14. The position PO13 is included in the region R31 of interest corresponding to the region R1 of interest in FIG. 6.”).

18. Regarding claim 10, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor is configured to display the number of or area of the unobserved region or unobserved regions (Sakamoto; [0152] reciting “In this way, the display unit 5 notifies a user that the position identified in Step S206 has not been observed (Step S215).”; [0153] reciting “As long as a user can distinguish between a region that has been observed and a region that has not been observed, any display method may be used.”).

19. Claims 11 and 12 have similar limitations to claim 1 and are therefore rejected under the same rationale as claim 1.

20. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Akimoto et al. (US 20160000307 A1) in view of Sakamoto et al. (US 20200301126 A1) as applied to claim 1, further in view of Honda et al. (US 20040225223 A1).

21. Regarding claim 9, Akimoto in view of Sakamoto teaches the image processing apparatus according to claim 1 (see claim 1 rejection above), but does not explicitly teach wherein the processor displays a name of an organ where the unobserved region is located.

22. Honda teaches wherein the processor displays a name of an organ where the unobserved region is located ([0059] reciting “The work station 5 has a processing function for performing a diagnosis based on images of organs or the like in a patient, taken by the capsule endoscope 10 by a doctor or a nurse.”; [0137] reciting “Therefore, the presence of images checked by a doctor are identified in the esophagus, the stomach, and the small intestine from the example in FIG. 21, and marks are displayed in association with the times at which the individual checked images have been taken, so that the doctor can easily confirm at which parts of the organs the checked images have been taken.”).

23. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Akimoto in view of Sakamoto) to incorporate the teachings of Honda to provide a method that can display the name of an organ in a certain region, and can utilize the unobserved regions taught by Akimoto in view of Sakamoto. Doing so would allow doctors to easily confirm at which parts of the organs the checked images have been taken, as stated by Honda (see [0137]).

Conclusion

24. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

25. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHNNY T LE/ Examiner, Art Unit 2614
/KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Oct 31, 2023: Application Filed
Aug 21, 2025: Non-Final Rejection (§103)
Nov 07, 2025: Response Filed
Jan 23, 2026: Final Rejection (§103), current


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
Grant Probability With Interview: 0% (-66.7%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 3 resolved cases by this examiner. Grant probability is derived from the career allow rate.
