Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,279

LAPAROSCOPIC IMAGE MANIPULATION METHOD AND SYSTEM AND COMPUTER PROGRAM

Final Rejection §103
Filed: Jul 15, 2024
Examiner: WELCH, DAVID T
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Olympus Winter & Ibe GmbH
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 247 granted / 303 resolved; +19.5% vs TC avg)
Interview Lift: +27.2% (strong; among resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline; 29 currently pending)
Total Applications: 332 (career history, across all art units)
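The headline figures above reduce to simple arithmetic over the examiner's resolved-case counts. A minimal sketch, assuming the Tech Center average is back-computed from the displayed +19.5% delta (the ~62.0% figure is inferred, not stated on this page):

```python
# Career allowance rate as reported above: granted / resolved.
granted, resolved = 247, 303

allow_rate = granted / resolved * 100           # prints 81.5% below
print(f"Career allow rate: {allow_rate:.1f}%")  # displayed rounded to 82%

# Delta vs. the Tech Center average. The +19.5% shown above implies
# a TC average of about 62.0% -- an assumption back-solved from the
# displayed delta, not a figure stated anywhere on this page.
tc_avg = 62.0
print(f"vs TC avg: {allow_rate - tc_avg:+.1f}%")
```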

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
TC average is an estimate • Based on career data from 303 resolved cases
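Each per-statute delta is just the examiner's rate minus the Tech Center estimate; back-solving any row above yields the same estimate of 40.0%. A small sketch (the 40.0% TC average is an assumption inferred from the displayed deltas, not stated directly):

```python
# Examiner's statute-specific rates from the panel above (percent).
examiner = {"101": 11.6, "103": 47.4, "102": 20.6, "112": 12.2}

tc_avg = 40.0  # assumed: every displayed delta back-solves to this estimate

for statute, rate in examiner.items():
    # e.g. §103: 47.4% (+7.4% vs TC avg)
    print(f"\u00a7{statute}: {rate:.1f}% ({rate - tc_avg:+.1f}% vs TC avg)")
```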

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 9 is objected to because of a minor informality: in the second and third lines, claim 9 recites “the at least one process” which should be amended to read --the at least one processor--.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. (U.S. Patent Application Publication No. 2022/0366649), referred to herein as Zang, in view of Popovic et al. (U.S. Patent Application Publication No. 2017/0007350), referred to herein as Popovic.
Regarding claim 1, Zang teaches a laparoscopic image manipulation method (figs 2 and 3), the method comprising: capturing a video stream of laparoscopic images of a patient using a laparoscope inserted into the patient during a laparoscopic procedure (paragraph 19, lines 1-7; paragraph 27; paragraph 39, lines 12-21; a laparoscopic video stream is captured of a patient during a procedure), feeding the captured laparoscopic images to a video processor configured to adding additional information as overlay over the captured laparoscopic images (paragraph 19, lines 1-7; paragraph 27; paragraph 28, lines 10-14; paragraph 32, lines 1-6; paragraph 39, lines 12-21; additional information is added to overlay the laparoscopic video stream), producing a composite image by rendering a representation of a 3D model of a target organ or structure at an arbitrary scale factor compared to the target organ or structure using a renderer and merging the rendered representation of the 3D model with the captured laparoscopic image to display the 3D model as overlaying organs visible in the laparoscopic image, and displaying the composite image on a monitor (paragraph 19, lines 1-7; paragraphs 27 and 31; paragraph 32, lines 1-6; paragraph 33, lines 1-11; paragraph 39, lines 12-37; a 3D model of a target organ/structure is rendered at an arbitrary scale and merged with the laparoscopic image to produce and display a composite image).

Zang does not explicitly teach rendering the 3D model at a different scale than the target organ or structure, and does not explicitly teach displaying the 3D model as hovering above organs visible in the image.
However, in a similar field of endeavor, Popovic teaches a method for capturing laparoscopic images of a patient, feeding those images to a processor to add additional information as overlay over the images, producing a composite image by rendering a representation of a 3D model of a target organ or structure, and displaying the composite image (figs 2, 5-8, and 10; paragraphs 33 and 34; paragraph 36; paragraph 37, lines 1-6; paragraph 59), and further comprising rendering the 3D model at a different scale than the target organ or structure, and displaying the 3D model as hovering above organs visible in the image (figs 5, 7, and 8; paragraph 41, lines 1-20; paragraph 44, lines 1-15; paragraphs 45, 48, and 50; paragraph 61, lines 1-4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the composite imaging with differently-scaled 3D model hovering above the target, as taught by Popovic, with the composite image of Zang because this helps to reveal portions of the organ structure that may not be visible in the image, and conveys important location and depth information about such structures, thereby increasing procedural safety and situational awareness of the user, which is particularly important in endoscopic and laparoscopic procedures (see, for example Popovic, paragraphs 24-26; paragraph 44, lines 1-10; paragraphs 52 and 59).

Regarding claim 2, Zang in view of Popovic teaches the laparoscopic image manipulation method of claim 1, wherein at least one of an orientation, a size and a location of the rendering of the 3D model inside the composite image is controlled by a manual controller connected to the renderer (Zang, paragraph 28, lines 1-5; paragraph 33, lines 1-4 and the last 8 lines; paragraph 35, the last 10 lines; paragraph 39, lines 22-37; paragraph 52, the last 12 lines).
Regarding claim 3, Zang in view of Popovic teaches the laparoscopic image manipulation method of claim 1, wherein information of each individual frame of the video stream of laparoscopic images are input to the renderer, the renderer is configured to render the representation of the 3D model according to the input information of the individual frames (Zang, paragraph 19, lines 1-7; paragraph 27, lines 1-5 and the last 3 lines; paragraph 33, lines 1-15; paragraph 37, lines 1-9; paragraph 39, lines 12-21).

Regarding claim 5, Zang in view of Popovic teaches the laparoscopic image manipulation method of claim 1, wherein the 3D model of the target organ or structure is derived from one or more of prior CT and MRI scan data of the patient (Zang, paragraph 19, lines 1-7; paragraph 25, lines 1-3; paragraph 32, lines 4-8).

Regarding claim 6, Zang teaches a laparoscopic image manipulation system (fig 1) comprising: a laparoscope (paragraph 27), at least one processor comprising hardware (paragraph 28, lines 1-5 and 10-14), and a monitor (paragraph 31), wherein the laparoscope is configured to: capture a video stream of laparoscopic images of a patient (paragraph 19, lines 1-7; paragraph 27; paragraph 39, lines 12-21); and feed the captured laparoscopic images to the at least one processor, the at least one processor is configured to: run a renderer to render a representation of a 3D model of a target object at an arbitrary scale factor compared to the target organ or structure (paragraph 19, lines 1-7; paragraph 27; paragraph 28, lines 10-14; paragraph 32, lines 1-6; paragraph 33, lines 1-11; paragraph 39, lines 12-37), and produce a composite image by merging the rendered representation of the 3D model with the captured laparoscopic image to display the 3D model as overlaying organs visible in the laparoscopic image, and the monitor is configured to display the composite image (paragraph 19, lines 1-7; paragraphs 27 and 31; paragraph 32, lines 1-6; paragraph 33, lines 1-11; paragraph 39, lines 12-21).

Zang does not explicitly teach rendering the 3D model at a different scale than the target organ or structure, and does not explicitly teach displaying the 3D model as hovering above organs visible in the image. However, in a similar field of endeavor, Popovic teaches a method for capturing laparoscopic images of a patient, feeding those images to a processor to add additional information as overlay over the images, producing a composite image by rendering a representation of a 3D model of a target organ or structure, and displaying the composite image (figs 2, 5-8, and 10; paragraphs 33 and 34; paragraph 36; paragraph 37, lines 1-6; paragraph 59), and further comprising rendering the 3D model at a different scale than the target organ or structure, and displaying the 3D model as hovering above organs visible in the image (figs 5, 7, and 8; paragraph 41, lines 1-20; paragraph 44, lines 1-15; paragraphs 45, 48, and 50; paragraph 61, lines 1-4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the composite imaging with differently-scaled 3D model hovering above the target, as taught by Popovic, with the composite image of Zang because this helps to reveal portions of the organ structure that may not be visible in the image, and conveys important location and depth information about such structures, thereby increasing procedural safety and situational awareness of the user, which is particularly important in endoscopic and laparoscopic procedures (see, for example Popovic, paragraphs 24-26; paragraph 44, lines 1-10; paragraphs 52 and 59).
Regarding claim 7, Zang in view of Popovic teaches the laparoscopic image manipulation system of claim 6, further comprising a manual controller having a data link to the at least one processor (Zang, paragraph 28, lines 1-25; paragraph 30), the at least one processor is further configured to change at least one of an orientation, a size and a location of the rendering of the 3D model of the target organ or structure inside the composite image in response to signals from the manual controller (Zang, paragraph 28, lines 1-5; paragraph 33, lines 1-4 and the last 8 lines; paragraph 35, the last 10 lines; paragraph 39, lines 22-37; paragraph 52, the last 12 lines).

Regarding claim 8, Zang in view of Popovic teaches the laparoscopic image manipulation system of claim 6, further comprising a frame grabber configured to capture the laparoscopic video stream frame-by-frame and to produce one or more composite images by merging of the rendered representation of the 3D model with the captured laparoscopic images frame-by-frame (Zang, paragraph 19, lines 1-7; paragraph 27, lines 1-5 and the last 3 lines; paragraph 33, lines 1-15; paragraph 37, lines 1-9; paragraph 39, lines 12-21).

Regarding claim 9, Zang in view of Popovic teaches the laparoscopic image manipulation system of claim 6, wherein the at least one processor is further configured to: capture the video stream of the laparoscopic images of the patient using the laparoscope inserted into the patient during a laparoscopic procedure (Zang, paragraph 19, lines 1-7; paragraph 27; paragraph 39, lines 12-21), and feed the captured laparoscopic images to the at least one processor to merge the rendered representation of the 3D model with the captured laparoscopic images (Zang, paragraph 19, lines 1-7; paragraph 27; paragraph 28, lines 10-14; paragraph 32, lines 1-6; paragraph 39, lines 12-21).
Regarding claim 10, the limitations of this claim substantially correspond to the limitations of claim 1 (except for the computer-readable medium, which is disclosed by Zang, paragraph 29); thus they are rejected on similar grounds.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zang, in view of Popovic, and further in view of Zhao et al. (U.S. Patent Application Publication No. 2009/0088634), referred to herein as Zhao.

Regarding claim 4, Zang in view of Popovic teaches the laparoscopic image manipulation method of claim 3, wherein the information of each individual frame comprises one or more identified features of each frame (Zang, paragraph 37, lines 1-15). Although resolution and frame rate are inherent features of any video stream (including Zang’s laparoscopic video stream), Zang in view of Popovic does not explicitly teach inputting one or more of an image resolution and a frame rate. However, in a similar field of endeavor, Zhao teaches a laparoscopic image manipulation method comprising capturing a video stream of laparoscopic images of a patient and overlaying 3D models of structures with the video to produce and display a composite image by a renderer (paragraph 32; paragraph 39, the last 7 lines; paragraph 46, the last 7 lines; paragraph 191, the last 4 lines; paragraph 194, lines 1-18), wherein information for individual frames is input to the renderer, and wherein the information comprises one or more of an image resolution and a frame rate (paragraph 176, lines 1-5; paragraph 179, lines 1-5; paragraph 180, lines 1-5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the image resolution input of Zhao with the frame processing of Zang in view of Popovic because the quality of the rendered images depends on the particular features of each frame, and taking into consideration each frame’s image resolution or frame rate has a direct impact on, and can help increase the quality of, the combined image that is ultimately produced (see, for example, Zhao, paragraph 174; paragraph 175, lines 1-8).

Response to Arguments

Applicant’s arguments with respect to the prior art rejections have been fully considered, but are moot in view of the new grounds of rejection presented above. It is agreed that Zang alone does not explicitly teach amended claim 1; but it is respectfully submitted that Zang in view of Popovic teaches these limitations, as discussed above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID T WELCH whose telephone number is (571)270-5364. The examiner can normally be reached Monday-Thursday, 8:30-5:30 EST, and alternate Fridays, 9:00-2:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID T WELCH/
Primary Examiner, Art Unit 2613
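The claim 1 pipeline the rejection turns on (capture a frame, render a 3D model at an arbitrary scale, merge the rendering over the frame, display the composite) amounts to per-frame alpha compositing. The sketch below is one illustrative reading of that claim language only; it is not the applicant's, Zang's, or Popovic's actual implementation, and every name in it is hypothetical:

```python
import numpy as np

def composite_frame(frame: np.ndarray, rendering: np.ndarray,
                    alpha: np.ndarray) -> np.ndarray:
    """Merge a rendered 3D-model view over a captured laparoscopic frame.

    frame     -- HxWx3 uint8 video frame from the laparoscope
    rendering -- HxWx3 uint8 rendering of the 3D model (any scale factor
                 is assumed to have been applied when the renderer drew it)
    alpha     -- HxW float coverage mask in [0, 1]; 0 where the model was
                 not drawn, so organs stay visible around the overlay
    """
    a = alpha[..., None]  # broadcast the mask over the color channels
    out = frame.astype(np.float32) * (1.0 - a) + rendering.astype(np.float32) * a
    return out.astype(np.uint8)

# Frame-by-frame use, as a frame grabber would drive it (cf. claim 8):
frame = np.zeros((4, 4, 3), dtype=np.uint8)          # stand-in captured frame
rendering = np.full((4, 4, 3), 200, dtype=np.uint8)  # stand-in model rendering
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 0.5                                # model covers the center
composite = composite_frame(frame, rendering, alpha)
```

Where alpha is 0 the captured frame passes through untouched; a semi-transparent mask is one way the merged model could read as "hovering above" the organs rather than occluding them.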

Prosecution Timeline

Jul 15, 2024
Application Filed
Dec 16, 2025
Non-Final Rejection — §103
Mar 10, 2026
Response Filed
Mar 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602742
IMAGE PROCESSING APPARATUS, BINARIZATION METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602842
TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS
2y 5m to grant • Granted Apr 14, 2026
Patent 12592048
System and Method for Creating Anchors in Augmented or Mixed Reality
2y 5m to grant • Granted Mar 31, 2026
Patent 12579734
METHOD FOR RENDERING VIEWPOINTS AND ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12573119
APPARATUS AND METHOD FOR GENERATING SPEECH SYNTHESIS IMAGE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+27.2%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 303 resolved cases by this examiner. Grant probability derived from career allow rate.
