Prosecution Insights
Last updated: April 19, 2026
Application No. 18/703,590

SYSTEMS AND INTERFACES FOR COMPUTER-BASED INTERNAL BODY STRUCTURE ASSESSMENT

Non-Final OA (§102, §103)
Filed: Apr 22, 2024
Examiner: JOHNSON-CALDERON, FRANK J
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Intuitive Surgical Operations, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 57% (127 granted / 222 resolved; -0.8% vs TC avg)
Interview Lift: +20.0% among resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical timeline (21 applications currently pending)
Total Applications: 243 across all art units (career history)

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 67.1% (+27.1% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 222 resolved cases

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 61, 67-68, 74-75, and 80 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shelton, IV et al. (US 20230110791, hereinafter Shelton).

Regarding claim 61, “A computer system comprising: at least one computer processor; and at least one memory, the at least one memory comprising instructions configured to cause the computer system to perform a method, the method comprising,” Shelton teaches (¶0078) a surgical visualization system, and (¶0114) a control circuit 170 with a microcontroller that includes a processor 172 (e.g., a microprocessor or microcontroller) operably coupled to a memory 174. The processor 172 includes an instruction processing unit 176 and an arithmetic unit 178. The instruction processing unit 176 is configured to receive instructions from the memory 174.

As to “causing a first graphical user interface element to be presented upon at least one display, the first graphical user interface element depicting a field of view of a surgical instrument within a patient interior body structure during a surgical procedure,” Shelton teaches (¶0078) a surgical visualization system configured to leverage “digital surgery” to obtain additional information about a patient’s anatomy and/or a surgical procedure, and (¶0091) that the surgical visualization system 100 includes an imaging system with the imaging device 120 configured to provide real-time views of the surgical site. The imaging device 120 can include, for example, a spectral camera (e.g., a hyperspectral camera, multispectral camera, or selective spectral camera), which is configured to detect reflected spectral waveforms and generate a spectral cube of images based on the molecular response to the different wavelengths. Views from the imaging device 120 can be provided in real time to a medical practitioner, such as on a display (e.g., a monitor, a computer tablet screen, etc.).
As to “and causing a second graphical user interface element to be presented upon at least one display, the second graphical user interface element depicting a computer-generated three-dimensional model of at least a portion of the patient interior body structure, the three-dimensional model generated, at least in part, based upon images acquired with the surgical instrument,” Shelton teaches (¶0097) that the imaging device 120 can include a right-side lens and a left-side lens used together to record two two-dimensional images at the same time and, thus, generate a three-dimensional (3D) image of the surgical site, render a three-dimensional image of the surgical site, and/or determine one or more distances at the surgical site; (¶0095, ¶0128, ¶0221) generating a 3D model; and (¶0130) that the video monitors 652 are configured to output the integrated/augmented views from the image overlay controller 610. A medical practitioner can select and/or toggle between different views on one or more displays. On a first display 652a, which is a monitor in this illustrated embodiment, the medical practitioner can toggle between views, including (A) a view in which a three-dimensional rendering of the visible tissue is depicted.
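The two-lens arrangement Shelton is cited for recovers distance by triangulating the disparity between simultaneous left and right images. As a minimal illustration (not code from any cited reference), a stereo-disparity sketch in Python with OpenCV; the file names, focal length, and baseline are hypothetical placeholders:

```python
# Minimal sketch: depth from a stereo endoscope pair, in the spirit of
# Shelton's right/left-lens arrangement (¶0097). Inputs and calibration
# values are hypothetical placeholders.
import cv2
import numpy as np

left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Block-matching disparity: nearer surfaces shift more between the views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# Depth from disparity: Z = f * B / d, with focal length f (px) and baseline B (mm).
f_px, baseline_mm = 700.0, 4.0  # assumed calibration, not from the references
with np.errstate(divide="ignore"):
    depth_mm = np.where(disparity > 0, f_px * baseline_mm / disparity, 0.0)
```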
Regarding claim 67, “The computer system of Claim 61, wherein the method further comprises: detecting at least one landmark in a visual image associated with a portion of the computer-generated three-dimensional model; determining an alignment of the computer-generated three-dimensional model with a synthetic model, using the at least one landmark; and presenting the computer-generated three-dimensional model in the second graphical user interface element in accordance with the determined alignment,” Shelton teaches (¶0272) that, based on the determined relative orientations and the transmitted image data (e.g., of the first scene, the second scene, or both), the merged image can illustrate not only the locations, but also the orientations of one or more of the endoscope 3102, the laparoscope 3104, the first surgical instrument 3114, the second surgical instrument 3118, and the tumor 3040. As discussed above, this provides the means to create a completely generated 3D model of the instrument that can be overlaid into the image of the system which cannot see the alternative view. Since the representative depiction is a generated image, various properties of the image (e.g., transparency, color) can also be manipulated so that the instrument is clearly shown as not within the real-time visualization video feed, but as a construct from the other view. If the user were to switch between imaging systems, the opposite view could also have the constructed instruments within its field of view. In some embodiments, there is another way to generate these overlays: the obstructed image could isolate the instruments in its stream from the surrounding anatomy, invert and align the image to the known common axis point, and then overlay a live image of the obstructed view into the non-obstructed view camera display feed. Like the other representative depiction above, the alternative overlay could be shaded, semi-transparent, or otherwise modified to ensure the user can tell the directly imaged view from the overlaid view in order to reduce confusion. This could be done with key aspects of the anatomy as well (e.g., the tumor that can be seen by one camera but not the other). The system could utilize the common reference between the cameras to display the landmark, point of interest, or key surgical anatomy aspect, and even highlight it to allow for better approaches and interaction even from the occluded approach to the key aspect.
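Claim 67’s use of landmarks to align the generated model with a synthetic model is, in conventional terms, a rigid point-set registration problem. A minimal sketch using the classic Kabsch/Procrustes solution, assuming corresponding landmarks have already been detected; the coordinates below are hypothetical:

```python
# Sketch of landmark-based alignment as recited in claim 67: given
# matching landmark points on the computer-generated model and on a
# synthetic (pre-operative) model, estimate the rigid transform that
# registers them. Landmark arrays are hypothetical.
import numpy as np

def align_landmarks(src: np.ndarray, dst: np.ndarray):
    """Rigid transform (R, t) mapping src landmarks onto dst (both Nx3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical corresponding landmarks on each model.
gen = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
syn = np.array([[1.0, 2.0, 0.5], [1.0, 3.0, 0.5], [0.0, 2.0, 0.5]])
R, t = align_landmarks(gen, syn)
aligned = gen @ R.T + t   # present the generated model in this pose
```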
Regarding claim 68, its rejection is similar to that of claim 61. Regarding claim 74, its rejection is similar to that of claim 67. Regarding claim 75, its rejection is similar to that of claim 61. Regarding claim 80, its rejection is similar to that of claim 67.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 62-64, 69-71, and 76-78 are rejected under 35 U.S.C. 103 as being unpatentable over Shelton in view of Karaoglu et al. (US 20220230303, hereinafter Karaoglu, as fully enabled by provisional application 63/138186).

Regarding claim 62, “The computer system of Claim 61, wherein the method further comprises: generating the three-dimensional model of the at least a portion of the patient interior body structure, wherein generating the three-dimensional model comprises: receiving a plurality of visual images, the plurality of visual images depicting fields of view of the surgical instrument within the patient interior body structure; determining a plurality of depth frames corresponding to the plurality of visual images…,” Shelton teaches (¶0097) that the imaging device 120 can include a right-side lens and a left-side lens used together to record two two-dimensional images at the same time and, thus, generate a three-dimensional (3D) image of the surgical site, render a three-dimensional image of the surgical site, and/or determine one or more distances at the surgical site; (¶0095, ¶0128, ¶0221) generating a 3D model; (¶0130) that the video monitors 652 are configured to output the integrated/augmented views from the image overlay controller 610, where a medical practitioner can select and/or toggle between different views on one or more displays, including, on a first display 652a, (A) a view in which a three-dimensional rendering of the visible tissue is depicted; and (¶0107-¶0108, ¶0140-¶0141, ¶0094-¶0095) time-of-flight/structured light sensor(s) to determine and visualize depth. Shelton does not teach “using at least one machine learning architecture.” However, Karaoglu teaches (¶0037, ¶0040-¶0041) using machine learning to generate anatomical models based on source image input. As to “assembling the plurality of depth frames into a plurality of fragments; and integrating the fragments to create the three-dimensional model of the at least a portion of the patient interior body structure,” Karaoglu teaches (¶0093-¶0094) that the RGB-D sequence is split into chunks to build local geometric surfaces, referred to as the fragments; this process employs a pose-graph for each fragment for local alignment and generates point clouds; (¶0095) refining the pose graphs; and (¶0096) integrating. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the surgical visualization system as taught by Shelton with the machine learning process and pose graph process as taught by Karaoglu for the benefit of creating high-quality point clouds/3D representations of the surgical site.

Regarding claim 63, “The computer system of Claim 62, wherein, each of the fragments comprises a corresponding keyframe of a plurality of keyframes, and wherein, integrating the fragments comprises determining a graph pose network based upon the plurality of keyframes,” Karaoglu further teaches (¶0093) that the RGB-D sequence is split into chunks to build local geometric surfaces, referred to as the fragments. This process employs a pose-graph for each fragment for local alignment. The edges of the pose-graphs are formed by the estimated transformation matrices, optimizing a joint photometric and geometric energy function between the adjacent frames of the subsequences. Additionally, loop-closures are considered by using a 5-point RANSAC algorithm over ORB-based feature matching between the keyframes. Lastly, the pose-graphs are optimized using a robust non-linear optimization method, and the point clouds are generated.

Regarding claim 64, “The computer system of Claim 63, wherein determining the graph pose network comprises: for each of the keyframes, generating a plurality of sets of features for a plurality of visual images captured with the surgical instrument; determining a plurality of poses for the plurality of visual images based upon correspondences between the sets of features; and determining reachability between two or more of the keyframes based, at least in part, upon the poses of the two or more of the keyframes,” Karaoglu further teaches (¶0093-¶0095) using a 5-point RANSAC algorithm over ORB-based feature matching between the keyframes, and (¶0096) that the local and the global pose graphs are combined to assign the poses of the RGB-D frames. Ultimately, each of them is integrated into a single truncated signed distance function (TSDF) volume to create the final mesh.
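The keyframe machinery cited for claims 63-64, ORB feature matching filtered by 5-point RANSAC to form pose-graph edges, is standard multi-view geometry. A minimal OpenCV sketch under those assumptions; the camera intrinsic matrix K and the keyframe images are hypothetical, and cv2.findEssentialMat runs the 5-point algorithm inside RANSAC:

```python
# Sketch of the keyframe matching step described for claims 63-64:
# ORB features matched between two keyframes, with RANSAC over the
# epipolar constraint rejecting outliers before a relative pose is
# recovered as a pose-graph edge. Inputs are hypothetical.
import cv2
import numpy as np

def relative_pose(img_a, img_b, K):
    """Estimate the relative camera pose between two keyframes."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming-distance brute-force matching suits binary ORB descriptors.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # 5-point RANSAC on the essential matrix discards bad correspondences.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t  # edge (R, t) between the two keyframes in the pose graph

# Hypothetical usage:
# K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
# R, t = relative_pose(cv2.imread("kf_000.png", 0), cv2.imread("kf_050.png", 0), K)
```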
Regarding claim 69, its rejection is similar to that of claim 62. Regarding claim 70, its rejection is similar to that of claim 63. Regarding claim 71, its rejection is similar to that of claim 64. Regarding claim 76, its rejection is similar to that of claim 62. Regarding claim 77, its rejection is similar to that of claim 63. Regarding claim 78, its rejection is similar to that of claim 64.

Claims 65, 72, and 79 are rejected under 35 U.S.C. 103 as being unpatentable over Shelton in view of Krimsky (US 20180235713).

Regarding claim 65, Shelton does not teach “The computer system of Claim 61, wherein the second graphical user interface element includes a position representation of the surgical instrument, the position representation of the surgical instrument oriented in agreement with the field of view of the surgical instrument depicted in the first graphical user interface element.” However, Krimsky teaches (¶0006, ¶0014, ¶0019) determining a location of a tool based on an electromagnetic (EM) sensor included in the tool as the tool is navigated within the patient's chest, and displaying a view of the 3D model showing the determined location of the tool. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the surgical visualization system as taught by Shelton with the displaying of the tool location as taught by Krimsky for the benefit of helping guide the movement of the tool during the procedure.

Regarding claim 72, its rejection is similar to that of claim 65. Regarding claim 79, its rejection is similar to that of claim 65.

Claims 66 and 73 are rejected under 35 U.S.C. 103 as being unpatentable over Shelton in view of Mercader et al. (US 11096584, hereinafter Mercader).

Regarding claim 66, Shelton does not teach “The computer system of Claim 61, wherein the method further comprises: detecting a hole in the computer-generated three-dimensional model; and generating a rendering of the computer-generated model with an indication of the hole.” However, Mercader teaches (15:42-57, Figs. 12A-12F, claim 1) that a 3D reconstruction of lesion depth was obtained from images by gathering gray scale from individual maps of fNADH using only 5 parallel lines across the lesion and plotting values using a 3D graphing program; the experimental results validate fNADH as an accurate measure of epicardial lesion size and as a predictor of lesion depth, and 3D reconstruction of depth is possible by repeating the methods described above along multiple lines through the ablation image and compiling the results. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the surgical visualization system as taught by Shelton with the lesion depth determination as taught by Mercader for the benefit of gaining more accurate knowledge of lesions/patient conditions, thereby improving outcomes and reducing costs (1:59-61).

Regarding claim 73, its rejection is similar to that of claim 66.
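For claim 66’s hole detection, a common mesh-processing approach (not one attributed to the cited references) flags edges that belong to exactly one triangle: connected loops of such edges trace the borders of holes in the reconstruction, which can then be highlighted in the rendering. A minimal sketch with a hypothetical toy mesh:

```python
# Sketch of one conventional way to detect the kind of hole recited in
# claim 66: in a triangle mesh, any edge used by only one triangle lies
# on a boundary, so runs of such edges outline holes. Toy mesh below is
# hypothetical (a 2x2 triangulated grid with one triangle removed).
import numpy as np
from collections import Counter

def boundary_edges(faces: np.ndarray):
    """Return edges that belong to exactly one triangle (hole borders)."""
    counts = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return [e for e, n in counts.items() if n == 1]

# Vertices 0-3 form the top row, 4-7 the bottom row; triangle [2, 6, 5]
# is missing, so its surviving edges show up as extra boundary edges.
faces = np.array([[0, 1, 4], [1, 5, 4], [1, 2, 5],
                  [2, 3, 6], [3, 7, 6]])
print(boundary_edges(faces))  # edges to highlight in the rendering
```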
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Verma et al. (US 20220086412) teaches (¶0043 and ¶0017) that a captured image for a new video frame is used for depth estimation; however, any captured image may be employed. Depth estimation is performed using a trained machine learning model on the identified image in this example and generates a depth value for each pixel in the captured image. Robb et al. (US 20110251454) teaches (¶0053 and Fig. 10) presentation of the visual image obtained during colonoscopy; (¶0028) generating a 3D colon model; and (¶0042-¶0043) aligning the model based on landmark and surface fitting.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK J JOHNSON, whose telephone number is (571) 272-9629. The examiner can normally be reached 9:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian T. Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Frank Johnson/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Apr 22, 2024 — Application Filed
Feb 13, 2026 — Non-Final Rejection (§102, §103)
Apr 13, 2026 — Examiner Interview Summary
Apr 13, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597262
DETECTING AND IDENTIFYING OBJECTS REPRESENTED IN SENSOR DATA GENERATED BY MULTIPLE SENSOR SYSTEMS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12583386
METHOD FOR DETECTING TARGET PEDESTRIAN AROUND VEHICLE, METHOD FOR MOVING VEHICLE, AND DEVICE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12575718
UNIVERSAL ENDOSCOPE ADAPTER
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12574588
Image Selection Using Motion Data
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12573219
DEVICE AND METHOD FOR COUNTING AND IDENTIFICATION OF BACTERIAL COLONIES USING HYPERSPECTRAL IMAGING
Granted Mar 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 77% (+20.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 222 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month