Prosecution Insights
Last updated: April 17, 2026
Application No. 17/325,106

VIRTUAL POINTER FOR REAL-TIME ENDOSCOPIC VIDEO USING GESTURE AND VOICE COMMANDS AND VIDEO ARCHITECTURE AND FRAMEWORK FOR COLLECTING SURGICAL VIDEO AT SCALE

Final Rejection (§103, §112)
Filed: May 19, 2021
Examiner: NAJARIAN, LENA
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: unknown
OA Round: 6 (Final)
Grant Probability: 38% (At Risk)
OA Rounds: 7-8
To Grant: 5y 0m
With Interview: 78%

Examiner Intelligence

Grants only 38% of cases
Career Allow Rate: 38% (178 granted / 464 resolved; -13.6% vs TC avg)
Strong +39% interview lift
Interview Lift: +39.3% (resolved cases with interview)
Typical timeline
Avg Prosecution: 5y 0m
41 currently pending
Career history
Total Applications: 505 (across all art units)
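
The headline examiner figures above are derived values. As a quick check, and assuming the career allow rate is the simple ratio of granted to resolved cases, the arithmetic is:

$$\text{Career allow rate} = \frac{178\ \text{granted}}{464\ \text{resolved}} \approx 0.384 \approx 38\%$$

On the same assumption, reading the -13.6% figure as a percentage-point gap would put the comparable Tech Center average near 52%; that implied figure is an inference from the displayed numbers, not a reported statistic.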

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§103: 31.9% (-8.1% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 25.4% (-14.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 464 resolved cases
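
Assuming each per-statute delta is a simple percentage-point difference from the Tech Center estimate, adding the delta back recovers that estimate; all four rows give the same value, consistent with a single estimate of roughly 40% being applied across statutes:

$$26.9 + 13.1 = 31.9 + 8.1 = 11.5 + 28.5 = 25.4 + 14.6 = 40.0$$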

Office Action

Rejections under §103 and §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Notice to Applicant This communication is in response to the amendment filed 1/14/26. Claims 1, 5, 6, 9, 10, 14, 15, and 18-20 have been amended. Claims 1-20 are pending. Terminal Disclaimer The terminal disclaimer filed on 1/14/26 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of U.S. Patent No. 12,290,412 has been reviewed and is accepted. The terminal disclaimer has been recorded. Claim Rejections - 35 USC § 112 The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The newly added recitation of "wherein the user interface overlay is confined to image regions corresponding to the surgical tool,” ”selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool,” and “wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation" within claims 1, 10, and 19 appears to constitute new matter. In particular, Applicant does not point to, nor was the Examiner able to find support for this newly added language within the specification as originally filed. As such, Applicant is respectfully requested to clarify the above issues and to specifically point out support for the newly added limitations in the originally filed specification and claims. Applicant is required to cancel the new matter in the reply to this Office Action. Claim Objections Claims 1, 10, and 19 are objected to because of the following informalities: it is unclear whether or not the anatomical regions in the “tag anatomical regions” limitation are the same anatomical regions in the “anatomical regions obscured by the surgical tool” limitation, or if they are different regions. Appropriate correction is required. 
Claims 9 and 20 are objected to because of the following informalities: change “surgery” to “the surgery.” Appropriate correction is required. Claims 1, 10, and 19 are objected to because of the following informalities: the claims recite “said selected surgical tool.” However, there is no step of selecting a surgical tool prior to this limitation. Appropriate correction is required. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-3, 5, 10-12, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samadani et al. (WO 2018/195529 A1) in view Haider et al. (US 2014/0107471 A1), in view of McKinnon et al. (US 2020/0275976 A1), in view of Segev et al. (US 2021/0382559 A1), in view of Charron et al. (US 2018/0228555 A1), and further in view of Nygaard Espeland et al. (US 2022/0296081 A1). (A) Referring to claim 1, Samadani discloses A medical software tools system, comprising (para. 22-24 of Samadani): a surgical tool for sensing a surgeon’s hand movements in connection with surgical camera usage to invoke a video overlay displaying a synthetic visual path starting from an end of a selected instrument onward through an intended path of movement (para. 54, 55, 45, 41, 42, & 85 of Samadani; the system may use the AR sensor, for example, to track and capture a movement of a surgeon's hand(s) and instruments. The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. Even though static representations of the instrument can be projected on to the images as well, at times, more flexible catheters and other instruments e.g., deep brain stimulator leads get bent while going through the brain parenchyma. The system may detect this bent inside the brain by using an ultrasound probe and superimpose it on to image, which may show to the surgeon the final path and location of the catheter or deep brain stimulator leads.); a computer system receiving an image stream from said surgical camera (para. 22 & 23 of Samadani; The AR sensor 106 is a device or a combination of devices that may record and detect changes in the environment, e.g., a surgery or procedure room 102 having a patient. These devices may include cameras (e.g. infrared, SLR, etc.), fiducials placed at known places, ultrasound probes, or any other device that is capable of three-dimensional (3D) scanning, to capture information and images in the environment. The AR sensor may also be an AR sensing device. 
The display 108 may display images captured from the AR sensor(s) or other sensors. The medical images can be prerecorded or can be continuously obtained in real time. The medical images may include a static image or a sequence of images over time (e.g. functional MRI).); said computer system providing a user interface overlay adapted for presentation over said image stream from said surgical camera and analyzing said surgical image stream from said surgical camera (para. 54-57, 38, & 45 of Samadani; The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. Optionally, the gloves or the instruments may be coated with a material that is easier for AR sensing device to detect. This can in turn, allow the representation of the instruments or hands to be overlaid on to the image.). Samadani does not expressly disclose an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool; calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented; a database configured to index and store patient and surgeon data; a cloud capture server configured to receive and store said patient surgical video; use of a combination of audio keywords and said surgeon’s hand movements and eye movements to enable predictive surgical tool movements to assist said surgeon with a procedure of a patient; wherein said user interface is configured to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user, including said surgeon, to locate, measure and tag anatomical regions during surgery; said surgical tool further controlled by said surgeon wherein said user interface is configured to analyze said surgeon's eye movements; wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed; wherein the user interface overlay is confined to image regions corresponding to the surgical tool, selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool, wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation. Haider discloses an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool (para. 18-20 of Haider; the projector or a display on the OTT device output includes information visible to the user of the surgical tool to indicate the position, relative motion, orientation, or other navigation or guidance parameter related to the positioning of the active element of the surgical tool within the surgical field according to the surgical plan.); calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented (para. 191, 196, 219, & 228 of Haider; note the anticipating the movement); use of a combination of audio keywords and said surgeon’s hand movements to enable predictive surgical tool movements to assist said surgeon with a procedure of a patient (para. 
18, 79, 212, 217, and 221 of Haider; The indicator can be any variety of means to align/locate the surgical path with the intended resection: a panel of lights that sign directions to correct the surgeon, a speaker with audio instructions, a screen, touchscreen or iPhone or iPad or iPod like device (i.e., a so-called "smartphone") on the OTT equipped tool displaying 3D representation of the tool and the patient with added guidance imagery or a digital projection (e.g., by a picoprojector) onto the patient's anatomy of the appropriate location of a resection. The indicator serves to provide an appropriate OTT CAS output to guide the surgeon to make the right resection based on real time (or semi-real time) information.). McKinnon discloses a database configured to index and store patient and surgeon data (see Fig. 2C’s cloud-based system and para. 146, 150, 208, and 216 of McKinnon; the various datasets are indexed in the database or other storage medium in a manner that allows for rapid retrieval of relevant information during the surgical procedure. For example, in one embodiment, a patient-centric set of indices may be used so that data pertaining to a particular patient or a set of patients similar to a particular patient can be readily extracted. This concept can be similarly applied to surgeons, implant characteristics, CASS component versions, etc. FIG. 2C illustrates a “cloud-based” implementation in which the Surgical Computer 150 is connected to a Surgical Data Server 180 via a Network 175. This Network 175 may be, for example, a private intranet or the Internet. In addition to the data from the Surgical Computer 150, other sources can transfer relevant data to the Surgical Data Server 180. The example of FIG. 2C shows 3 additional data sources: the Patient 160, Healthcare Professional(s) 165, and an EMR Database 170. Thus, the Patient 160 can send pre-operative and post-operative data to the Surgical Data Server 180, for example, using a mobile app. Prior to surgery, the Patient Data 810, 815 and Healthcare Professional Data 825 may be captured and stored in a cloud-based or online database (e.g., the Surgical Data Server 180 shown in FIG. 2C).); and a cloud capture server configured to receive and store said patient surgical video (para. 146, 216, and 287 of McKinnon; the Patient Data 810, 815 and Healthcare Professional Data 825 may be captured and stored in a cloud-based or online database (e.g., the Surgical Data Server 180 shown in FIG. 2C). Playback of the video can be performed after the procedure or the video could be called up on an HMD display during the procedure to review one or more steps. This may be a valuable teaching tool for residents or for a surgeon wishing to see when a certain cut or step was undertaken. Playback may be a useful tool to create a change of surgical plan during the procedure based on events during the procedure.). Segev discloses use of said surgeon’s eye movements to enable predictive surgical tool movements (para. 390 & 434 of Segev; Eye tracker components 136 (FIG. 1C) tracks the gaze of surgeon 120 to detect the designation and the area selected. In some embodiments, system 100 continuously performs automatic auto-focus on designated areas based on tracking the eye motion of surgeon 120. In procedure 910, an eye motion by the surgeon is detected, and the eye motion is applied to control the head mounted display system. With reference to FIG. 
1C, eye tracking components 136 track an eye motion of surgeon 120 and provide the eye motion to computer 118 (FIG. 1B) via transceivers 102B and 118B.); wherein said user interface is configured to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user, including said surgeon, to locate, measure and tag anatomical regions during surgery (Fig. 4A, para. 121, 124, 178, 353-357, 410, 438, & 217-229 of Segev; UI 160 allows storing snapshots and videos and recording audio for subsequent use. Grid 810 shows a numerical depiction of the image acquired by camera 140A, displayed to the left eye of surgeon 120 and grid 812 shows a numerical depiction of the image acquired by camera 140B, displayed to the right eye of surgeon 120. When drawing on the real-time image, the drawings (lines, symbols, etc.) may be locked to the anatomy of the patient, i.e. the overlaid symbol will automatically/dynamically change its position so it appears locked to the patient's anatomy (e.g. when the surgeon moves the patient's eye). The drawing may be in 3D.); said surgical tool further controlled by said surgeon wherein said user interface is configured to analyze said surgeon's eye movements (para. 33, 44, 45, & 108 of Segev; the user interface further comprises an eye tracker configured to detect an eye motion by the surgeon, wherein the input further comprises the eye motion. The method further comprises detecting an eye motion by the surgeon, and applying the eye motion to control the head mounted display system. Eye tracking components 136 acquire eye motion data of surgeon 120 and transmit the eye motion data to eye tracker controller 118L via respective transceivers 102B and 118B.); wherein the user interface overlay is confined to image regions corresponding to the surgical tool, selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool (para. 229 and 134 of Segev; The virtual tool appears in the display as a 3D tool that the users see as an overlay on the live image via the HMD. This is based on 3D models of tools that are available to the user to choose from via the menu/GUI in this mode. UI 160 allows storing snapshots and videos and recording audio for subsequent use. The audio recordings may be converted to text to use as notes, operational summaries, naming and tagging files. The keywords tags may be subsequently applied by a deep learning algorithm. In some embodiments, system 100 supports an automatic generation of a surgical report, based on pre-determined templates having placeholders for pre-op data, snapshots from the live image, voice-to-text of recorded notes / summaries. The automatic report generation supports adding identifying data, such as the name of patient 122, name of surgeon 120, date, time and duration of the surgical procedure, and the like. Where suitable, machine learning and image processing may be applied to acquire data related to the procedure. For example, the type of surgical instruments used, and the image feed of the surgical procedure may be mined to add data to the report. UI 160 provides a “send-to-report” menu-item allowing the surgeon upload selected snapshots and pre-op images to the report.). Charron discloses wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed (para. 49 & 31 and see Fig. 
6 of Charron; the second surgical instrument in FIGS. 1 and 2 has overlaid on its shaft 109 a measurement of the distance of the instrument to the surface of the patient, which changes as the surgeon 701 manipulates the instrument. This may be, for example, the minimum distance from the distal end (i.e. the end of the tool furthest from the surgeon's hands, which is normally an end of the tip) to the surface of the patient's body.). Nygaard Espeland discloses wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation (para. 96-98 & 86-88 of Nygaard Espeland; the latency is addressed by the video compositor overlaying the results of the processing over the most recent frames of the endoscopic video feed. This leads to a few frames misalignment between the processing and the endoscopic video feed, which is usually acceptable but may lead to mismatches if there is very fast movement in the endoscopic video feed. The video compositor buffers the endoscopic video feed until the processing system 440 has processed the received sequence of frames with the steps a) to d) before overlaying the results of said processing over the endoscopic video feed. This may lead to a latency of more than 100 ms, e.g. 150 ms or 200 ms, but ensures that the overlay matches in a situation where there is very fast movement in the endoscopic video.). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Haider, McKinnon, Segev, Charron, and Nygaard Espeland within Samadani’s surgical navigation system. The motivation for doing so would have been to provide feedback during a procedure to improve either the efficiency or quality, or both (abstract of Haider), so that data can be readily extracted (para. 150 of McKinnon), to provide the surgeon with a broad range of functionalities, to allow the surgeon to interface naturally with the system and control the various features, allowing the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev), to assist a surgeon by presenting virtually augmented views of the portion of a patient being operated on and surgical instruments being used to perform the surgery (para. 1 of Charron), and for real-time detection (para. 96 of Nygaard Espeland). (B) Referring to claim 2 and similar claim 11, Samadani, Haider, and McKinnon do not expressly disclose wherein a foot pedal is utilized to enable surgeon visual displays and said surgeon's eye movements is used to direct assistance from the system by way of computer monitoring of said surgeon's eye movements. Segev discloses wherein a foot pedal is utilized to enable surgeon visual displays and said surgeon's eye movements is used to direct assistance from the system by way of computer monitoring of said surgeon's eye movements (para. 45, 91, 97-99, 174, and 434 of Segev). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Segev within Samadani, Haider, and McKinnon. The motivation for doing so would have been to allow the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev). 
(C) Referring to claim 3, Samadani discloses wherein said surgeon's movement is monitored using a virtual pointing device, wherein said virtual pointing device is enabled by said computer system by accessing patient data corresponding with organ placement within said patient (abstract, para. 6, 7, 13, 14, and 83 of Samadani). Samadani, Haider, and McKinnon do not disclose wherein said surgeon's eye movement is monitored. Segev discloses wherein said surgeon's eye movement is monitored (para. 33, 97, and 108 of Segev). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Segev within Samadani, Haider, and McKinnon. The motivation for doing so would have been to allow the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev). (D) Referring to claim 12, Samadani discloses wherein a surgical tool selected is a virtual pointing device and is enabled by said computer system by accessing patient data corresponding with organ placement within said patient (abstract, para. 6, 7, 13, 14, and 83 of Samadani). (E) Referring to claim 10, Samadani discloses A method of using medical software tools system, comprising (para. 22-24 of Samadani): sensing a surgeon’s hand movements using a surgical tool in connection with a surgical camera to invoke a video overlay displaying a synthetic visual path starting from an end of a selected instrument onward through an intended path of movement (para. 54, 55, 45, 41, 42, & 85 of Samadani; the system may use the AR sensor, for example, to track and capture a movement of a surgeon's hand(s) and instruments. The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. Even though static representations of the instrument can be projected on to the images as well, at times, more flexible catheters and other instruments e.g., deep brain stimulator leads get bent while going through the brain parenchyma. The system may detect this bent inside the brain by using an ultrasound probe and superimpose it on to image, which may show to the surgeon the final path and location of the catheter or deep brain stimulator leads.); receiving an image stream through a computer system from said surgical camera (para. 22 & 23 of Samadani; The AR sensor 106 is a device or a combination of devices that may record and detect changes in the environment, e.g., a surgery or procedure room 102 having a patient. These devices may include cameras (e.g. infrared, SLR, etc.), fiducials placed at known places, ultrasound probes, or any other device that is capable of three-dimensional (3D) scanning, to capture information and images in the environment. The AR sensor may also be an AR sensing device. The display 108 may display images captured from the AR sensor(s) or other sensors. The medical images can be prerecorded or can be continuously obtained in real time. The medical images may include a static image or a sequence of images over time (e.g. functional MRI).); providing a user interface overlay adapted for presentation over said image stream from said surgical camera through said computer system and analyzing said image stream from said surgical camera (para. 54-57, 38, & 45 of Samadani; The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. 
Optionally, the gloves or the instruments may be coated with a material that is easier for AR sensing device to detect. This can in turn, allow the representation of the instruments or hands to be overlaid on to the image.). Samadani does not expressly disclose an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool; calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented; indexing and storing patient and surgical data on a database; configuring said database to store said patient and surgical data in real time; combining audio keywords and said surgeon’s hand movements to enable predictive surgical tool movements to assist said surgeon; configuring said user interface to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user to locate, measure and tag anatomical regions during surgery; analyzing, via said user interface, said surgeon's eve movements, wherein said surgical tool is further controlled by said surgeon; configuring a cloud capture server to receive and store said patient surgical video; wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed; wherein the user interface overlay is confined to image regions corresponding to the surgical tool, and selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool, wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation. Haider discloses an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool (para. 18-20 of Haider; the projector or a display on the OTT device output includes information visible to the user of the surgical tool to indicate the position, relative motion, orientation, or other navigation or guidance parameter related to the positioning of the active element of the surgical tool within the surgical field according to the surgical plan.); calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented (para. 191, 196, 219, & 228 of Haider; note the anticipating the movement); combining audio keywords and said surgeon’s hand movements to enable predictive surgical tool movements to assist said surgeon (para. 18, 79, 212, 217, and 221 of Haider; The indicator can be any variety of means to align/locate the surgical path with the intended resection: a panel of lights that sign directions to correct the surgeon, a speaker with audio instructions, a screen, touchscreen or iPhone or iPad or iPod like device (i.e., a so-called "smartphone") on the OTT equipped tool displaying 3D representation of the tool and the patient with added guidance imagery or a digital projection (e.g., by a picoprojector) onto the patient's anatomy of the appropriate location of a resection. The indicator serves to provide an appropriate OTT CAS output to guide the surgeon to make the right resection based on real time (or semi-real time) information.). McKinnon discloses indexing and storing patient and surgical data on a database and configuring said database to store said patient and surgical data in real time (see Fig. 2C’s cloud-based system and para. 
122, 146, 150, 208, and 216 of McKinnon; the various datasets are indexed in the database or other storage medium in a manner that allows for rapid retrieval of relevant information during the surgical procedure. For example, in one embodiment, a patient-centric set of indices may be used so that data pertaining to a particular patient or a set of patients similar to a particular patient can be readily extracted. This concept can be similarly applied to surgeons, implant characteristics, CASS component versions, etc. FIG. 2C illustrates a “cloud-based” implementation in which the Surgical Computer 150 is connected to a Surgical Data Server 180 via a Network 175. This Network 175 may be, for example, a private intranet or the Internet. In addition to the data from the Surgical Computer 150, other sources can transfer relevant data to the Surgical Data Server 180. The example of FIG. 2C shows 3 additional data sources: the Patient 160, Healthcare Professional(s) 165, and an EMR Database 170. Thus, the Patient 160 can send pre-operative and post-operative data to the Surgical Data Server 180, for example, using a mobile app. Prior to surgery, the Patient Data 810, 815 and Healthcare Professional Data 825 may be captured and stored in a cloud-based or online database (e.g., the Surgical Data Server 180 shown in FIG. 2C). The surgical plan can be viewed as dynamically changing in real-time or near real-time as new data is collected by the components of the CASS 100.); configuring a cloud capture server to receive and store said patient surgical video (para. 146, 216, and 287 of McKinnon; the Patient Data 810, 815 and Healthcare Professional Data 825 may be captured and stored in a cloud-based or online database (e.g., the Surgical Data Server 180 shown in FIG. 2C). Playback of the video can be performed after the procedure or the video could be called up on an HMD display during the procedure to review one or more steps. This may be a valuable teaching tool for residents or for a surgeon wishing to see when a certain cut or step was undertaken. Playback may be a useful tool to create a change of surgical plan during the procedure based on events during the procedure.). Segev discloses configuring said user interface to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user to locate, measure and tag anatomical regions during surgery (Fig. 4A, para. 121, 124, 178, 353-357, 410, 438 & 217-229 of Segev; UI 160 allows storing snapshots and videos and recording audio for subsequent use. Grid 810 shows a numerical depiction of the image acquired by camera 140A, displayed to the left eye of surgeon 120 and grid 812 shows a numerical depiction of the image acquired by camera 140B, displayed to the right eye of surgeon 120. When drawing on the real-time image, the drawings (lines, symbols, etc.) may be locked to the anatomy of the patient, i.e. the overlaid symbol will automatically/dynamically change its position so it appears locked to the patient's anatomy (e.g. when the surgeon moves the patient's eye). The drawing may be in 3D.); analyzing, via said user interface, said surgeon's eve movements, wherein said surgical tool is further controlled by said surgeon (para. 33, 44, 45, & 108 of Segev; the user interface further comprises an eye tracker configured to detect an eye motion by the surgeon, wherein the input further comprises the eye motion. 
The method further comprises detecting an eye motion by the surgeon, and applying the eye motion to control the head mounted display system. Eye tracking components 136 acquire eye motion data of surgeon 120 and transmit the eye motion data to eye tracker controller 118L via respective transceivers 102B and 118B.); wherein the user interface overlay is confined to image regions corresponding to the surgical tool, and selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool (para. 229 and 134 of Segev; The virtual tool appears in the display as a 3D tool that the users see as an overlay on the live image via the HMD. This is based on 3D models of tools that are available to the user to choose from via the menu/GUI in this mode. UI 160 allows storing snapshots and videos and recording audio for subsequent use. The audio recordings may be converted to text to use as notes, operational summaries, naming and tagging files. The keywords tags may be subsequently applied by a deep learning algorithm. In some embodiments, system 100 supports an automatic generation of a surgical report, based on pre-determined templates having placeholders for pre-op data, snapshots from the live image, voice-to-text of recorded notes / summaries. The automatic report generation supports adding identifying data, such as the name of patient 122, name of surgeon 120, date, time and duration of the surgical procedure, and the like. Where suitable, machine learning and image processing may be applied to acquire data related to the procedure. For example, the type of surgical instruments used, and the image feed of the surgical procedure may be mined to add data to the report. UI 160 provides a “send-to-report” menu-item allowing the surgeon upload selected snapshots and pre-op images to the report.). Charron discloses wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed (para. 49 & 31 and see Fig. 6 of Charron; the second surgical instrument in FIGS. 1 and 2 has overlaid on its shaft 109 a measurement of the distance of the instrument to the surface of the patient, which changes as the surgeon 701 manipulates the instrument. This may be, for example, the minimum distance from the distal end (i.e. the end of the tool furthest from the surgeon's hands, which is normally an end of the tip) to the surface of the patient's body.). Nygaard Espeland discloses wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation (para. 96-98 & 86-88 of Nygaard Espeland; the latency is addressed by the video compositor overlaying the results of the processing over the most recent frames of the endoscopic video feed. This leads to a few frames misalignment between the processing and the endoscopic video feed, which is usually acceptable but may lead to mismatches if there is very fast movement in the endoscopic video feed. The video compositor buffers the endoscopic video feed until the processing system 440 has processed the received sequence of frames with the steps a) to d) before overlaying the results of said processing over the endoscopic video feed. This may lead to a latency of more than 100 ms, e.g. 
150 ms or 200 ms, but ensures that the overlay matches in a situation where there is very fast movement in the endoscopic video.). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Haider, McKinnon, Segev, Charron, and Nygaard Espeland within Samadani’s surgical navigation system. The motivation for doing so would have been to provide feedback during a procedure to improve either the efficiency or quality, or both (abstract of Haider), so that data can be readily extracted (para. 150 of McKinnon), to provide the surgeon with a broad range of functionalities, to allow the surgeon to interface naturally with the system and control the various features, allowing the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev), to assist a surgeon by presenting virtually augmented views of the portion of a patient being operated on and surgical instruments being used to perform the surgery (para. 1 of Charron), and for real-time detection (para. 96 of Nygaard Espeland). (F) Referring to claims 5 and 14, Samadani and Haider do not disclose wherein said computer system determines whether to use edge detection algorithms for resolving said specified distance and direction of the synthetic visual dotted line, and wherein said computer system uses an artificial intelligence function trained with visual observations of said surgical tool. McKinnon discloses wherein said computer system determines whether to use edge detection algorithms for resolving said specified distance and direction of the synthetic visual dotted line, and wherein said computer system uses an artificial intelligence function trained with visual observations of said surgical tool (Fig. 37, para. 77, 11, 121, 181, 333, 217, 219, 220, 194, 195, and 319 of McKinnon). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of McKinnon within Samadani and Haider. The motivation for doing so would have been to optimize surgical procedures (para. 11 of McKinnon). Claim(s) 6-8, 15-17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samadani et al. (WO 2018/195529 A1) in view Haider et al. (US 2014/0107471 A1), in view of McKinnon et al. (US 2020/0275976 A1), in view of Segev et al. (US 2021/0382559 A1), in view of Charron et al. (US 2018/0228555 A1), in view of Nygaard Espeland et al. (US 2022/0296081 A1), and further in view of Ingle (US 2019/0238791 A1). (A) Referring to claims 6 and 15, Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland do not disclose wherein a cloud network collects surgical images on an enterprise scale to enable hospitals to ingest, manage, and fully utilize said patient surgical video within a hospital network and to share said patient surgical video with designated users and provide a live stream of said patient surgical video from the surgery, and wherein said live stream supplies key clips for evaluation of hospital resources stored in said database for integration with patient electronic health records. 
Ingle discloses wherein a cloud network collects surgical images on an enterprise scale to enable hospitals to ingest, manage, and fully utilize said patient surgical video within a hospital network and to share said patient surgical video with designated users and provide a live stream of said patient surgical video from the surgery, and wherein said live stream supplies key clips for evaluation of hospital resources stored in said database for integration with patient electronic health records (para. 5, 8, 30, 35, 41, 79, 82, and 88-90 of Ingle). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Ingle within Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland. The motivation for doing so would have been to reduce paperwork required to associate the captured images to a patient (para. 7 of Ingle). (B) Referring to claim 19, Samadani discloses A medical software tools system, comprising (para. 22-24 of Samadani): a surgical tool for sensing a surgeon’s hand movements in connection with surgical camera usage to invoke a video overlay displaying a synthetic visual path starting from an end of a selected instrument onward through an intended path of movement (para. 54, 55, 45, 41, 42, & 85 of Samadani; the system may use the AR sensor, for example, to track and capture a movement of a surgeon's hand(s) and instruments. The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. Even though static representations of the instrument can be projected on to the images as well, at times, more flexible catheters and other instruments e.g., deep brain stimulator leads get bent while going through the brain parenchyma. The system may detect this bent inside the brain by using an ultrasound probe and superimpose it on to image, which may show to the surgeon the final path and location of the catheter or deep brain stimulator leads.); a computer system receiving an image stream from said surgical camera (para. 22 & 23 of Samadani; The AR sensor 106 is a device or a combination of devices that may record and detect changes in the environment, e.g., a surgery or procedure room 102 having a patient. These devices may include cameras (e.g. infrared, SLR, etc.), fiducials placed at known places, ultrasound probes, or any other device that is capable of three-dimensional (3D) scanning, to capture information and images in the environment. The AR sensor may also be an AR sensing device. The display 108 may display images captured from the AR sensor(s) or other sensors. The medical images can be prerecorded or can be continuously obtained in real time. The medical images may include a static image or a sequence of images over time (e.g. functional MRI).; said computer system providing a user interface overlay adapted for presentation over said image stream from said surgical camera and analyzing said surgical image stream from said surgical camera (para. 54-57, 38, & 45 of Samadani; The system may track the location of the surgeon' s hands and instruments and overlay them to the images and holograms. Optionally, the gloves or the instruments may be coated with a material that is easier for AR sensing device to detect. This can in turn, allow the representation of the instruments or hands to be overlaid on to the image.). 
Samadani does not expressly disclose an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool; calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented; use of a combination of audio keywords and said surgeon’s hand movements and eye movements to enable predictive surgical tool movements to assist said surgeon; a database for indexing and storing patient and surgical data; a cloud capture server on a cloud network to collect surgical images on an enterprise scale to enable hospitals to ingest, manage, and fully utilize patient surgical video within a secure hospital network and to share said patient surgical video with designated users and provide a live stream of said patient surgical video from the surgery, and wherein said live stream supplies key clips for evaluation of hospital resources stored in said database for integration with patient electronic health records; said user interface configured to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user, including said surgeon, to locate, measure and tag anatomical regions during surgery; said surgical tool further controlled by said surgeon wherein said user interface is configured to analyze said surgeon’s eye movements; wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed; wherein the user interface overlay is confined to image regions corresponding to the surgical tool, and selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool; wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation. Haider discloses an intended path of movement in a specified distance and direction corresponding to an orientation of said surgical tool (para. 18-20 of Haider; the projector or a display on the OTT device output includes information visible to the user of the surgical tool to indicate the position, relative motion, orientation, or other navigation or guidance parameter related to the positioning of the active element of the surgical tool within the surgical field according to the surgical plan.); calculating an anticipated direction of movement corresponding to said direction said surgical tool is oriented (para. 191, 196, 219, & 228 of Haider; note the anticipating the movement); use of a combination of audio keywords and said surgeon’s hand movements to enable predictive surgical tool movements to assist said surgeon (para. 18, 79, 212, 217, and 221 of Haider; The indicator can be any variety of means to align/locate the surgical path with the intended resection: a panel of lights that sign directions to correct the surgeon, a speaker with audio instructions, a screen, touchscreen or iPhone or iPad or iPod like device (i.e., a so-called "smartphone") on the OTT equipped tool displaying 3D representation of the tool and the patient with added guidance imagery or a digital projection (e.g., by a picoprojector) onto the patient's anatomy of the appropriate location of a resection. The indicator serves to provide an appropriate OTT CAS output to guide the surgeon to make the right resection based on real time (or semi-real time) information.). 
McKinnon discloses a database for indexing and storing patient and surgical data (para. 150 of McKinnon; the various datasets are indexed in the database or other storage medium in a manner that allows for rapid retrieval of relevant information during the surgical procedure. For example, in one embodiment, a patient-centric set of indices may be used so that data pertaining to a particular patient or a set of patients similar to a particular patient can be readily extracted. This concept can be similarly applied to surgeons, implant characteristics, CASS component versions, etc.). Segev discloses use of said surgeon’s eye movements to enable predictive surgical tool movements (para. 390 & 434 of Segev; Eye tracker components 136 (FIG. 1C) tracks the gaze of surgeon 120 to detect the designation and the area selected. In some embodiments, system 100 continuously performs automatic auto-focus on designated areas based on tracking the eye motion of surgeon 120. In procedure 910, an eye motion by the surgeon is detected, and the eye motion is applied to control the head mounted display system. With reference to FIG. 1C, eye tracking components 136 track an eye motion of surgeon 120 and provide the eye motion to computer 118 (FIG. 1B) via transceivers 102B and 118B.); said user interface configured to capture patient surgical video, take snapshots for future viewing, and overlay a grid to enable a user, including said surgeon, to locate, measure and tag anatomical regions during surgery (Fig. 4A, para. 121, 124, 178, 353-357, 410, 438, & 217-229 of Segev; UI 160 allows storing snapshots and videos and recording audio for subsequent use. Grid 810 shows a numerical depiction of the image acquired by camera 140A, displayed to the left eye of surgeon 120 and grid 812 shows a numerical depiction of the image acquired by camera 140B, displayed to the right eye of surgeon 120. When drawing on the real-time image, the drawings (lines, symbols, etc.) may be locked to the anatomy of the patient, i.e. the overlaid symbol will automatically/dynamically change its position so it appears locked to the patient's anatomy (e.g. when the surgeon moves the patient's eye). The drawing may be in 3D.); said surgical tool further controlled by said surgeon wherein said user interface is configured to analyze said surgeon’s eye movements (para. 33, 44, 45, & 108 of Segev; the user interface further comprises an eye tracker configured to detect an eye motion by the surgeon, wherein the input further comprises the eye motion. The method further comprises detecting an eye motion by the surgeon, and applying the eye motion to control the head mounted display system. Eye tracking components 136 acquire eye motion data of surgeon 120 and transmit the eye motion data to eye tracker controller 118L via respective transceivers 102B and 118B.); wherein the user interface overlay is confined to image regions corresponding to the surgical tool, and selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool (para. 229 and 134 of Segev; The virtual tool appears in the display as a 3D tool that the users see as an overlay on the live image via the HMD. This is based on 3D models of tools that are available to the user to choose from via the menu/GUI in this mode. UI 160 allows storing snapshots and videos and recording audio for subsequent use. The audio recordings may be converted to text to use as notes, operational summaries, naming and tagging files. 
The keywords tags may be subsequently applied by a deep learning algorithm. In some embodiments, system 100 supports an automatic generation of a surgical report, based on pre-determined templates having placeholders for pre-op data, snapshots from the live image, voice-to-text of recorded notes / summaries. The automatic report generation supports adding identifying data, such as the name of patient 122, name of surgeon 120, date, time and duration of the surgical procedure, and the like. Where suitable, machine learning and image processing may be applied to acquire data related to the procedure. For example, the type of surgical instruments used, and the image feed of the surgical procedure may be mined to add data to the report. UI 160 provides a “send-to-report” menu-item allowing the surgeon upload selected snapshots and pre-op images to the report.). Charron discloses wherein said surgical tool is enabled to overlay a synthetic visual dotted line starting from an end of said surgical tool and extending a specified distance in a direction that said selected surgical tool is pointed (para. 49 & 31 and see Fig. 6 of Charron; the second surgical instrument in FIGS. 1 and 2 has overlaid on its shaft 109 a measurement of the distance of the instrument to the surface of the patient, which changes as the surgeon 701 manipulates the instrument. This may be, for example, the minimum distance from the distal end (i.e. the end of the tool furthest from the surgeon's hands, which is normally an end of the tip) to the surface of the patient's body.). Nygaard Espeland discloses wherein the user interface overlay is generated with latency low enough to maintain spatial alignment with the surgical tool during live manipulation (para. 96-98 & 86-88 of Nygaard Espeland; the latency is addressed by the video compositor overlaying the results of the processing over the most recent frames of the endoscopic video feed. This leads to a few frames misalignment between the processing and the endoscopic video feed, which is usually acceptable but may lead to mismatches if there is very fast movement in the endoscopic video feed. The video compositor buffers the endoscopic video feed until the processing system 440 has processed the received sequence of frames with the steps a) to d) before overlaying the results of said processing over the endoscopic video feed. This may lead to a latency of more than 100 ms, e.g. 150 ms or 200 ms, but ensures that the overlay matches in a situation where there is very fast movement in the endoscopic video.). Ingle discloses a cloud capture server on a cloud network to collect surgical images on an enterprise scale to enable hospitals to ingest, manage, and fully utilize patient surgical video within a secure hospital network and to share said patient surgical video with designated users and provide a live stream of said patient surgical video from the surgery, and wherein said live stream supplies key clips for evaluation of hospital resources stored in said database for integration with patient electronic health records (para. 
5, 8, 30, 33, 35, 41, 82, and 88-90 of Ingle; a method and a surgical visualization and recording system for allowing a user, for example, a surgeon to record 4K UHD resolution images directly to a storage device, for example, a flash drive, a hard drive, or a network drive on a secure hospital network to preclude unauthorized staff from handling the captured images along with the patient information and to maintain confidentiality of the patient information under the Health Insurance Portability and Accountability Act (HIPAA). Furthermore, there is a need for a method and a surgical visualization and recording system for automatically and securely transmitting the captured images of the surgical site for direct and secure storage on an external system and/or in a cloud computing environment over a network, for example, an internal hospital network in real time. The embedded microcomputer 222 saves 1116 the recorded video in the selected patient folder in the storage device 234 and/or the removable drive 218. When the live image 1004 is displayed on the tactile user interface 217 with up to a 4K UHD resolution, the user may click on the image snap button 804 to capture an image of a surgical site being streamed or click on the video record button 803 to record a video of the surgical site being streamed.). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Haider, McKinnon, Segev, Charron, Nygaard Espeland, and Ingle within Samadani’s surgical navigation system. The motivation for doing so would have been to provide feedback during a procedure to improve either the efficiency or quality, or both (abstract of Haider), so that data can be readily extracted (para. 150 of McKinnon), to provide the surgeon with a broad range of functionalities, and to allow the surgeon to interface naturally with the system and control the various features, allowing the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev), to assist a surgeon by presenting virtually augmented views of the portion of a patient being operated on and surgical instruments being used to perform the surgery (para. 1 of Charron), for real-time detection (para. 96 of Nygaard Espeland), and to reduce paperwork required to associate the captured images to a patient (para. 7 of Ingle). (C) Referring to claims 7 & 16, Samadani and Haider do not disclose for hospital use for risk mitigation by way of assessing a risk of postoperative complication. McKinnon discloses for hospital use for risk mitigation by way of assessing a risk of postoperative complication (para. 124 & 297 of McKinnon). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of McKinnon within Samadani and Haider. The motivation for doing so would have been to enable optimization of performance (para. 297 of McKinnon). (D) Referring to claims 8 and 17, Samadani, Haider, and McKinnon do not disclose further comprising said surgeon utilizing a plurality of commands, and wherein at least one of said plurality of commands include a voice command, a foot pedal command, and a physical gesture command. 
Segev discloses further comprising said surgeon utilizing a plurality of commands, and wherein at least one of said plurality of commands include a voice command, a foot pedal command, and a physical gesture command (para. 44, 108, 178, & 257 of Segev). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Segev within Samadani, Haider, and McKinnon. The motivation for doing so would have been to allow the surgeon to devote the bulk of his attention as well as his hands to the surgical procedure (para. 91 of Segev). Claim(s) 4 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samadani et al. (WO 2018/195529 A1) in view Haider et al. (US 2014/0107471 A1), in view of McKinnon et al. (US 2020/0275976 A1), in view of Segev et al. (US 2021/0382559 A1), in view of Charron et al. (US 2018/0228555 A1), in view of Nygaard Espeland et al. (US 2022/0296081 A1), and further in view of Yamada et al. (US 2021/0026464 A1). (A) Referring to claims 4 and 13, Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland do not disclose wherein a cloud capture network utilizes preferences set for said virtual pointing device. Yamada discloses wherein a cloud capture network utilizes preferences set for said virtual pointing device (para. 145 & 148 of Yamada). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Yamada within Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland. The motivation for doing so would have been to enhance user experience (para. 148 of Yamada). Claim(s) 9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samadani et al. (WO 2018/195529 A1) in view Haider et al. (US 2014/0107471 A1), in view of McKinnon et al. (US 2020/0275976 A1), in view of Segev et al. (US 2021/0382559 A1), in view of Charron et al. (US 2018/0228555 A1), in view of Nygaard Espeland et al. (US 2022/0296081 A1), and further in view of Walle-Jensen et al. (US 2017/0124768 A1). (A) Referring to claim 9, Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland do not expressly disclose wherein said video overlay identifies an anatomical region of interest during surgery by applying image analysis to the surgical video, wherein the image analysis comprises at least one of video enhancement, tissue analysis, ICG quantification, or polyp detection, and wherein the identified anatomical region is visually distinguished from surrounding anatomical regions in the video overlay. Walle-Jensen discloses wherein said video overlay identifies an anatomical region of interest during surgery by applying image analysis to the surgical video, wherein the image analysis comprises at least one of video enhancement, tissue analysis, ICG quantification, or polyp detection, and wherein the identified anatomical region is visually distinguished from surrounding anatomical regions in the video overlay (Fig. 3, para. 4, 6-8, 104, 105, 142 of Walle-Jensen). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Walle-Jensen within Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland. The motivation for doing so would have been to aid in comparative assessments (para. 4 of Walle-Jensen). 
(B) Referring to claim 18, Haider, McKinnon, Segev, Charron, and Nygaard Espeland do not expressly disclose further comprising applying image analysis to the surgical video during a procedure to identify an anatomical region of interest, wherein the image analysis includes at least one of video enhancement, tissue analysis, ICG quantification, or polyp detection, and generating a video overlay that visually distinguishes the identified anatomical region from surrounding anatomical regions. Walle-Jensen discloses applying image analysis to the surgical video during a procedure to identify an anatomical region of interest, wherein the image analysis includes at least one of video enhancement, tissue analysis, ICG quantification, or polyp detection, and generating a video overlay that visually distinguishes the identified anatomical region from surrounding anatomical regions (Fig. 3, para. 4, 6-8, 104, 105, 142 of Walle-Jensen). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Walle-Jensen within Samadani, Haider, McKinnon, Segev, Charron, and Nygaard Espeland. The motivation for doing so would have been to aid in comparative assessments (para. 4 of Walle-Jensen).

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samadani et al. (WO 2018/195529 A1) in view of Haider et al. (US 2014/0107471 A1), in view of McKinnon et al. (US 2020/0275976 A1), in view of Segev et al. (US 2021/0382559 A1), in view of Charron et al. (US 2018/0228555 A1), in view of Nygaard Espeland et al. (US 2022/0296081 A1), in view of Ingle (US 2019/0238791 A1), and further in view of Walle-Jensen et al. (US 2017/0124768 A1).

(A) Referring to claim 20, Samadani, Haider, McKinnon, Segev, Charron, Nygaard Espeland, and Ingle do not expressly disclose wherein said user interface overlay presents an identified anatomical region of interest during surgery, the anatomical region being identified by applying image analysis to the surgical video using at least one of tissue analysis, ICG quantification, polyp detection, or video enhancement, and wherein the identified anatomical region is visually distinguished in the overlay from surrounding anatomy. Walle-Jensen discloses wherein said user interface overlay presents an identified anatomical region of interest during surgery, the anatomical region being identified by applying image analysis to the surgical video using at least one of tissue analysis, ICG quantification, polyp detection, or video enhancement, and wherein the identified anatomical region is visually distinguished in the overlay from surrounding anatomy (Fig. 3, para. 4, 6-8, 104, 105, 142 of Walle-Jensen). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Walle-Jensen within Samadani, Haider, McKinnon, Segev, Charron, Nygaard Espeland, and Ingle. The motivation for doing so would have been to aid in comparative assessments (para. 4 of Walle-Jensen).

Response to Arguments

Applicant’s arguments with respect to claim(s) 1, 10, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's additional arguments filed 1/14/26 have been fully considered but they are not persuasive.
Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 1/14/26.

(1) Applicant argues that claims 1, 10, and 19, as amended, are not obvious over the cited references.

(A) As per the first argument, regarding the newly added limitations of “wherein the user interface overlay is confined to image regions corresponding to the surgical tool, selectively presenting previously captured image data associated with anatomical regions obscured by the surgical tool,” see paragraphs 229 and 134 of Segev which disclose: “The virtual tool appears in the display as a 3D tool that the users see as an overlay on the live image via the HMD. This is based on 3D models of tools that are available to the user to choose from via the menu/GUI in this mode. UI 160 allows storing snapshots and videos and recording audio for subsequent use. The audio recordings may be converted to text to use as notes, operational summaries, naming and tagging files. The keywords tags may be subsequently applied by a deep learning algorithm. In some embodiments, system 100 supports an automatic generation of a surgical report, based on pre-determined templates having placeholders for pre-op data, snapshots from the live image, voice-to-text of recorded notes / summaries. The automatic report generation supports adding identifying data, such as the name of patient 122, name of surgeon 120, date, time and duration of the surgical procedure, and the like. Where suitable, machine learning and image processing may be applied to acquire data related to the procedure. For example, the type of surgical instruments used, and the image feed of the surgical procedure may be mined to add data to the report. UI 160 provides a “send-to-report” menu-item allowing the surgeon upload selected snapshots and pre-op images to the report.” As such, it is unclear how the language of the claims differs from the applied prior art.

In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, the motivations to combine came directly from the references.

In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

In addition, Applicant is reminded that the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references.
Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., selectively presenting previously captured image data to reconstruct anatomy obscured by the surgical tool, occlusion-aware overlay rendering, and maintaining spatial alignment under low-latency conditions during live tool manipulation) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENA NAJARIAN whose telephone number is (571)272-7072. The examiner can normally be reached Monday - Friday 9:30 am-6 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LENA NAJARIAN/Primary Examiner, Art Unit 3687

Prosecution Timeline

May 19, 2021
Application Filed
Jun 08, 2023
Non-Final Rejection — §103, §112
Nov 14, 2023
Response Filed
Feb 13, 2024
Final Rejection — §103, §112
Jul 20, 2024
Request for Continued Examination
Jul 24, 2024
Response after Non-Final Action
Sep 15, 2024
Non-Final Rejection — §103, §112
Jan 20, 2025
Response Filed
Apr 09, 2025
Final Rejection — §103, §112
Jul 14, 2025
Request for Continued Examination
Jul 17, 2025
Response after Non-Final Action
Sep 10, 2025
Non-Final Rejection — §103, §112
Jan 14, 2026
Response Filed
Apr 07, 2026
Final Rejection — §103, §112 (current)
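The round count and elapsed prosecution time used in the projections further down can be tallied directly from the timeline above. Below is a minimal sketch of that tally, assuming the events are available as (date, label) pairs; the labels and the count_oa_rounds helper are illustrative choices, not fields from any actual data export.

```python
# Minimal sketch: tally office-action rounds and elapsed prosecution time
# from the timeline above. Event labels and the helper name are illustrative
# assumptions, not fields from any actual export.
from datetime import date

timeline = [
    (date(2021, 5, 19), "Application Filed"),
    (date(2023, 6, 8), "Non-Final Rejection"),
    (date(2023, 11, 14), "Response Filed"),
    (date(2024, 2, 13), "Final Rejection"),
    (date(2024, 7, 20), "Request for Continued Examination"),
    (date(2024, 7, 24), "Response after Non-Final Action"),
    (date(2024, 9, 15), "Non-Final Rejection"),
    (date(2025, 1, 20), "Response Filed"),
    (date(2025, 4, 9), "Final Rejection"),
    (date(2025, 7, 14), "Request for Continued Examination"),
    (date(2025, 7, 17), "Response after Non-Final Action"),
    (date(2025, 9, 10), "Non-Final Rejection"),
    (date(2026, 1, 14), "Response Filed"),
    (date(2026, 4, 7), "Final Rejection"),
]

def count_oa_rounds(events):
    """Count each non-final or final rejection as one office-action round."""
    return sum(1 for _, label in events if "Rejection" in label)

filed, latest = timeline[0][0], timeline[-1][0]
# Whole months elapsed, subtracting one month if the latest day-of-month
# has not yet reached the filing day-of-month.
months = (latest.year - filed.year) * 12 + (latest.month - filed.month) - (latest.day < filed.day)

print(count_oa_rounds(timeline))          # 6 rejections issued so far
print(f"{months // 12}y {months % 12}m")  # 4y 10m of prosecution to date
```

Counting each rejection as one round gives six rounds through the current final action, which sits just below the 7-8 rounds projected in the Prosecution Projections section.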

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573489
INFUSION PUMP LINE CONFIRMATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12562247
PATIENT DATA MANAGEMENT PLATFORM
2y 5m to grant • Granted Feb 24, 2026
Patent 12542208
ALERT NOTIFICATION DEVICE OF DENTAL PROCESSING MACHINE, ALERT NOTIFICATION SYSTEM, AND NON-TRANSITORY RECORDING MEDIUM STORING COMPUTER PROGRAM FOR ALERT NOTIFICATION
2y 5m to grant • Granted Feb 03, 2026
Patent 12488880
Discovering Context-Specific Serial Health Trajectories
2y 5m to grant • Granted Dec 02, 2025
Patent 12488894
SYSTEM AND METHODS FOR MACHINE LEARNING DRIVEN CONTOURING CARDIAC ULTRASOUND DATA
2y 5m to grant • Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

7-8
Expected OA Rounds
38%
Grant Probability
78%
With Interview (+39.3%)
5y 0m
Median Time to Grant
High
PTA Risk
Based on 464 resolved cases by this examiner. Grant probability derived from career allow rate.
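The footnote above states that grant probability is derived from the examiner's career allow rate, and the with-interview figure is shown as a lift over it. Below is a minimal sketch of one plausible way those numbers are computed, assuming per-case records with granted and had_interview flags; the field names, the hypothetical case counts, and the exact lift formula are assumptions, since the page does not publish its methodology.

```python
# Minimal sketch of one plausible derivation of the projection figures,
# assuming per-case records with two boolean flags. Field names, the
# hypothetical counts, and the lift formula are assumptions.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Assumed reading: allow rate among interviewed cases minus the overall
    career allow rate, expressed in percentage points."""
    with_interview = [c for c in cases if c.had_interview]
    return 100 * (allow_rate(with_interview) - allow_rate(cases))

# Hypothetical population of 464 resolved cases, shaped so the headline
# percentages roughly match the ones shown above (38% overall, 78% with
# an interview).
cases = (
    [ResolvedCase(True, True)] * 70 + [ResolvedCase(False, True)] * 20
    + [ResolvedCase(True, False)] * 108 + [ResolvedCase(False, False)] * 266
)

with_interview = [c for c in cases if c.had_interview]
print(f"{allow_rate(cases):.0%}")           # 38% career allow rate
print(f"{allow_rate(with_interview):.0%}")  # 78% among cases with an interview
print(f"+{interview_lift(cases):.1f} pts")  # roughly +39 points under these counts
```

Under this reading the lift is simply the with-interview allow rate minus the overall allow rate in percentage points; a different definition (for example, with-interview minus without-interview) would yield a larger figure, so this is only one consistent interpretation of the displayed numbers.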
