DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is responsive to the amendment filed 19 December 2025. As per the amendment: claims 1-2, 11-12, and 18 have been amended, claim 20 has been added, and no claims have been cancelled. Thus, claims 1-20 are currently pending and under examination.
Response to Arguments
Response to Arguments Regarding 35 USC § 112
Applicant’s amendments to the claims have overcome the 112(b) rejection previously set forth in the Non-Final Office Action mailed 27 June 2025 for claims 1-19.
Response to Arguments Regarding 35 USC § 102/103
Applicant’s arguments, see pgs. 7-9, filed 19 December 2025, with respect to the rejection(s) of claim(s) 1-4 and 10-11 under 35 U.S.C. 102(a)(2) as anticipated by Buch et al. (US 2021/0307841 A1), hereinafter Buch, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chow et al. (US Patent 10,758,309 B1), hereinafter Chow.
Applicant has amended independent claims 1 and 11 to recite the limitation of “automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, an augmentation of the series of visual images, a detected change in a behavior of the tissue element, and a detected change in a behavior of another tissue element in the series of visual images” and further argues that Buch and Manzke (used previously for Claim 12) fail to teach this limitation. Examiner agrees and has instead used Chow to teach the newly added limitation and the limitations of claim 12.
Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the method comprises automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element (Column 1, lines 47-58: “The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)”, Column 2, lines 39-47: “In some implementations, the control (or facilitated control) of a surgical tool may be automatic”, Column 4, lines 23-33: “For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.”, Column 4, lines 34-61).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the method further comprise automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do this to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Therefore, claims 1-4, 10-12, and 20 are now rejected under 35 USC 103 (Buch in view of Chow).
No additional arguments were specifically presented for the previously set forth 35 U.S.C. 103 rejections of dependent claims 5-9 and 13-19, nor with respect to the previously cited references: Kimball, Kersting, Knopp, Hoffman, Panescu, and Lightcap.
Therefore, claims 5-9 and 13-19 remain rejected as described in detail below.
Information Disclosure Statement
The information disclosure statements (IDS) were submitted on 23 September 2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 10-12, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch et al. (US 2021/0307841 A1, previously cited), hereinafter Buch in view of Chow et al. (US Patent 10,758,309 B1), hereinafter Chow.
Regarding claim 1, Buch discloses a method of operating a surgical system (Abstract: “A method for generating and providing artificial intelligence assisted surgical guidance”), the method comprising:
receiving a series of visual images from an imaging system of a surgical field (Claim 21: “receiving, by the neural network, a live feed of video images from a surgery”);
extracting a plurality of regions of interest in the surgical field using information provided by an artificial intelligence (AI) model based on the series of visual images (Claim 21: “classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video image”);
identifying a surgical tool in a first region of interest of the plurality of regions of interest ([0042] “A region proposal network (RPN) receives image data from the backbone and outputs proposed regions of interest to inform object segmentation. The mask heads create proposed labels for anatomical objects in the image to be classified. The RCNN produces a surgical image with instance segmentations around classified…surgical instruments.”);
identifying a tissue element in a second region of interest of the plurality of regions of interest ([0042] “A region proposal network (RPN) receives image data from the backbone and outputs proposed regions of interest to inform object segmentation. The mask heads create proposed labels for anatomical objects in the image to be classified. The RCNN produces a surgical image with instance segmentations around classified anatomical structures....”);
Buch fails to explicitly disclose automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, an augmentation of the series of visual images, a detected change in a behavior of the tissue element, and a detected change in a behavior of another tissue element in the series of visual images.
However, Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the method comprises automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element (Column 1, lines 47-58: “The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)”, Column 2, lines 39-47: “In some implementations, the control (or facilitated control) of a surgical tool may be automatic”, Column 4, lines 23-33: “For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.”, Column 4, lines 34-61).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the method further comprise automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do this to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Regarding claim 2, Buch in view of Chow teaches the method of claim 1 (as shown above). Buch further discloses the method further comprising: providing feedback to a human operator of the system ([0032] “neural network 100 may be trained to identify a pointer instrument having a pointer end that when brought in close proximity to a tissue type in the live video feed triggers surgical guidance generator 104 to generate output identifying the tissue type”, [0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking and feedback for surgeons regarding normal stepwise procedure flow (chronological relationships) and upcoming “hidden” objects (positional relationships)…These post-processed outputs of the surgical guidance generator will be clinically tailored, surgery- and even surgeon-specific including information such as a surgical roadmap of suggested next steps, movement efficiency metrics, complication avoidance warnings”, [0033]) wherein the feedback comprises: an augmentation of the series of visual images ([0033], [0005] “outputting, in real time, by audio and video-overlay means, tailored surgical guidance based on the classified at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images.”, [0034] “surgical guidance generator 104 may process data from multiple input sources, including pre-operative imaging, patient-specific risk factors, and intraoperative vital signs and output such data superimposed on the live video feed of the surgery to provide surgical guidance to the surgeon.”).
Regarding claim 3, Buch in view of Chow teaches the method of claim 2 (as shown above). Buch further discloses wherein the augmentation comprises a visual label identifying the surgical tool, a label identifying the tissue element, or a combination thereof ([0044] “FIGS. 6A and 6B are computer screen shots of surgical video frames at different times illustrating segmentation of anatomical objects and instruments using the architecture illustrated in FIG. 4. The textual labels in FIGS. 6A and 6B were added manually for illustrative purposes but could have been added by the network illustrated in FIG. 4. In FIG. 6A, the video frame shows an artery, the spinal cord, and an instrument, labeled INSTRUMENT1. The artery, spinal cord and INSTRUMENT1 are segmented by the network illustrated in FIG. 4, and that segmentation is depicted in the video frames as transparent color of different colors on top of each anatomical object or image.”).
Regarding claim 4, Buch in view of Chow teaches the method of claim 2 (as shown above). Buch further discloses wherein the augmentation comprises a proximity warning indicating that the surgical tool is too close to the tissue element ([0028] “FIG. 1B is a diagram illustrating a hierarchy of functions performed by the surgical guidance generator illustrated in FIG. 1A. At the lowest level of the hierarchy, anatomical and surgical objects are identified. At the next level, movement of the anatomical and surgical objects is tracked. At this level critical structure proximity warnings along with procedural next-step suggestions may be generated”, [0032] “neural network 100 may be trained to identify a pointer instrument having a pointer end that when brought in close proximity to a tissue type in the live video feed triggers surgical guidance generator 104 to generate output identifying the tissue type.”).
Regarding claim 10, Buch in view of Chow teaches the method of claim 1 (as shown above). Buch further discloses wherein the series of visual images is received and processed in real time ([0015] “the subject matter described herein includes a technology that can assist surgeons in real-time to recognize objects in their operative environment, notify them of situations that may require additional caution prior to executing an action, and assist surgeons in crucial decision-making situations.”, [0033], [0025] “This wide-reaching application of real-time, automated decision-making augmentation, surgical roadmaps, complication avoidance warnings, procedural stepwise cost estimates, predictive analytics, and tailored surgical guidance using artificial intelligence for surgeons is entirely novel; and our innovative AOA could redefine the forefront of cutting-edge surgical care”, [0020], [0005] “The method subsequently includes receiving, by the neural network, a live feed of video images from the surgery. The method further includes classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulation in the live feed of video images. The method further includes outputting, in real time, by audio and video-overlay means, tailored surgical guidance based on the classified at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images”).
Regarding claim 11, Buch discloses a system for performing an ophthalmic surgical procedure ([0002] “the subject matter described herein relates to methods, systems, and computer readable media for generating and providing artificial intelligence assisted surgical guidance.”, [0022] shows potential usage in cataract surgery), the system comprising:
a surgical tool ([0005] “surgical objects (i.e. instruments and items)”, [0032] “pointer instrument”, [0039] “surgical instrument”);
a computer processor ([0006] “processor of a computer”, Claim 11: “the system comprising: at least one processor”);
a display device coupled to the computer processor ([0035] “Surgical guidance generator 104, in one example, may output the surgical guidance on a video screen that surgeons use during image guided surgery.”);
an imaging system coupled to the computer processor (Claim 11: “a neural network implemented by the at least one processor…the neural network being configured to receive, a live feed of video images”, [0018] “a video feed of the operative field can be obtained with either an overhead operating room (OR) camera, microscope, or endoscope”);
a memory device, coupled to the computer processor, storing instructions executable by the computer processor ([0006] “a non-transitory computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps”) to operate an artificial intelligence (AI) model (Claim 21: neural network) configured to receive a series of visual images of a surgical field from the imaging system (Claim 21: “receiving, by the neural network, a live feed of video images from a surgery”);
wherein the instructions further cause the computer processor, the AI model, or a combination thereof to:
extract a plurality of regions of interest in the surgical field using information provided by an artificial intelligence (AI) model based on the series of visual images (Claim 21: “classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video image”);
identify the surgical tool in a first region of interest of the plurality of regions of interest ([0042] “A region proposal network (RPN) receives image data from the backbone and outputs proposed regions of interest to inform object segmentation. The mask heads create proposed labels for anatomical objects in the image to be classified. The RCNN produces a surgical image with instance segmentations around classified…surgical instruments.”);
identify a tissue element in a second region of interest of the plurality of regions of interest ([0042] “A region proposal network (RPN) receives image data from the backbone and outputs proposed regions of interest to inform object segmentation. The mask heads create proposed labels for anatomical objects in the image to be classified. The RCNN produces a surgical image with instance segmentations around classified anatomical structures....”);
Buch fails to explicitly disclose automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, an augmentation of the series of visual images, a detected change in a behavior of the tissue element, and a detected change in a behavior of another tissue element in the series of visual images.
However, Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the method comprises automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element (Column 1, lines 47-58: “The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)”, Column 2, lines 39-47: “In some implementations, the control (or facilitated control) of a surgical tool may be automatic”, Column 4, lines 23-33: “For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.”, Column 4, lines 34-61).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the system further perform automatically adjusting an operating parameter of the surgical tool based on the AI model and one or more items selected from a group consisting of: the tissue element, the relative placement of the surgical tool and the tissue element, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do this to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Regarding claim 12, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch fails to explicitly disclose wherein the surgical tool is communicatively coupled to the computer processor and configured to operate automatically based upon a signal received from the computer processor.
However, Chow discloses wherein the surgical tool is communicatively coupled to the computer processor (see Figures 1 and 4, Column 14, lines 12-14: “Surgical tool controller 180 may include one or more devices configured to transmit the command signals directly to each surgical tool”), and configured to operate automatically based upon a signal received from the computer processor (Column 14, lines 14-27: “For instance, the one or more devices of surgical tool controller 180 may be physically attached to each individual surgical tool. When surgical tool controller 180 receives a command signal, surgical tool controller 180 may communicate with the one or more devices physically attached to the surgical tool to control the surgical tool in accordance with the received command. As a non-limiting example, a blocking device may be operable to physically block a laparoscopic diathermy energy device from supplying energy (e.g., by blocking or temporarily creating an open circuit like a switch and/or closing the open circuit to supply energy), or in the case of regulating control, the blocking device may be a regulator configured to incrementally control an amount of energy supplied.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow such that the surgical tool is communicatively coupled to the computer processor and configured to operate automatically based upon a signal received from the computer processor, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do this to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Regarding claim 20, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch further discloses the instructions further causing the computer processor, the AI model, or a combination thereof to provide feedback to a human operator of the system based on the relative placement of the surgical tool and the tissue element ([0032] “neural network 100 may be trained to identify a pointer instrument having a pointer end that when brought in close proximity to a tissue type in the live video feed triggers surgical guidance generator 104 to generate output identifying the tissue type”, [0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking and feedback for surgeons regarding normal stepwise procedure flow (chronological relationships) and upcoming “hidden” objects (positional relationships)…These post-processed outputs of the surgical guidance generator will be clinically tailored, surgery- and even surgeon-specific including information such as a surgical roadmap of suggested next steps, movement efficiency metrics, complication avoidance warnings”, [0033]).
Alternatively, Chow discloses the instructions further causing the computer processor, the AI model, or a combination thereof to provide feedback to a human operator of the system based on the relative placement of the surgical tool and the tissue element (Column 10, lines 28-49: “Haptic feedback can be provided by certain laparoscopic tools to notify surgeons regarding attributes of the material being operated on by an end of the laparoscopic tool. In some implementations, the haptic feedback signals can be combined with the image or video data to facilitate control of surgical tools. As a non-limiting example, computer-assisted surgical system 100 may recognize an “avoidance zone” within a patient's stomach with a confidence of 60%. Computer-assisted surgical system 100 may analyze related haptic feedback signals (being received from the laparoscopic tool or any other tool) to assist in the determination of whether or not the video feed is showing an “avoidance zone” within the camera's field of view. The haptic feedback signal may provide a certain haptic signal detectable by the surgeon when the laparoscopic tool is touching tissue that may indicate a likelihood of being near an “avoidance zone.” The present disclosure is not limited to haptic feedback signals. The one or more data streams received from the real-time data collection system 145 may include digital data (e.g., video data) and/or analogue data (e.g., a signal representing a patient's heart rate).”, Column 13, lines 14-55).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the instructions further causing the computer processor, the AI model, or a combination thereof to provide feedback to a human operator of the system based on the relative placement of the surgical tool and the tissue element, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do this to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claims 2 and 11 above, and further in view of Kimball et al. (US 2022/0331054 A1, previously cited), hereinafter Kimball.
Regarding claim 5, Buch in view of Chow teaches the method of claim 2 (as shown above). Buch and Chow, alone or in combination, fail to teach wherein the augmentation comprises an indication that the surgical tool is misplaced.
However, Kimball teaches an augmented reality display system used during surgical procedures (Abstract) wherein the augmentation comprises an indication that the surgical tool is misplaced ([0161] “FIG. 11 is an augmented image 100 of a live feed of a surgical area 118 as visualized through a laparoscopic camera during a minimally invasive surgical procedure indicating appropriate tissue 112 captured between jaws 110 of a surgical instrument end effector 108…The augmented image 100 shows a virtual graphical alert overlay 106 superimposed on the anvil 110 of the end effector 108… A first superimposed alert 104 informs that the tissue 112 grasped with the jaws 110 at the proximal end of the end effector 108 is out of range. A second superimposed alert 116 informs that the tissue 112 grasped with the jaws 110 at the medial portion of the end effector 108 is within reload range. A third superimposed alert 114 informs that the tissue 112 grasped with the jaws at the distal end of the end effector 108 is over the cut line.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Kimball to have the augmentation comprise an indication that the surgical tool is misplaced, as these prior art references and the instant application are directed to real-time augmented feeds used in surgical procedures. One would be motivated to do this to alert the surgeon or user of incompatible usage (i.e., out of range or misplaced), allowing the surgeon to correctly place the tool, as recognized by Kimball ([0159]).
Regarding claim 19, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch further discloses wherein the augmentation comprises one or more items selected from a group consisting of: a visual label identifying the surgical tool, a label identifying the tissue element, or a combination thereof ([0044] “FIGS. 6A and 6B are computer screen shots of surgical video frames at different times illustrating segmentation of anatomical objects and instruments using the architecture illustrated in FIG. 4. The textual labels in FIGS. 6A and 6B were added manually for illustrative purposes but could have been added by the network illustrated in FIG. 4. In FIG. 6A, the video frame shows an artery, the spinal cord, and an instrument, labeled INSTRUMENT1. The artery, spinal cord and INSTRUMENT1 are segmented by the network illustrated in FIG. 4, and that segmentation is depicted in the video frames as transparent color of different colors on top of each anatomical object or image.”); a proximity warning indicating that the surgical tool is too close to the tissue element ([0022] “The AOA will also be able to caution surgeons if they are approaching structures designated as “do not manipulate,” such as spinal cord during intradural spine surgery or the posterior capsule of the lens in cataract surgery.”,[0028] “At this level critical structure proximity warnings along with procedural next-step suggestions may be generated”).
Buch and Chow, alone or in combination, fail to teach wherein the augmentation comprises an indication that the surgical tool is misplaced.
However, Kimball teaches an augmented reality display system used during surgical procedures (Abstract) wherein the augmentation comprises an indication that the surgical tool is misplaced ([0161] “FIG. 11 is an augmented image 100 of a live feed of a surgical area 118 as visualized through a laparoscopic camera during a minimally invasive surgical procedure indicating appropriate tissue 112 captured between jaws 110 of a surgical instrument end effector 108…The augmented image 100 shows a virtual graphical alert overlay 106 superimposed on the anvil 110 of the end effector 108… A first superimposed alert 104 informs that the tissue 112 grasped with the jaws 110 at the proximal end of the end effector 108 is out of range. A second superimposed alert 116 informs that the tissue 112 grasped with the jaws 110 at the medial portion of the end effector 108 is within reload range. A third superimposed alert 114 informs that the tissue 112 grasped with the jaws at the distal end of the end effector 108 is over the cut line.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Kimball to have the augmentation comprise an indication that the surgical tool is misplaced, as these prior art references and the instant application are directed to real-time augmented feed used in surgical procedures. One would be motivated to do this to alert the surgeon or user of incompatible usage (i.e. out of range or misplaced) allowing the surgeon to correctly place the tool, as recognized by Kimball ([0159]).
Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claim 2 above, and further in view of Kersting (US Patent 8,903,145 B2, previously cited), hereinafter Kersting.
Regarding claim 6, Buch in view of Chow teaches the method of claim 2 (as shown above). Buch further discloses wherein the augmentation comprises a template overlay on the series of images (Claim 9: “outputting the surgical guidance includes overlaying the surgical guidance on the live feed of video images or onto a surgical field using augmented reality”, Figure 3C), the template indicating one or more placements of the surgical tool to perform a surgical procedure ([0044] “the instrument may be overlaid with a purple transparent overlay. FIG. 6B illustrates a subsequent video frame and how the segmentation performed by the network illustrated in FIG. 4 tracks movement of the anatomical objects and instruments. In FIG. 6B, the artery has been moved to a new position by a second instrument, labeled INSTRUMENT2. Thus, the trained RCNN illustrated in FIG. 4 segments anatomical objects and surgical instruments in a live video feed from surgery and tracks movements of the instruments and anatomical objects over time.”).
Alternatively, Kersting teaches image processing for computer-aided eye surgery to assist a surgeon in performing the surgery, wherein the augmentation comprises a template overlay on the series of images, the template indicating one or more placements of the surgical tool to perform a surgical procedure (Column 7, lines 28-32: “The result of the "surgery planner", which actually consists of a computer which executes a suitable graphical processing to produce the enhanced reference image, is then the reference image or diagnostic image which is enriched by context information.”, Column 7, lines 42-46: “The thus enriched reference image then is inputted into a processing unit (which may again be a PC, according to one embodiment even the PC which also forms the surgery planner) which performs registration of the reference image with the real time image of the eye (live image)”, Column 8, lines 41-46: “Using registration and tracking for IOL surgery, an intermediate surgery planning step can be introduced, where a doctor is planning--after receiving the diagnostic data and before the surgery--the best fitting incisions for the patient. The incisions can be tagged and labelled as visual context information on the diagnostic image.”, Column 8, lines 26-30: “For every IOL surgery the doctor has to place multiple cuts (=incisions) in the eye to bring in surgery tools under the cornea for e.g., removing the existing lens, inducting the folded IOL, positioning the IOL, inducing and removing temporary fluids.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Kersting to have the augmentation comprise a template overlay on the series of images, the template indicating one or more placements of the surgical tool to perform a surgical procedure, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to speed up and simplify the surgical procedure while ensuring accuracy and safety, as recognized by Kersting (Column 5, lines 4-15).
Regarding claim 8, Buch in view of Chow in view of Kersting teaches the method of claim 6 (as shown above). Buch and Chow, alone or in combination, fail to teach wherein the template comprises a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue.
However, Kersting teaches image processing for computer-aided eye surgery to assist a surgeon in performing the surgery, wherein the template comprises a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue (Column 8, lines 41-46: “Using registration and tracking for IOL surgery, an intermediate surgery planning step can be introduced, where a doctor is planning--after receiving the diagnostic data and before the surgery--the best fitting incisions for the patient. The incisions can be tagged and labelled as visual context information on the diagnostic image.”, Figure 4).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Kersting to have the template comprise a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to speed up and simplify the surgical procedure while ensuring accuracy and safety, as recognized by Kersting (Column 5, lines 4-15).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow in view of Kersting as applied to claim 6 above, and further in view of Knopp et al. (WO 93/16631 A1, previously cited), hereinafter Knopp.
Regarding claim 7, Buch in view of Chow in view of Kersting teaches the method of claim 6 (as shown above). Buch, Chow, and Kersting, alone or in combination, fail to teach wherein the template comprises a visual indication of regions for application of a laser treatment.
However, Knopp teaches a method, apparatus, and system for template-controlled, precision laser interventions in microsurgery (Abstract) wherein the template comprises a visual indication of regions for application of a laser treatment (pg. 3, lines 4-26: “For corneal refractive surgery, the above nine considerations reduce to the following objectives (in accordance with the present invention described below): (1) identify the location on or in the cornea to be treated, (2) assure that the target is at the desired distance from the apparatus, determine the topography of the cornea, and determine the location of sensitive tissues to be avoided…(4) provide a laser beam which can be focused onto the precise locations designated by the user such that peripheral damage is limited to within tolerable levels both surrounding the target site and along the laser beam path anterior and posterior to the target site, (5) provide a user interface wherein the user can either draw, adjust, or designate particular template patterns overlaid on a live video image of the cornea and provide the means for converting the template pattern into a sequence of automatic motion instructions which will traverse the laser beam to focus sequentially on a number of points in three dimensional space which will in turn replicate the designated template pattern into the corresponding surgical intervention”, pg. 35, lines 15-20: “the surgeon may control the firing of the laser with templates which can be superimposed over an image of the tissue being operated upon, and which enable an automatic tracing of a desired laser firing pattern based upon prior experience or a surgeon’s insights with similar surgical procedures. The templates may be pre-programmed or generated anew for each patient, as the case requires”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch, Chow, and Kersting to incorporate the teachings of Knopp to have the template comprise a visual indication of regions for application of a laser treatment, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to facilitate precision laser interventions, as recognized by Knopp (pg. 2, lines 2-3).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claim 2 above, and further in view of Hoffman et al. (US 2018/0028054 A1, previously cited), hereinafter Hoffman.
Regarding claim 9, Buch in view of Chow teaches the method of claim 2 (as shown above). Buch and Chow, alone or in combination, fail to explicitly teach wherein the augmentation comprises a clarification of the focus of the image.
However, Hoffman teaches a method and system for controlling a telesurgical tool in an image-guided surgical system (Abstract, [0010]) wherein the augmentation comprises a clarification of the focus of the image ([0045] “Automatic camera following may be combined together with a digital zoom in some embodiments of the invention such that the digital zoomed portion of an image tracks or follow a surgeon's motions, such as the gaze of his pupils, without requiring mechanical movement of the endoscopic camera. If the surgeon's motions indicate that the digital zoomed portion extend beyond pixels of the high definition digital image being captured, the endoscopic camera may be mechanically moved or panned automatically.”, [0046] “different sensing modalities may be used to detect a surgeon's motion so that a digital zoomed portion of interest of an image may be moved around within the pixels of a high definition digital image. Some different sensing modalities include (1) robotic surgical tool tracking, (2) surgeon gaze tracking; (3) or a discrete user interface.”, Figure 5A and Figure 6A).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Hoffman to have the augmentation comprise a clarification of the focus of the image, as these prior art references are directed to image-guided surgical procedures. One would be motivated to do this to improve the surgeon’s vision of the surgical site, as recognized by Hoffman ([0102]).
Claims 13-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claim 11 above, and further in view of Knopp et al. (WO 93/16631 A1), hereinafter Knopp.
Regarding claim 13, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch and Chow, alone or in combination, fail to teach the system further comprising a template library storing a plurality of surgical templates, each surgical template providing augmentation data for one or more surgical procedures.
However, Knopp teaches a method, apparatus, and system for template-controlled, precision laser interventions in microsurgery (Abstract) wherein the system further comprises a template library storing a plurality of surgical templates (“a library of patterns is available so that the computer can generate templates based on the optical correction prescribed (generated off-line by the physician's "refraction" of the patient) and the measured topography (which templates will automatically correct for edge effects, based on built-in expert-system computational capability)… A physician may therefore choose to select from a set of pre-existing templates containing his preferred prescriptions, lay the template, in effect, on the computer-generated image of the region, and re-size and/or re-scale the template to match the particular patient/eye characteristics”, Claim 82: “the template means includes means for enabling selection of a template from a library of stored preprogrammed templates”), each surgical template providing augmentation data for one or more surgical procedures (pg. 25, lines 4-7: “templates can also be generated and stored in similar manner for procedures other than corneal refractive surgery, including iridotomy, posterior capsulotomy, trabeculoplasty, keratotomy, and others.”, Claim 21: “replicate the designated template pattern onto the surgery site”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Knopp to have the system further comprise a template library storing a plurality of surgical templates, each surgical template providing augmentation data for one or more surgical procedures, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to facilitate precise laser intervention and to limit peripheral damage, as recognized by Knopp (pg. 2, lines 2-3; pg. 3, lines 16-17).
Regarding claim 14, Buch in view of Chow in view of Knopp teaches the system of claim 13 (as shown above). Buch and Chow, alone or in combination, fail to teach wherein the augmentation comprises an overlay on the series of images defined by a surgical template selected from the plurality of surgical templates.
However, Knopp further teaches wherein the augmentation comprises an overlay on the series of images defined by a surgical template selected from the plurality of surgical templates (pg. 35, lines 15-20: “the surgeon may control the firing of the laser with templates which can be superimposed over an image of the tissue being operated upon, and which enable an automatic tracing of a desired laser firing pattern based upon prior experience or a surgeon’s insights with similar surgical procedures. The templates may be pre-programmed or generated anew for each patient, as the case requires”, pg. 3, lines 19-26: “(5) provide a user interface wherein the user can either draw, adjust, or designate particular template patterns overlaid on a live video image of the cornea and provide the means for converting the template pattern into a sequence of automatic motion instructions which will traverse the laser beam to focus sequentially on a number of points in three dimensional space which will in turn replicate the designated template pattern into the corresponding surgical intervention”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Knopp to have the augmentation comprise an overlay on the series of images defined by a surgical template selected from the plurality of surgical templates, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to facilitate precise laser intervention and to limit peripheral damage, as recognized by Knopp (pg. 2, lines 2-3; pg. 3, lines 16-17).
Regarding claim 16, Buch in view of Chow in view of Knopp teaches the system of claim 14 (as shown above). Buch and Chow, alone or in combination, fail to teach wherein the template comprises a visual indication of regions for application of a laser treatment.
However, Knopp teaches wherein the template comprises a visual indication of regions for application of a laser treatment (pg. 3, lines 4-26: “For corneal refractive surgery, the above nine considerations reduce to the following objectives (in accordance with the present invention described below): (1) identify the location on or in the cornea to be treated, (2) assure that the target is at the desired distance from the apparatus, determine the topography of the cornea, and determine the location of sensitive tissues to be avoided…(4) provide a laser beam which can be focused onto the precise locations designated by the user such that peripheral damage is limited to within tolerable levels both surrounding the target site and along the laser beam path anterior and posterior to the target site, (5) provide a user interface wherein the user can either draw, adjust, or designate particular template patterns overlaid on a live video image of the cornea and provide the means for converting the template pattern into a sequence of automatic motion instructions which will traverse the laser beam to focus sequentially on a number of points in three dimensional space which will in turn replicate the designated template pattern into the corresponding surgical intervention”, pg. 35, lines 15-20: “the surgeon may control the firing of the laser with templates which can be superimposed over an image of the tissue being operated upon, and which enable an automatic tracing of a desired laser firing pattern based upon prior experience or a surgeon’s insights with similar surgical procedures. The templates may be pre-programmed or generated anew for each patient, as the case requires”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Knopp to have the template comprise a visual indication of regions for application of a laser treatment, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to facilitate precision laser interventions, as recognized by Knopp (pg. 2, lines 2-3).
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow in view of Knopp as applied to claims 14 and 16 above, and further in view of Kersting.
Regarding claim 15, Buch in view of Chow in view of Knopp teaches the system of claim 14 (as shown above). Buch, Chow, and Knopp, alone or in combination, fail to explicitly teach wherein the surgical template indicates one or more placements of the surgical tool to perform a surgical procedure.
However, Kersting teaches image processing for computer-aided eye surgery to assist a surgeon in performing the surgery, wherein the surgical template indicates one or more placements of the surgical tool to perform a surgical procedure (Column 7, lines 28-32: “The result of the "surgery planner", which actually consists of a computer which executes a suitable graphical processing to produce the enhanced reference image, is then the reference image or diagnostic image which is enriched by context information.”, Column 7, lines 42-46: “The thus enriched reference image then is inputted into a processing unit (which may again be a PC, according to one embodiment even the PC which also forms the surgery planner) which performs registration of the reference image with the real time image of the eye (live image)”, Column 8, lines 41-46: “Using registration and tracking for IOL surgery, an intermediate surgery planning step can be introduced, where a doctor is planning--after receiving the diagnostic data and before the surgery--the best fitting incisions for the patient. The incisions can be tagged and labelled as visual context information on the diagnostic image.”, Column 8, lines 26-30: “For every IOL surgery the doctor has to place multiple cuts (=incisions) in the eye to bring in surgery tools under the cornea for e.g., removing the existing lens, inducting the folded IOL, positioning the IOL, inducing and removing temporary fluids.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch, Chow, and Knopp to incorporate the teachings of Kersting to have the surgical template indicate one or more placements of the surgical tool to perform a surgical procedure, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to speed up and simplify the surgical procedure while ensuring accuracy and safety, as recognized by Kersting (Column 5, lines 4-15).
Regarding claim 17, Buch in view of Chow in view of Knopp teaches the system of claim 16 (as shown above). Buch, Chow, and Knopp, alone or in combination, fail to teach wherein the template comprises a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue.
However, Kersting teaches image processing for computer-aided eye surgery to assist a surgeon in performing the surgery, wherein the template comprises a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue (Column 8, lines 41-46: “Using registration and tracking for IOL surgery, an intermediate surgery planning step can be introduced, where a doctor is planning--after receiving the diagnostic data and before the surgery--the best fitting incisions for the patient. The incisions can be tagged and labelled as visual context information on the diagnostic image.”, Figure 4).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch, Chow, and Knopp to incorporate the teachings of Kersting to have the template comprise a visual indication of one or more suitable places for an incision or surgical removal of a portion of tissue, as these prior art references and the instant application are directed to image-guided surgical procedures. One would be motivated to do this to speed up and simplify the surgical procedure while ensuring accuracy and safety, as recognized by Kersting (Column 5, lines 4-15).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claim 11 above, and further in view of Panescu et al. (US 2017/0181808 A1, previously cited), hereinafter Panescu.
Regarding claim 18, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch and Chow, alone or in combination, fail to explicitly teach the surgical tool further comprising a haptic feedback mechanism, wherein the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the computer processor.
However, Panescu teaches a system to provide haptic feedback during a medical procedure wherein the surgical tool further comprises a haptic feedback mechanism, and the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the computer processor ([0122] “In response to a determination that user input is received to enter a proximity threshold, module 2014 configures the computer to use Q3D information to monitor proximity between two or more objects within the surgeon's field of view. Decision module 2016 determines whether the proximity threshold has been crossed. In response to a determination that the proximity threshold has been crossed, module 2018 configures the computer to activate an alarm. The alarm may include…other haptic feedback”, [0139], [0153] “the operating surgeon may be provided with haptic feedback that vibrates or provides manipulating resistance to the control input device 160 of FIG. 5. The amount of vibration or manipulating resistance would be modulated according to the magnitude of tissue displacement u, or tissue force f, as provided by the Q3D endoscopic system 101C of FIG. 8.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Panescu to have the surgical tool further comprise a haptic feedback mechanism, wherein the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the computer processor, as these prior art references and the instant application are directed to assisting a surgeon during a surgery. One would be motivated to do this to alert the surgeon that a proximity threshold has been crossed, as recognized by Panescu ([0122]).
Alternatively, claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claim 11 above, and further in view of Lightcap et al. (US 11,202,676 B2, previously cited), hereinafter Lightcap.
Regarding claim 18, Buch in view of Chow teaches the system of claim 11 (as shown above). Buch and Chow, alone or in combination, fail to explicitly teach the surgical tool further comprising a haptic feedback mechanism, wherein the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the computer processor.
However, Lightcap teaches a computer-implemented method for controlling a surgical system with a processor receiving a signal indicative of a distance between a surgical tool and a patient’s anatomy (Column 2, line 67-Column 3, line 5) wherein the surgical tool further comprises a haptic feedback mechanism, and the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the processor (Column 8, lines 29-39: “processor 231 may receive the signals indicating the distance between distal end 211 and spinal cord 103, and, based on these signals, may generate and send one or more commands to robotic arm 204 such that a user operating articulating arm 206 or surgical tool 210 of robotic arm 204 experiences haptic feedback based on the distance between distal end 211 and spinal cord 103, as determined by neural monitor 280. In certain embodiments, the user may experience haptic feedback such that robotic arm 204 becomes more difficult to move as distal end 211 moves closer to spinal cord 103.”, Column 4, lines 61-66: “The force system and controller are configured to provide control or guidance to the surgeon during manipulation of the surgical tool. The force system is configured to provide at least some force to the surgical tool via articulated arm 206, and the controller is programmed to generate control signals for controlling the force system.”).
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch and Chow to incorporate the teachings of Lightcap to have the surgical tool further comprise a haptic feedback mechanism, wherein the surgical tool is configured to actuate the haptic feedback mechanism responsive to a signal from the processor, as these prior art references and the instant application are directed to guiding a surgeon during a medical procedure. One would be motivated to do this to prevent the surgeon from undesired interactions with a patient’s anatomy and from improperly performing surgery, as recognized by Lightcap (Column 2, lines 1-7).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATTIYA SAYYADA HUSSAINI whose telephone number is (703)756-5921. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Niketa Patel, can be reached at 571-272-4156. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ATTIYA SAYYADA HUSSAINI/Examiner, Art Unit 3792
/NIKETA PATEL/Supervisory Patent Examiner, Art Unit 3792