DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-12, and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2019/0090969 A1 to Jarc et al. (hereinafter “Jarc”; previously published as WO 2017/083768 A1; both of record on the IDS).
Regarding claims 1 and 11, Jarc discloses a system/method for robot-assisted surgery (e.g. para 0043 “the system…for performing surgical procedure”; Figs. 1, 5A, 20), comprising: an image sensor (e.g. para 0046 and 0049 “image capture device”); a display (e.g. Fig. 4, para 0048 “display”); and a controller coupled to the image sensor and the display (e.g. Fig. 4, para 0043 “processor”), wherein the controller includes logic (e.g. “algorithm”) that when executed by the controller causes the system to perform operations (e.g. para 0076-0078 “algorithm and pattern matching”; note that the claim does not recite any details regarding the logic or “when” it is executed by the controller in order to perform certain operations; the examiner understands the processor’s operation of pattern matching and analysis using the algorithm to perform the same functions), including: acquiring first images of a surgical procedure with the image sensor (e.g. para 0047 “capture images of a surgical site and output the captured images to a computer processor”); analyzing the first images with the controller to identify a surgical step in the surgical procedure (e.g. para 0048 “image processing”; para 0110 “a parallel-running simulation may be autonomously analyzed to determine the current surgical stage using segmenter 1004”; and para 0137 “Technique advisor 1614 may recommend a particular surgical technique to be used in a given scenario, in response to the current or upcoming surgical stage”); and displaying second images on the display in response to identifying the surgical step (e.g. para 0137 “Technique advisor 1614 may recommend a particular surgical technique to be used in a given scenario, in response to the current or upcoming surgical stage”), wherein the second images include at least one of a diagram of human anatomy, a preoperative image, an intraoperative image, or an annotated image of one of the first images (e.g. para 0140 “loading of pre-op images for the current/upcoming step from current patient or related patients”).
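For context, the limitations mapped above reduce to a capture-classify-display control loop. The sketch below is illustrative only; identify_step and lookup_reference_images are hypothetical stand-ins, not the implementation of Jarc or of the instant claims:

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    pixels: bytes  # raw data from the image sensor (the claimed "first images")

def identify_step(frames: List[Frame]) -> str:
    """Hypothetical classifier mapping recent frames to a surgical-step label."""
    return "dissection"  # placeholder decision

def lookup_reference_images(step: str) -> List[str]:
    """Hypothetical retrieval of "second images" (anatomy diagram, pre-op scan,
    annotated frame) associated with the identified step."""
    return [step + "_anatomy_diagram.png", step + "_preop_mri.png"]

def control_loop(frames: List[Frame]) -> List[str]:
    step = identify_step(frames)          # analyze the first images
    return lookup_reference_images(step)  # second images to render on the display

print(control_loop([Frame(pixels=b"")]))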
Regarding claim 2, Jarc discloses the system of claim 1, further comprising: a plurality of arms coupled to the controller and configured to hold surgical instruments (e.g. para 0049 “surgical procedures using one or more mechanical support arms 510”); and a tactile user interface coupled to the controller (e.g. Figs. 1-5A, para 0065 “graphic user interface overlaid onto the displayed video”), wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to receiving user input from the tactile user interface, manipulating the plurality of arms (e.g. para 0006 “The TSS generally includes a surgeon input interface that accepts surgical control input for effecting an electromechanical surgical system to carry out a surgical procedure”).
Regarding claim 3, Jarc discloses the system of claim 1, further comprising: a microphone coupled to the controller to send voice commands from a user to the controller; and a speaker coupled to the controller to output audio (e.g. para 0065 “the graphic user interface can include a QWERTY keyboard, a pointing device such as a mouse and an interactive screen display, a touch-screen display, or other means for data or text entry or voice annotation/or speech to text conversion via a microphone and processor.”).
Regarding claim 5, Jarc discloses the system of claim 1, further comprising annotating the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed (e.g. para 0065-0066 “highlights or annotates certain patient anatomy shown in the displayed video using an input device of surgeon's console 52”).
Regarding claim 6, Jarc discloses the system of claim 1, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray (e.g. para 0108).
Regarding claim 7, Jarc discloses the system of claim 1, wherein the logic includes a machine learning algorithm trained to recognize surgical steps from the first images, and wherein identifying the surgical step in the surgical procedure from the first images includes using the machine learning algorithm (e.g. para 0094).
Regarding claim 8, Jarc discloses the system of claim 7, wherein the machine learning algorithm includes at least one of a convolutional neural network (CNN) or long short-term memory (LSTM) (e.g. para 0094 “convolutional neural networks”).
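For reference, a CNN feeding an LSTM as recited in this claim can be sketched generically as follows (PyTorch; the layer sizes, input shape, and number of step classes are assumptions for illustration, not taken from Jarc para 0094 or from the claims):

import torch
import torch.nn as nn

class StepRecognizer(nn.Module):
    """Generic CNN + LSTM frame-sequence classifier (illustrative only)."""
    def __init__(self, num_steps: int = 10):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.lstm = nn.LSTM(input_size=16 * 8 * 8, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, num_steps)         # surgical-step logits

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                     # temporal context across frames
        return self.head(out[:, -1])                  # classify the latest frame

logits = StepRecognizer()(torch.randn(1, 4, 3, 64, 64))  # a 4-frame clip
print(logits.shape)  # torch.Size([1, 10])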
Regarding claim 9, Jarc discloses the system of claim 1, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: estimating a remaining duration of the surgical procedure, in response to identifying the surgical step (e.g. para 0116).
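For illustration, estimating a remaining duration in response to the identified step can be as simple as a lookup over expected per-step times; the step names and durations below are hypothetical:

# Hypothetical per-step average durations (minutes), in procedure order.
STEP_DURATIONS = [("access", 10), ("dissection", 35), ("resection", 25), ("closure", 15)]

def remaining_minutes(current_step: str, elapsed_in_step: float = 0.0) -> float:
    """Sum expected time for the current step (less elapsed time) and all later steps."""
    names = [name for name, _ in STEP_DURATIONS]
    idx = names.index(current_step)
    remaining = sum(mins for _, mins in STEP_DURATIONS[idx:]) - elapsed_in_step
    return max(remaining, 0.0)

print(remaining_minutes("dissection", elapsed_in_step=5))  # 70.0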
Regarding claim 10, Jarc discloses the system of claim 1, wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller (e.g. Fig. 5A, para 0042 “endoscope that includes a camera to view a surgical site within a patient's body”).
Regarding claim 12, Jarc discloses the method of claim 11, further comprising estimating a remaining duration of the surgical procedure, in response to identifying the surgical step (e.g. para 0116).
Regarding claim 15, Jarc discloses the method of claim 11, further comprising using the controller to annotate the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed (e.g. para 0065-0066 “highlights or annotates certain patient anatomy shown in the displayed video using an input device of surgeon's console 52”).
Regarding claim 16, Jarc discloses the method of claim 11, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray (e.g. para 0108).
Regarding claim 17, Jarc discloses the method of claim 11, wherein the controller includes a machine learning algorithm trained to recognize surgical steps from the first images, and wherein identifying a surgical step in the surgical procedure from the first images includes using the machine learning algorithm (e.g. para 0094).
Regarding claim 18, Jarc discloses the method of claim 17, wherein the machine learning algorithm includes at least one of a convolutional neural network (CNN) or long short-term memory (LSTM) (e.g. para 0094 “convolutional neural networks”).
Regarding claim 19, Jarc discloses the method of claim 11, wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller (e.g. Fig. 5A, para 0042 “endoscope that includes a camera to view a surgical site within a patient's body”).
Regarding claim 20, Jarc discloses the method of claim 11, further comprising: capturing voice commands with a microphone coupled to the controller; and in response to capturing the voice commands, displaying the preoperative image (e.g. para 0065 “the graphic user interface can include a QWERTY keyboard, a pointing device such as a mouse and an interactive screen display, a touch-screen display, or other means for data or text entry or voice annotation/or speech to text conversion via a microphone and processor.”).
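For illustration, the claimed voice-command path amounts to dispatching recognized speech to a display action; transcribe below is a hypothetical stand-in for an actual speech-to-text engine:

from typing import Optional

def transcribe(audio: bytes) -> str:
    """Hypothetical stand-in; a real system would invoke a speech-to-text engine."""
    return "show preoperative image"

def handle_voice_command(audio: bytes) -> Optional[str]:
    command = transcribe(audio).lower()
    if "preoperative" in command or "pre-op" in command:
        return "preop_mri.png"  # image the display would render in response
    return None                 # unrecognized commands are ignored

print(handle_voice_command(b"\x00"))  # -> preop_mri.png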
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Jarc in view of U.S. Patent No. 9,788,907 B1 issued to Alvi et al. (hereinafter “Alvi”; of record on the IDS).
Regarding claim 4, Jarc discloses the system of claim 3, but fails to disclose wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to identifying the surgical step, outputting audio commands to a user of the system from the speaker.
Alvi teaches wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to identifying the surgical step, outputting audio commands to a user of the system from the speaker (e.g. Col. 29, lines 20-26 “the electronic data may include an audio signal (e.g., of spoken words) that is to be presented during a visual display of the part of the video data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Jarc with the teachings of Alvi to include audio commands to a user, yielding the predictable result of allowing the user to follow instructions aurally, without having to read text or detect marks on the screen.
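For illustration, audio output keyed to the identified step is a simple mapping from step label to a spoken instruction; speak below is a hypothetical stand-in for a text-to-speech or speaker call:

# Hypothetical step-to-instruction mapping; speak() stands in for text-to-speech.
INSTRUCTIONS = {
    "dissection": "Begin dissection along the marked plane.",
    "closure": "Prepare sutures for closure.",
}

def speak(text: str) -> None:
    print("[audio] " + text)  # stand-in for a real speaker/TTS call

def on_step_identified(step: str) -> None:
    instruction = INSTRUCTIONS.get(step)
    if instruction is not None:
        speak(instruction)  # audio command in response to the identified step

on_step_identified("closure")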
Regarding claim 13, Jarc discloses the method of claim 11, but fails to disclose further comprising outputting audio commands from a speaker coupled to the controller, in response to determining the surgical step.
Alvi teaches outputting audio commands from a speaker coupled to the controller, in response to determining the surgical step (e.g. Col. 29, lines 20-26 “the electronic data may include an audio signal (e.g., of spoken words) that is to be presented during a visual display of the part of the video data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Jarc with the teachings of Alvi to include audio commands to a user, yielding the predictable result of allowing the user to follow instructions aurally, without having to read text or detect marks on the screen.
Regarding claim 14, Jarc discloses the method of claim 13, and further discloses outputting the duration of the surgical procedure (e.g. para 0116), but fails to disclose outputting the duration of the surgical procedure from the speaker.
Alvi teaches outputting the duration of the surgical procedure from the speaker (e.g. Col. 29, lines 20-26 “the electronic data may include an audio signal (e.g., of spoken words) that is to be presented during a visual display of the part of the video data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Jarc with the teachings of Alvi to output this information audibly, yielding the predictable result of allowing the user to follow the procedure without having to read text or detect marks on the screen.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101. By contrast, a timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome a rejection based on nonstatutory double patenting, such as the rejection set forth below.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 8, and 12-17 of U.S. Patent No. 12,102,397. Although the claims at issue are not identical, they are not patentably distinct from each other. In the following comparison, each instant claim is mapped to the corresponding claim language of the ’397 patent.
Instant claim 1: A system for robot-assisted surgery, comprising: an image sensor; a display; and a controller coupled to the image sensor and the display, wherein the controller includes logic that when executed by the controller causes the system to perform operations, including: acquiring first images of a surgical procedure with the image sensor; analyzing the first images with the controller to identify a surgical step in the surgical procedure; and displaying second images on the display in response to identifying the surgical step, wherein the second images include at least one of a diagram of human anatomy, a preoperative image, an intraoperative image, or an annotated image of one of the first images.
Patented claim 1: A system for surgery, comprising: a controller including logic that when executed by the controller causes the system to perform operations, including: receiving first images of a surgical procedure; analyzing the first images with the controller [] and selecting second images related to the surgical step for display in response to the identifying the surgical step, wherein the second images include at least one of a diagram of human anatomy relevant to the surgical step, a preoperative image relevant to the surgical step, an intraoperative image relevant to the surgical step, or an annotated image of one of the first images.

Instant claim 2: The system of claim 1, further comprising: a plurality of arms coupled to the controller and configured to hold surgical instruments; and a tactile user interface coupled to the controller, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to receiving user input from the tactile user interface, manipulating the plurality of arms.
Patented claim 2: The system of claim 1, further comprising: a plurality of arms coupled to the controller and configured to hold surgical instruments; and a tactile user interface coupled to the controller, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to receiving user input from the tactile user interface, manipulating the plurality of arms.

Instant claim 3: The system of claim 1, further comprising: a microphone coupled to the controller to send voice commands from a user to the controller; and a speaker coupled to the controller to output audio.
Patented claim 3: The system of claim 1, further comprising: a microphone coupled to the controller to send voice commands from a user to the controller; and a speaker coupled to the controller to output audio.

Instant claim 4: The system of claim 3, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to identifying the surgical step, outputting audio commands to a user of the system from the speaker.
Patented claim 4: The system of claim 3, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: in response to identifying the surgical step, outputting audio commands to the user of the system from the speaker.

Instant claim 5: The system of claim 1, further comprising annotating the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed.
Patented claim 16: The method of claim 11, further comprising annotating the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed.

Instant claim 6: The system of claim 1, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray.
Patented claim 17: The method of claim 11, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray.

Instant claim 7: The system of claim 1, wherein the logic includes a machine learning algorithm trained to recognize surgical steps from the first images, and wherein identifying the surgical step in the surgical procedure from the first images includes using the machine learning algorithm.
Patented claim 12: The method of claim 11, further comprising estimating a remaining duration of the surgical procedure using the machine learning architecture, in response to the identifying the surgical step.

Instant claim 8: The system of claim 7, wherein the machine learning algorithm includes at least one of a convolutional neural network (CNN) or long short-term memory (LSTM).
Patented claim 1, in part: analyzing the first images with the controller executing a machine learning model to identify one or more surgical steps in the surgical procedure, wherein an architecture of the machine learning model includes a convolutional neural network (CNN) directed to identifying the one or more surgical steps.

Instant claim 9: The system of claim 1, wherein the controller further includes logic that when executed by the controller causes the system to perform operations, including: estimating a remaining duration of the surgical procedure, in response to identifying the surgical step.
Patented claim 13: The method of claim 12, further comprising: automatically informing, by the controller, an operating room scheduler when an estimation of the remaining duration by the machine learning architecture changes.

Instant claim 10: The system of claim 1, wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller.
Patented claim 8: The system of claim 1, further comprising an image sensor to capture the first images, and wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller.

Instant claim 11: A method for operating a surgical robot, comprising: capturing first images of a surgical procedure with an image sensor; identifying, in the first images, a surgical step in the surgical procedure using a controller, wherein the controller is coupled to the image sensor to receive the first images; and in response to determining the surgical step, displaying second images on a display coupled to the controller, wherein the second images include at least one of a diagram of human anatomy, a preoperative image, an intraoperative image, or an annotated image of one of the first images.
Patented claim 1: A system for surgery, comprising: a controller including logic that when executed by the controller causes the system to perform operations, including: receiving first images of a surgical procedure; analyzing the first images with the controller [] and selecting second images related to the surgical step for display in response to the identifying the surgical step, wherein the second images include at least one of a diagram of human anatomy relevant to the surgical step, a preoperative image relevant to the surgical step, an intraoperative image relevant to the surgical step, or an annotated image of one of the first images.

Instant claim 12: The method of claim 11, further comprising estimating a remaining duration of the surgical procedure, in response to identifying the surgical step.
Patented claim 13: The method of claim 12, further comprising: automatically informing, by the controller, an operating room scheduler when an estimation of the remaining duration by the machine learning architecture changes.

Instant claim 13: The method of claim 11, further comprising outputting audio commands from a speaker coupled to the controller, in response to determining the surgical step.
Patented claim 14: The method of claim 11, further comprising outputting audio commands from a speaker coupled to the controller, in response to determining the surgical step.

Instant claim 14: The method of claim 13, further comprising outputting the duration of the surgical procedure from the speaker.
Patented claim 15: The method of claim 14, further comprising outputting a duration of the surgical procedure from the speaker.

Instant claim 15: The method of claim 11, further comprising using the controller to annotate the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed.
Patented claim 16: The method of claim 11, further comprising annotating the one of the first images to form the annotated image by at least one of highlighting a piece of anatomy, highlighting the location of a surgical step, or highlighting where a surgical instrument should be placed.

Instant claim 16: The method of claim 11, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray.
Patented claim 17: The method of claim 11, wherein the preoperative image includes a magnetic resonance image, computerized tomography scan, or an X-ray.

Instant claim 17: The method of claim 11, wherein the controller includes a machine learning algorithm trained to recognize surgical steps from the first images, and wherein identifying a surgical step in the surgical procedure from the first images includes using the machine learning algorithm.
Patented claims 1 and 11, in part: executing a machine learning model.

Instant claim 18: The method of claim 17, wherein the machine learning algorithm includes at least one of a convolutional neural network (CNN) or long short-term memory (LSTM).
Patented claim 1 or 11, in part: analyzing the first images with the controller executing a machine learning model to identify one or more surgical steps in the surgical procedure, wherein an architecture of the machine learning model includes a convolutional neural network (CNN) directed to identifying the one or more surgical steps.

Instant claim 19: The method of claim 11, wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller.
Patented claim 8: The system of claim 1, further comprising an image sensor to capture the first images, and wherein the image sensor is disposed in an endoscope and the endoscope is coupled to the controller.

Instant claim 20: The method of claim 11, further comprising: capturing voice commands with a microphone coupled to the controller; and in response to capturing the voice commands, displaying the preoperative image.
Patented claim 3: The system of claim 1, further comprising: a microphone coupled to the controller to send voice commands from a user to the controller; and a speaker coupled to the controller to output audio.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANA SAHAND whose telephone number is (571)272-6842. The examiner can normally be reached on M-Th 8:30 am -5:30 pm; F 9 am-3 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer McDonald can be reached on (571) 270-3061. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANA SAHAND/Examiner, Art Unit 3796