Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of the status of the present application as a continuation of Application No. 17/368,747, filed 06 July 2021.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 11,969,218. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the patent fall wholly within the scope of the broader claims of the instant application, such that the instant claims are anticipated by the patented claims, as shown in the claim chart below.
Instant Application, Claim 1:
A method for providing feedback to guide setup of a surgical robotic system, the method comprising the steps of:
receiving an image of a medical procedure site in which at least one first subject comprising a first robotic manipulator, and at least one second subject comprising at least one of a second robotic manipulator, patient table, patient, and bedside staff are located;
receiving procedure-related input comprising a surgical procedure type;
displaying the image in real time on an image display;
using computer vision to recognize at least one of the first subject and the second subject in the image and to determine the relative positions of the first subject and the second subject;
displaying, as an overlay to the displayed image, a graphical indication of a target position of at least one of the first subject and the second subject within the medical procedure site, the target position determined based on the procedure-related input.

U.S. Patent No. 11,969,218, Claim 1:
A method for providing feedback to guide setup of a surgical robotic system, the method comprising the steps of:
receiving an image of a medical procedure site in which at least one first subject comprising a first robotic manipulator, and at least one second subject comprising at least one of a second robotic manipulator, patient table, patient, and bedside staff are located;
receiving, from a user input device, procedure-related input comprising a user designation of a type of surgical procedure to be performed;
displaying the image in real time on an image display;
using computer vision to recognize at least one of the first subject and the second subject in the image and to determine the relative positions of the first subject and the second subject;
displaying, as an overlay to the displayed image, a graphical indication of a target position of at least one of the first subject and the second subject within the medical procedure site, the target position determined based on the procedure-related input, and
displaying, as an overlay to the displayed image, a graphical indication of a trocar target position within the medical procedure site, wherein the trocar target position comprises a recommended position for a trocar to be used to receive a surgical instrument carried by the first robotic manipulator, and is determined based on the procedure-related input, wherein said graphical indication is displayed before a user positions the trocar through patient tissue.

Instant Application, Claim 2:
The method of claim 1, wherein the method includes retrieving preferred relative positions of the first subject and the second subject from a database based on the procedure-related input and determining the target position based on the preferred relative positions.

U.S. Patent No. 11,969,218, Claim 2:
The method of claim 1, wherein the method includes retrieving preferred relative positions of the first subject and the second subject from a database based on the procedure-related input and determining the target position based on the preferred relative positions.

Instant Application, Claim 3:
The method of claim 1, wherein the determining step determines the relative 3D positions of the first subject and the second subject.

U.S. Patent No. 11,969,218, Claim 3:
The method of claim 1, wherein the determining step determines the relative 3D positions of the first subject and the second subject.
System claims 4-6 correspond to method claims 1-3 in both the patent and instant application.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fuerst (US 20200054412).
Regarding claim 1, Fuerst teaches a method for providing feedback to guide setup of a surgical robotic system (Paragraph 10, augmented live video for guiding the arm setup, and stream the augmented live video to a display to visually guide the user through arm setup to the target pose), the method comprising the steps of:
Receiving an image of a medical procedure site in which at least one first subject comprising a first robotic manipulator, and at least one second subject comprising at least one of a second robotic manipulator, patient table, patient, and bedside staff are located (Paragraph 27, a robotic surgical system 100 for assisting in the setup of a surgical robotic arm or arms 110 in a surgical environment using a display 104 can include a camera 140, configured to capture video of a user 106 setting up the robotic arm 110 mounted on a surgical table 114 in a real surgical environment; Paragraph 23, A “real surgical environment” as used herein describes the local physical space of the setup, which can include robotic arms, mounts, surgical tables, lamps, trays, medical personnel, the room and objects in the room);
Receiving procedure-related input comprising a surgical procedure type (Paragraph 31, data structures to correlate different poses of robotic arms to different surgical procedures, tables, and patients);
Displaying the image in real time on an image display (Paragraph 35, display the augmented live video on display);
Using computer vision to recognize at least one of the first subject and the second subject in the image and to determine the relative positions of the first subject and the second subject (Paragraph 33, The AR processor can also include a computer vision module 152, configured to perform scene reconstruction, event detection, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, image restoration, and other known computer vision techniques based on the captured video. It can apply the 3D vision techniques described above to analyze the captured video and identify the surgical robot arms, the surgical table, the patient or dummy, the medical personnel, other objects, and orientations and positions of such, in the surgical environment. Alternatively, or additionally, the processor can identify and/or track objects (for example, the robotic arms and the surgical table) with 2D markers (for example, labels or stickers placed on the objects));
Displaying, as an overlay to the displayed image, a graphical indication of a target position of at least one of the first subject and the second subject within the medical procedure site, the target position determined based on the procedure-related input (Paragraph 53, Referring now to FIG. 7 (showing features that can be additional to those of FIG. 6), the AR processor can also be configured to determine 710 whether a pose of a real surgical robotic arm is at the target pose of the virtual surgical robotic arm. The processor can trigger 720 an indication when the pose of the real surgical robotic arm is at, or overlaid upon, the virtual robotic arm).
System claim 4 corresponds to method claim 1. Therefore, claim 4 is rejected for the same reasons as used above.
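For illustration only, a minimal Python/OpenCV sketch of a recognition-and-overlay loop of the general kind recited in claim 1 and mapped above follows. It is not drawn from Fuerst or from the instant claims: the detector stub, the procedure-to-target lookup, and all names, coordinates, and procedure types are hypothetical placeholders.

```python
import cv2

def detect_subjects(frame):
    """Hypothetical stand-in for a computer-vision detector (e.g., a trained
    object-detection model): returns image-plane centroids for the first
    subject (robotic manipulator) and a second subject (patient table)."""
    return {"manipulator": (300, 220), "patient_table": (420, 260)}

def target_position_for(procedure_type):
    """Hypothetical lookup of a target position from the procedure-related
    input; a real system would derive this from stored preferences."""
    return {"cholecystectomy": (320, 240)}.get(procedure_type, (320, 240))

def guidance_loop(procedure_type, camera_index=0):
    cap = cv2.VideoCapture(camera_index)          # receive images of the site
    target = target_position_for(procedure_type)  # from procedure-related input
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        subjects = detect_subjects(frame)         # computer-vision recognition
        (x1, y1), (x2, y2) = list(subjects.values())[:2]
        rel = (x2 - x1, y2 - y1)                  # relative subject positions
        cv2.circle(frame, target, 20, (0, 255, 0), 2)  # overlay target marker
        cv2.putText(frame, f"rel offset: {rel}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("setup guidance", frame)       # real-time display
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```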
Regarding claim 2, Fuerst teaches the method of claim 1, wherein the method includes retrieving preferred relative positions of the first subject and the second subject from a database based on the procedure-related input and determining the target position based on the preferred relative positions (Paragraph 24, Referring now to FIG. 7 (showing features that can be additional to those of FIG. 6), the AR processor can also be configured to determine 710 whether a pose of a real surgical robotic arm is at the target pose of the virtual surgical robotic arm. The processor can trigger 720 an indication when the pose of the real surgical robotic arm is at, or overlaid upon, the virtual robotic arm).
System claim 5 corresponds to method claim 2. Therefore, claim 5 is rejected for the same reasons as used above.
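For illustration only, the database retrieval recited in claim 2 could be sketched as below: preferred relative positions are keyed to a procedure type, and a target position is derived from them. The schema, table name, and stored values are hypothetical, not taken from Fuerst or the claims.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE preferred_positions (
                    procedure_type TEXT PRIMARY KEY,
                    dx REAL, dy REAL, dz REAL)""")  # hypothetical schema: offset
                                                    # of second subject from first
conn.execute(
    "INSERT INTO preferred_positions VALUES ('cholecystectomy', 0.6, -0.2, 0.0)")

def target_position(procedure_type, first_subject_pos):
    """Retrieve the preferred relative position for the procedure-related
    input and derive the second subject's target position from it."""
    row = conn.execute(
        "SELECT dx, dy, dz FROM preferred_positions WHERE procedure_type = ?",
        (procedure_type,)).fetchone()
    if row is None:
        return None
    return tuple(p + d for p, d in zip(first_subject_pos, row))

print(target_position("cholecystectomy", (1.0, 0.5, 0.0)))  # (1.6, 0.3, 0.0)
```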
Regarding claim 3, Fuerst teaches the method of claim 1, wherein the determining step determines the relative 3D positions of the first subject and the second subject (Paragraph 25, The “virtual robotic arm” can be a complex 3D model of the real robotic arm. Alternatively, or additionally, a “virtual robotic arm” can include visual aids such as arrows, tool tips, or other representation relating to providing pose information about a robotic arm such as a geometrically simplified version of the real robotic arm).
System claim 6 corresponds to method claim 3. Therefore, claim 6 is rejected for the same reasons as used above.
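For illustration only, one conventional way to obtain the relative 3D positions recited in claim 3 is to back-project detected image points through a depth estimate using the camera intrinsics. The intrinsics, pixel coordinates, and depths below are hypothetical placeholders, not values from Fuerst.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth in meters to a 3D camera-frame point
    using a pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# hypothetical camera intrinsics and detections
fx = fy = 600.0
cx, cy = 320.0, 240.0
first_subject = backproject(300, 220, 1.8, fx, fy, cx, cy)   # robotic manipulator
second_subject = backproject(420, 260, 2.1, fx, fy, cx, cy)  # patient table

relative_3d = second_subject - first_subject  # relative 3D position vector
print("relative 3D offset (m):", relative_3d)
```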
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Penny (US 20200188044) in view of Lang (US 20220079675).
Regarding claim 1, Penny teaches a method for providing feedback to guide setup of a surgical robotic system, the method comprising the steps of:
Displaying the image in real time on an image display (Paragraph 1, The image captured by the camera is shown on a display at the surgeon console. The console may be located patient-side, within the sterile field, or outside of the sterile field);
Using computer vision to recognize at least one of the first subject and the second subject in the image and to determine the relative positions of the first subject and the second subject (Paragraph 14, the computer vision algorithm to analyze endoscopic image data, 3D endoscopic image data or structured light system image data to detect shape characteristics of the stomach as shaped by the bougie. The algorithm is used to determine the location of the bougie based on topographical variations in the imaged region or, if the bougie is illuminated, light variations. The system can generate an overlay on the image display identifying the location of the bougie or a margin of predetermined distance from the detected longitudinal edge of the bougie. The surgeon can then guide the stapler to a target cut/staple pathway based on the region defined by the bougie.);
Displaying, as an overlay to the displayed image, a graphical indication of a target position of at least one of the first subject and the second subject within the medical procedure site, the target position determined based on the procedure-related input (Paragraph 14, The system can generate an overlay on the image display identifying the location of the bougie or a margin of predetermined distance from the detected longitudinal edge of the bougie. The surgeon can then guide the stapler to a target cut/staple pathway based on the region defined by the bougie.).
While Penny fails to disclose the following, Lang teaches:
Receiving an image of a medical procedure site in which at least one first subject comprising a first robotic manipulator, and at least one second subject comprising at least one of a second robotic manipulator, patient table, patient, and bedside staff are located (Paragraph 83, detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof);
Receiving procedure-related input comprising a surgical procedure type (Paragraph 421, Barcodes and QR codes are machine readable optical labels that can include information, for example, about the patient including patient identifiers, patient condition, type of surgery).
Lang and Penny are both considered to be analogous to the claimed invention because they are in the same field of surgical imaging. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Penny to incorporate the teachings of Lang and receive an input image of a medical procedure site comprising a robotic manipulator and at least one second subject, and receive a surgical procedure type. Doing so would provide more context when imaging the surgical procedure and would allow the displayed output to be tailored to the given procedure type.
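For illustration only, the margin overlay quoted from Penny above could be sketched as follows: a detected subject mask is dilated by a predetermined margin, and both the detected boundary and the margin contour are drawn over the frame. The mask, margin value, and function names are hypothetical; Penny does not disclose an implementation at this level of detail.

```python
import cv2
import numpy as np

def margin_overlay(frame, subject_mask, margin_px=15):
    """Draw the detected subject's boundary plus a contour offset outward by
    margin_px pixels (approximated with a morphological dilation)."""
    kernel = np.ones((2 * margin_px + 1, 2 * margin_px + 1), np.uint8)
    dilated = cv2.dilate(subject_mask, kernel)      # grow the mask by the margin
    for mask, color in ((subject_mask, (0, 0, 255)), (dilated, (0, 255, 255))):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(frame, contours, -1, color, 2)
    return frame
```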
System claim 4 corresponds to method claim 1. Therefore, claim 4 is rejected for the same reasons as used above.
Regarding claim 2, the combination of Penny and Lang teaches the method of claim 1, wherein the method includes retrieving preferred relative positions of the first subject and the second subject from a database (Lang, Paragraph 425, bar codes and QR codes included in or part of one or more optical markers can be predefined and, optionally, stored in a database accessible by an image and/or video capture system) based on the procedure-related input and determining the target position based on the preferred relative positions (Penny, Paragraph 14, identifying the location of the bougie or a margin of predetermined distance from the detected longitudinal edge of the bougie).
Lang and Penny are both considered to be analogous to the claimed invention because they are in the same field of surgical imaging. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Penny to incorporate the teachings of Lang and use a database to store the preferred relative positions of the first subject and the second subject. Doing so would allow the information needed to perform various medical procedures to be stored and accessed efficiently.
System claim 5 corresponds to method claim 2. Therefore, claim 5 is rejected for the same reasons as used above.
Regarding claim 3, the combination of Penny and Lang teaches the method of claim 1, wherein the determining step determines the relative 3D positions of the first subject and the second subject (Penny, Paragraph 14, A controller executes the computer vision algorithm to analyze endoscopic image data, 3D endoscopic image data or structured light system image data to detect shape characteristics of the stomach as shaped by the bougie).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SNIGDHA SINHA whose telephone number is (571)272-6618. The examiner can normally be reached Mon-Fri, 12pm-8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SNIGDHA SINHA/Examiner, Art Unit 2619
/JASON CHAN/Supervisory Patent Examiner, Art Unit 2619