Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shelton IV et al. (US 20220104910 A1).
Re claim 1, Shelton discloses a system for image navigation using on-demand deep learning based segmentation, the system comprising:
an exoscope configured to capture image data from a field of view (Shelton: paragraph [0007], Scopes include, but are not limited to, arthroscopes, angioscopes, bronchoscopes, choledochoscopes, colonoscopes, cystoscopes, duodenoscopes, enteroscopes, esophagogastro-duodenoscopes (gastroscopes), endoscopes, laryngoscopes, nasopharyngo-neproscopes, sigmoidoscopes, thoracoscopes, ureteroscopes, and exoscopes);
an image segmentation module (Shelton: paragraphs [0279]-[0286]);
an intent recognition module to capture a user's intent (Shelton: paragraph [0472], the surgeon may instruct the personal interface 3406 to adjust the livestream or select a destination for the livestream using one or more of a gesture, a hand motion, a voice command, a head motion, and the like);
one or more robotic arms configured to move the exoscope (Shelton: paragraph [0508]);
a processor (Shelton: paragraphs [0008]-[0029], The surgical hub and/or medical instrument may comprise a memory and a processor); and
memory storing computer-executable instructions (Shelton: paragraphs [0008]-[0029], The surgical hub and/or medical instrument may comprise a memory and a processor)
that, when executed by the processor, cause the system to:
receive, via the exoscope, image data relating to an image or a video stream of a surgical site (Shelton: paragraphs [0279]-[0286]);
generate, via the image segmentation module, an augmented image comprising a plurality of labeled regions overlaying the surgical site (Shelton: paragraph [0274], overlaying or augmenting images and/or text from multiple image/text sources to present composite images on one or more displays);
receive, via the intent recognition module, a voice command selecting a labeled region of the plurality of labeled regions (Shelton: paragraph [0472]); and
cause, via the one or more robotic arms, a movement of the exoscope so that the selected labeled region is within the field of view of the exoscope after the movement (Shelton: paragraph [0508], the control scheme allows for a clinician to reposition a robotic arm).
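For illustration only, the following minimal Python sketch shows the kind of voice-driven repositioning loop recited in claim 1. Every name in it (LabeledRegion, select_region, exoscope_offset, the example anatomy labels) is hypothetical and is not drawn from Shelton; the printed offset merely stands in for whatever command a robotic-arm controller would consume.

    # Hypothetical sketch of claim 1's control loop; not taken from Shelton.
    from dataclasses import dataclass

    @dataclass
    class LabeledRegion:
        label: str
        center: tuple        # (x, y) pixel coordinates in the augmented image

    def select_region(voice_command, regions):
        # Match a spoken label against the labeled regions of the augmented image.
        for region in regions:
            if region.label.lower() in voice_command.lower():
                return region
        raise ValueError("no labeled region matches the command")

    def exoscope_offset(region, field_center=(320, 240)):
        # Offset a robotic arm would apply so the selected region lands at
        # the center of the exoscope's field of view.
        return (region.center[0] - field_center[0],
                region.center[1] - field_center[1])

    regions = [LabeledRegion("liver", (150, 200)),
               LabeledRegion("gallbladder", (480, 300))]
    target = select_region("move to the gallbladder", regions)
    print(exoscope_offset(target))   # (160, 60)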
Re claim 2, Shelton discloses that the plurality of labeled regions comprises one or both of:
a plurality of color-coded regions; or
a plurality of textually labeled regions (Shelton: paragraph [0274], overlaying or augmenting images and/or text from multiple image/text sources to present composite images on one or more displays).
Re claim 3, Shelton discloses that the one or more robotic arms include at least one of a pneumatic arm and a hydraulic arm (Shelton: paragraph [0508], the control scheme allows for a clinician to reposition a robotic arm).
Re claim 4, Shelton discloses an instrument control module configured to receive a command signal caused by a tapping gesture on a surgical instrument while the surgical instrument is pointed towards a labeled region of the plurality of labeled regions (Shelton: Fig. 23B; paragraphs [0060] and [0337]).
Re claim 5, Shelton discloses that the instructions, when executed by the processor, further cause the system to:
receive, via the instrument control module, an instrument command selecting a second labeled region of the plurality of labeled regions (Shelton: paragraph [0472]); and
cause, via the one or more robotic arms, a second movement of the exoscope so that the second selected labeled region is within the field of view of the exoscope (Shelton: paragraph [0508], the control scheme allows for a clinician to reposition a robotic arm).
Re claim 6, Shelton discloses that the instructions, when executed by the processor, cause the system to generate the augmented image by: segmenting, using a deep learning model, the image of the surgical site into a plurality of regions (Shelton: paragraph [0237], Using pattern recognition or machine learning techniques, for example, the situational awareness system can be trained to recognize the positioning of the medical imaging device according to the visualization of the patient's anatomy).
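As a hedged sketch of the deep-learning segmentation recited in claim 6 (and the per-pixel labeling recited in claim 9 below), the following assumes a trained model returning per-pixel class scores of shape (H, W, C); the class colors and the random stand-in model are placeholders, not Shelton's disclosure.

    # Assumed interface: model(image) -> (H, W, C) per-pixel class scores.
    import numpy as np

    CLASS_COLORS = {1: (255, 0, 0),   # hypothetical "vessel" class
                    2: (0, 255, 0)}   # hypothetical "nerve" class

    def segment_and_overlay(image, model):
        logits = model(image)                    # per-pixel class scores
        class_map = np.argmax(logits, axis=-1)   # pixel -> anatomical class index
        overlay = image.astype(np.float64).copy()
        for cls, color in CLASS_COLORS.items():
            mask = class_map == cls
            overlay[mask] = 0.5 * overlay[mask] + 0.5 * np.array(color)  # blend label color
        return overlay.astype(np.uint8)          # augmented image with labeled regions

    rng = np.random.default_rng(0)
    fake_model = lambda img: rng.random((img.shape[0], img.shape[1], 3))  # stand-in net
    frame = rng.integers(0, 256, (240, 320, 3))
    print(segment_and_overlay(frame, fake_model).shape)   # (240, 320, 3)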
Re claim 7, Shelton discloses a display configured to output the augmented image (Shelton: paragraph [0274]).
Re claim 8, Shelton discloses that the instructions, when executed by the processor, further cause the system to: train the deep learning model using training data comprising a plurality of reference image data having a plurality of recognized regions associated with the reference image data (Shelton: paragraph [0237], Using pattern recognition or machine learning techniques, for example, the situational awareness system can be trained to recognize the positioning of the medical imaging device according to the visualization of the patient's anatomy).
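A minimal training-loop sketch for claim 8's limitation (training on reference image data associated with recognized regions) might look like the following; the toy network, tensor shapes, class count, and random placeholder data are all assumptions, and PyTorch is used only as one plausible framework.

    import torch
    import torch.nn as nn

    num_classes = 3
    model = nn.Sequential(                        # toy stand-in for a segmentation net
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, num_classes, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder "reference image data" and associated "recognized regions".
    images = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, num_classes, (8, 64, 64))

    for epoch in range(5):
        optimizer.zero_grad()
        logits = model(images)                    # (N, num_classes, H, W)
        loss = loss_fn(logits, labels)            # per-pixel cross-entropy
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss={loss.item():.4f}")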
Re claim 9, Shelton discloses that the instructions, when executed by the processor, cause the system to: employ deep learning methods to associate each pixel of the image with a corresponding anatomical structure (Shelton: paragraph [0237], Using pattern recognition or machine learning techniques, for example, the situational awareness system can be trained to recognize the positioning of the medical imaging device according to the visualization of the patient's anatomy).
Re claim 10, Shelton discloses that the instructions, when executed by the processor, cause the system to segment, using the deep learning model, the image of the surgical site into the plurality of regions by at least one of:
clustering regions of the image of the surgical site based at least upon threshold intensity values for pixels;
using seed points of the image for growing regions based on similarity criteria; and
applying edge detection, watershed segmentation, or active contour detection (Shelton: paragraph [0765], In an aspect, the physical characteristic may be determined by processing the captured images to detect the edges of the objects in the images and comparing the detected images to a template of the body part being evaluated).
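Two of the classical techniques recited in claim 10, intensity-threshold clustering and seeded region growing, can be sketched in a few lines of NumPy on a synthetic image; this is illustrative only and none of it is taken from Shelton.

    import numpy as np
    from collections import deque

    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0               # a bright square "region" on a dark field

    # (1) Threshold clustering: group pixels by an intensity cutoff.
    threshold_mask = img > 0.5

    # (2) Seeded region growing: flood out from a seed point while
    #     4-connected neighbors satisfy a similarity criterion.
    def region_grow(image, seed, tol=0.2):
        grown = np.zeros(image.shape, dtype=bool)
        seed_val = image[seed]
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            if grown[y, x]:
                continue
            grown[y, x] = True
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and not grown[ny, nx]
                        and abs(image[ny, nx] - seed_val) <= tol):
                    queue.append((ny, nx))
        return grown

    grown_mask = region_grow(img, seed=(30, 30))
    print(threshold_mask.sum(), grown_mask.sum())   # both recover the 400-pixel square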
Claim 11 recites the method corresponding to the system of claim 1, so arguments analogous to those presented for claim 1 apply. Accordingly, claim 11 has been analyzed and rejected with respect to claim 1 above.
Re claim 12, Shelton discloses that the user input is one or both of:
a voice command selecting a labeled region of the plurality of labeled regions (Shelton: paragraphs [0023]-[0026], voice command; paragraph [0299], voice activation); or
a tapping gesture on a surgical instrument while the surgical instrument is pointed towards a labeled region of the plurality of labeled regions (Shelton: Fig. 23B; paragraphs [0060] and [0337]).
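For illustration, a dispatcher over the two input modalities recited in claim 12 might look like the following; the event types and field names are hypothetical and do not appear in Shelton.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class VoiceCommand:
        transcript: str          # recognized speech

    @dataclass
    class TapGesture:
        pointed_region: str      # region the instrument is aimed at when tapped

    def resolve_selection(event: Union[VoiceCommand, TapGesture],
                          labels: List[str]) -> str:
        if isinstance(event, VoiceCommand):
            for label in labels:
                if label in event.transcript.lower():
                    return label
            raise ValueError("no labeled region named in the command")
        return event.pointed_region          # tap selects whatever is pointed at

    labels = ["liver", "gallbladder"]
    print(resolve_selection(VoiceCommand("zoom to the liver"), labels))
    print(resolve_selection(TapGesture(pointed_region="gallbladder"), labels))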
Re claim 13, Shelton discloses, prior to generating the augmented image: receiving a first user input to initiate segmentation, wherein the augmented image is generated responsive to the first user input (Shelton: paragraph [0472]).
Claim 14 has been analyzed and rejected with respect to claim 6 above.
Claim 15 has been analyzed and rejected with respect to claim 7 above.
Claim 16 has been analyzed and rejected with respect to claim 8 above.
Claim 17 has been analyzed and rejected with respect to claim 9 above.
Claim 18 has been analyzed and rejected with respect to claim 10 above.
Claim 19 has been analyzed and rejected with respect to claim 2 above.
Claim 20 has been analyzed and rejected with respect to claim 3 above.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER G FINDLEY whose telephone number is (571)270-1199. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571)272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER G FINDLEY/Primary Examiner, Art Unit 2482