DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims: Claims 21-40 are examined below; claims 1-20 are cancelled.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 3/6/2024, 3/27/2024, 1/23/2026, and 2/5/2026 were filed and are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wolf et al. (US 2021/0012868).
Claim 21:
Wolf et al. (US 2021/0012868) anticipates the following subject matter:
An inference device connected between a surgical robot and a console controlling the surgical robot, the inference device comprising:
one or more processors; and a storage storing instructions causing any of the one or more processors to execute processing of (Figure 14 and 0290 teach the structure of a processor, storage/database/memory, and a non-transitory computer-readable medium to contain and execute instructions):
acquiring an operative field image shot by an imaging unit of the surgical robot (0285 teaches a surgical event with image frames, where 0537, 0538, and 0595 detail the use of a surgical robot with a camera for capturing and transmitting images and video footage over a network, directly wired and/or wirelessly; 0090 teaches a camera for moving to an object with a field of view and functions such as adjusting zoom, direction, and speed);
performing an inference process on the acquired operative field image (0285 details inference based on a set of images from the surgical area (operative field image)); and
transmitting at least one of the acquired operative field image and information based on an inference result to the console according to transmission settings by the console (0285-0286; specifically, 0286 teaches that additional surgeon assistance and/or guidance is requested based on the set of frames (field image) that are inferred, where Figure 14 and 0290 teach transmission to 1410 (paragraphs 0309-0311)).
Claim 22:
The inference device according to claim 21, wherein the instructions cause any of the one or more processors to execute processing of:
inferring at least one of a position of an object to be recognized in the operative field image and an event occurring in the operative field image (0086 details a region of interest of the camera (field image), where an event such as bleeding at a particular location or structure, as well as an object such as a surgical tool or a surgeon's hand, is tracked by location with control of the camera).
Claim 23:
The inference device according to claim 22, wherein the instructions cause any of the one or more processors to execute processing of:
generating image data indicating the position of the object; and transmitting the image data using one-way communication with the console (0086 details identifying the position/location of an object (surgical tool, surgeon's hand, bleeding, motion, anatomical structure), where a human operator views the ROI (region of interest), which is one-way).
Claim 24:
The inference device according to claim 23, wherein the instructions cause any of the one or more processors to execute processing of:
generating positional information indicating the position of the object; and transmitting the positional information using two-way communication with the console (0086 teaches position data and tracking of different objects and the ROI to a human operator (one-way); Figure 14 teaches network 1418 with communication going both ways).
Claim 25:
The inference device according to claim 22, wherein the instructions cause any of the one or more processors to execute processing of:
generating control information for controlling an operation of the surgical robot according to the inference result; and transmitting the control information generated by the control unit to the console (0086 teaches control of the camera as well as an infrared laser by an operator; 0553 details providing support for surgical procedures to a user with recommendations and control commands to surgical robots).
Claim 26:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
generating the control information in order to move the imaging unit following a tip portion of the recognized surgical device (0086 details identifying the position/location of an object such as a surgical tool).
Claim 27:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
computing an area of the recognized object; and generating the control information in order to move the imaging unit following a portion in which the computed area increases or decreases (0087 teaches using camera 115 to zoom in or out (increase or decrease) to image the ROI (area) at a larger size).
Claim 28:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
generating the control information in order to move the imaging unit to a designated position on the object (0086 teaches control information to move the camera to zoom in and out of an object (surgical tool, surgeon's hand, bleeding, motion, anatomical structure)).
Claim 29:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
generating the control information in order to move the imaging unit according to a distance between the imaging unit and the object (0086 teaches control information to move the camera to zoom in and out of an object (surgical tool, surgeon's hand, bleeding, motion, anatomical structure), where 0091 details consideration of the distance between the object and the camera).
Claim 30:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
inferring a motion or gesture of an operator operating the console; and generating control information in order to control an operation of the imaging unit or a surgical device according to an inference result (0089 details controlling motions such as the orientation and zoom of the camera in a given surgical procedure to track the motion of the surgeon's hands).
Claim 31:
The inference device according to claim 25, wherein the instructions cause any of the one or more processors to execute processing of:
computing an area or shape of the recognized object; and generating control information for selecting or controlling a surgical device to be used according to the area or shape of the object (0182 teaches that considerations such as color, shape, structure, and condition cause changes to the surgical procedure (control of the surgery, which above includes the camera as well as the surgical robot)).
Claim 32:
The inference device according to claim 21, wherein the instructions cause any of the one or more processors to execute processing of:
computing a confidence of the inference result (0629 and 0630 teach calculating a confidence probability or score); and changing a resolution of the operative field image according to the computed confidence (0089 teaches that camera settings such as resolution are controlled; 0124 teaches playback resolution based on a decision marker, where 0560 and 0579 detail that the decision marker is based on a confidence level).
Claim 33:
The inference device according to claim 21, wherein the instructions cause any of the one or more processors to execute processing of:
computing a confidence of the inference result (0629 and 0630 teach calculating a confidence probability or score); acquiring information of the surgical robot from the console; and computing a score of a surgery performed by the surgical robot on the basis of the computed confidence and the acquired information (0579 teaches a surgical robot with a confidence level).
Claim 34:
The inference device according to claim 21, wherein the imaging unit of the surgical robot is configured to output an operative field image for a left eye and an operative field image for a right eye, and wherein the instructions cause any of the one or more processors to execute processing of:
acquiring the operative field image for the left eye and the operative field image for the right eye output from the imaging unit, and inferring each of the acquired operative field image for the left eye and the acquired operative field image for the right eye (0091 teaches a stereo camera for the ROI, where the stereo image provides a field of view for the left eye and the right eye individually).
Claim 35:
The inference device according to claim 34, wherein the instructions cause any of the one or more processors to execute processing of:
computing a confidence of the inference result; and outputting an alert on the basis of a difference between the confidence of the inference result computed for the operative field image for the left eye and the confidence of the inference result computed for the operative field image for the right eye (0637-0638 teach an image-related data structure correlated with other data, where a change in the confidence probability generates an alert to undertake other surgical actions; the above teaches the use of stereo images (left-eye and right-eye images)).
Claim 36:
The inference device according to claim 34, wherein the instructions cause any of the one or more processors to execute processing of:
computing a confidence of the inference result; generating control information for moving the imaging unit according to a difference between the confidence of the inference result computed for the operative field image for the left eye and the confidence of the inference result computed for the operative field image for the right eye; and transmitting the generated control information to the console (0637-0638 teach an image-related data structure correlated with other data, where a change (difference) in the confidence probability generates an alert to undertake other surgical actions (generated control information); the above teaches the use of stereo images (left-eye and right-eye images)).
Claim 37:
The inference device according to claim 34, wherein the instructions cause any of the one or more processors to execute processing of:
computing depth information on the basis of the operative field image for the left eye and the operative field image for the right eye; and transmitting the computed depth information to the console (0691 teaches consideration of frames with respect to the depth of an incision during surgery, where a neural network is configured to identify more specific intraoperative features in 0692; the above teaches the use of images/frames from a stereo camera for left-eye and right-eye images).
Claim 38:
The inference device according to claim 34, wherein the instructions cause any of the one or more processors to execute processing of:
computing depth information on the basis of the operative field image for the left eye and the operative field image for the right eye; generating control information for controlling an operation of the surgical robot according to the computed depth information; and transmitting the generated control information to the console (0691 teaches consideration of frames with respect to the depth of an incision during surgery, where a neural network is configured to identify more specific intraoperative features (generated control information) in 0692; the above teaches the use of images/frames from a stereo camera for left-eye and right-eye images).
Claim 39:
Wolf et al. (US 2021/0012868) anticipates the following subject matter:
An information processing method (figure 5) executed by a computer connected between a surgical robot and a console controlling the surgical robot, the information processing method comprising:
acquiring an operative field image shot by an imaging unit of the surgical robot (0285 teaches a surgical event with image frames, where 0537, 0538, and 0595 detail the use of a surgical robot with a camera for capturing and transmitting images and video footage over a network, directly wired and/or wirelessly; 0090 teaches a camera for moving to an object with a field of view and functions such as adjusting zoom, direction, and speed);
performing inference on the acquired operative field image (0285 details inference based on a set of images from the surgical area (operative field image)); and
transmitting at least one of the operative field image and information based on an inference result to the console according to transmission settings received through the console (0285-0286; specifically, 0286 teaches that additional surgeon assistance and/or guidance is requested based on the set of frames (field image) that are inferred, where Figure 14 and 0290 teach transmission to 1410 (paragraphs 0309-0311)).
Claim 40:
Wolf et al. (US 2021/0012868) anticipates the following subject matter:
A non-transitory computer readable recording medium (Figure 14 and 0290 teach the structure of a processor, storage/database/memory, and a non-transitory computer-readable medium) storing a computer program causing a computer connected between a surgical robot and a console controlling the surgical robot to execute processing comprising:
acquiring an operative field image shot by an imaging unit of the surgical robot (0285 teaches a surgical event with image frames, where 0537, 0538, and 0595 detail the use of a surgical robot with a camera for capturing and transmitting images and video footage over a network, directly wired and/or wirelessly; 0090 teaches a camera for moving to an object with a field of view and functions such as adjusting zoom, direction, and speed);
performing inference on the acquired operative field image (0285 details inference based on a set of images from the surgical area (operative field image)); and
transmitting at least one of the operative field image and information based on an inference result to the console according to transmission settings received through the console (0285-0286; specifically, 0286 teaches that additional surgeon assistance and/or guidance is requested based on the set of frames (field image) that are inferred, where Figure 14 and 0290 teach transmission to 1410 (paragraphs 0309-0311)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shelton et al. (US 2022/0233241) teaches SURGICAL PROCEDURE MONITORING: a surgical computing system may receive usage data associated with movement of a surgical instrument and user inputs to the surgical instrument. The surgical computing system may receive motion and biomarker sensor data from sensing systems applied to the operator of the surgical instrument. The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument. The surgical computing system may determine, based on the evaluation, to provide feedback. The feedback may comprise instructions for the surgical instrument to provide haptic feedback and/or to modify its configuration. The feedback may comprise instructions for a display unit to present notifications instructing the healthcare professional.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656