Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendments dated September 2, 2025.
Claims 1-9 and 12-23 are pending.
Claim Objections
Claims 2-9, 12-13, 15-16, and 19 are objected to because of the following informalities: the phrase “wherein the one or more processors configured to execute the instructions to is further configured to” appears to contain an extraneous “to” after “instructions”, such that this term should read “wherein the one or more processors configured to execute the instructions [[to]] is further configured to”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 12-14, and 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Popovic et al. (US PGPUB 2019/0008595 – “Popovic”) in view of Tominaga (US PGPUB 2016/0157763 – “Tominaga”).
Regarding Claim 1, Popovic discloses:
A system (Popovic FIG. 2, control unit 106) for controlling a medical image capture device (Popovic FIG. 1, image acquisition device 112) during surgery, the system including: a storage medium storing instructions (Popovic FIG. 2, memory 134); and
one or more processors (Popovic FIG. 2, processor 130) configured to execute the instructions to:
receive a first image of the surgical scene, captured by the medical image capture device from a first viewpoint (Popovic paragraph [0035], “image acquisition device 112 configured to acquire live images at the surgical site S”) and additional information of the scene (Popovic paragraph [0042], “The device can further be positioned using a robotic positioner and computer controller. The robotic positioning allows for tracking of the device motion with respect to anatomy”), and at least one of surgical information indicative of the status of the surgery, position data of objects in the surgical environment, movement data of objects in the surgical environment, information regarding a type of surgical tool used by the user (Popovic paragraph [0035], “one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure”; Examiner interprets an image of the surgical instrument as providing a visual identification of the surgical tool), lighting information regarding the surgical environment, and patient information indicative of the status of the patient;
determine, for the medical image capture device, in accordance with the additional information and previous viewpoint information of surgical scenes, being viewpoints which have been used in previous surgical procedures, (Popovic paragraph [0051], “the processor 130 is generally configured to…process and store the acquired live images, e.g., in the memory 134 and/or the CRM 136, so that the processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”), one or more candidate viewpoints from which to obtain an image of the surgical scene (Popovic FIG. 1, anatomical target T), wherein the one or more candidate viewpoints are determined to provide a viewpoint having a quantifiable advantage over the first viewpoint (Popovic paragraph [0051], “processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”; Examiner interprets the viewpoint used to determine a path to the target T as having a quantifiable advantage over other viewpoints that do not determine a path to the target T);
provide, in accordance with the first image of the surgical scene, for each of the one or more candidate viewpoints, a simulated image of the surgical scene from the candidate viewpoint (Popovic FIG. 1, virtual reality device 120; Popovic paragraph [0059], “the user 333 may perform a virtual walk-through of the anatomy”);
control the medical image capture device to obtain an image of the surgical scene from the candidate viewpoint corresponding to a selection of one of the one or more simulated images of the surgical scene (Popovic paragraph [0035], “the surgical robot system 100 includes at least one robot 101, a control unit 106, and a virtual reality (VR) device 120. The robot 101 is configured to operate one or more end-effectors to be positioned at a surgical site S within a patient P, including at least one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure. Internal surgical procedures may include minimally invasive surgeries or natural orifice surgeries, for instance, involving an anatomical target T”).
Popovic discloses a system that guides a surgical robot system using a combination of live images of a patient and a virtual reality (VR) device to guide an endoscope to a target via a predetermined path. (See Popovic Abstract.) As described in Popovic paragraph [0062], “the control unit 106 may be broadly defined herein as any controller which is structurally configured to provide one or more control commands to control the acquisition and processing of live and preoperative images related to the flexible distal portion 103 of the robot 101 at the surgical site S, and the anatomical object or target T, and utilize tracking information related to selection of the target T from the VR device 130 to determine a path to the target T and to further control the flexible distal portion 103.” (Emphasis added.) Thus, Popovic’s VR system uses preoperative images to build the VR environment, which in turn is used to generate candidate viewpoints and, from them, a simulated image of the surgical scene. However, Popovic does not explicitly disclose that these preoperative images are from previous surgical procedures, as opposed to previous non-surgical procedures (e.g., a CT scan).
Tominaga teaches viewpoints which have been used in previous endoscopic surgical procedures (Tominaga FIG. 17, images 310T1, 310T2, 310T3; Tominaga paragraphs [0114] – [0115], “In these fluorescent images 310T1, 310T2, 310T3, . . . , it is preferable to display reference regions 312T1, 312T2, 312T3, . . . at the respective imaging times or an initial region of interest 313…It is also possible to display the temporal information using the results (image signals used for the generation of the past fluorescent images, setting of the reference region, setting of the region of interest, and the like) of the observation in the past, such as one week before or one month before. In the case of displaying the temporal information based on the past observation result, it is preferable to provide a database 307 in which past observation results are stored”; see also Tominaga paragraph [0127], “The configuration of the fluorescence observation devices of each of the first to fourth embodiments can be mounted in an endoscopic system. As shown in FIG. 22, an endoscopic system 510 includes an endoscope 512”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Tominaga’s past surgical images with the system disclosed by Popovic. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that detects pathological changes to an area of interest.
Regarding Claim 2, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is further configured to perform an assessment of the capability of the candidate viewpoint for use by a user and exclude those candidate viewpoints which are unsuitable for use by the user in the surgical scene (Popovic paragraph [0059], “processor 130 is configured to process forward-looking images from the forward-looking camera, to process the determined head-tracking and/or eye-tracking data from the HMD in a continuous travel mode, and to transmit the robot control signals to cause the robot to move the forward-looking camera in the continuous travel mode in the patient based upon the forward-looking images”).
Regarding Claim 3, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is further configured to:
provide the one or more simulated images of the surgical scene for display to a user (Popovic FIG. 4B; Popovic paragraph [0059], “the user 333 may perform a virtual walk-through of the anatomy”);
receive, from the user, a selection of one of the one or more simulated images of the surgical scene (Popovic paragraph [0019], “Using the live images and the head motion and/or eye movement detection improves usability of the robot by simulating experience during conventional surgery, where the surgeon moves his or her head and/or directs eye movement toward the area (target) on which the surgeon is focused”).
Regarding Claim 4, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses:
wherein the one or more processors configured to execute the instructions to is further configured to control the position and/or orientation of an articulated arm supporting the medical image capture device to control the medical image capture device to obtain an image of the surgical scene from the candidate viewpoint corresponding to the selection of one of the one or more simulated images of the surgical scene (Popovic FIG. 1, surgical robot system 100; Popovic paragraph [0035], “surgical robot system 100 includes at least one robot 101, a control unit 106, and a virtual reality (VR) device 120. The robot 101 is configured to operate one or more end-effectors to be positioned at a surgical site S within a patient P, including at least one image acquisition device 112 configured to acquire live images at the surgical site S”).
Regarding Claim 5, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to analyse the candidate viewpoints in accordance with a predetermined metric, and display the top N candidate viewpoints to the user for selection (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 6, Popovic in view of Tominaga teach the features of Claim 5, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to analyse the candidate viewpoints in accordance with a comparison of the candidate viewpoints with one or more viewpoint preferences of the user as the predetermined metric (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 7, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to evaluate the candidate viewpoints in accordance with a predetermined metric, and control a display to display, based on the evaluation, at least a subset of the candidate viewpoints (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 8, Popovic in view of Tominaga teach the features of Claim 5, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to evaluate one or more quantifiable features of the simulated images and arrange the candidate viewpoints in accordance with a result of the evaluation as the predetermined metric (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 9, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to determine the capability of the image capture device to achieve the candidate viewpoints and exclude those candidate viewpoints which are unsuitable for the image capture device (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 12, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to receive an interaction with a simulated image of the surgical scene and, on the basis of that interaction, update one or more properties of the corresponding candidate viewpoint and/or the simulated image of the surgical scene (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”; Popovic paragraph [0037], “an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 13, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to determine the viewpoint information in accordance with at least one of previous viewpoints selected by the apparatus for a surgical scene corresponding to the additional information and previous viewpoints used by other users for a surgical scene corresponding to the additional information (Popovic FIG. 1, display unit 121; Popovic paragraph [0036], “VR device 120 is configured to display the acquired live images on a VR display unit 121 to be viewed by the user”).
Regarding Claim 14, Popovic in view of Tominaga teach the features of Claim 12, as described above.
Popovic further discloses wherein the viewpoint information includes a position information and/or orientation information of the image capture device (Popovic paragraph [0063], “processor 130 may be configured to process additional positional tracking information of the rigid proximal portion 102 of the surgical robot 101 from a position tracking system (not shown) to determine motion of the rigid proximal portion 102”; Popovic paragraph [0044], “rigid proximal portion 102 may be an endoscope”).
Regarding Claim 16, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to control the image capture device to obtain an image from a number of discrete predetermined locations within the surgical scene as an initial calibration in order to obtain the previous viewpoints of the surgical scene (Popovic FIG. 4A and FIG. 4B; Popovic paragraph [0059], “the image acquisition device 112 includes a forward-looking camera as part of a robot, which may be a multi-link or concentric arc robot holding a rigid endoscope for minimally invasive surgery or snake-like or catheter-like robot 1000, as shown in FIG. 4B, for traversing natural orifices 150 (e.g., bronchoscopy). In the depicted embodiment, the user 333 may perform a virtual walk-through of the anatomy, and the robot 1000 is following along this path. The target selection and motion are continuous.”).
Regarding Claim 17, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the candidate viewpoints include at least one of a candidate location (Popovic FIG. 2, depicting endoscope 142 located at a location within surgical site S) and/or a candidate imaging property of the image capture device (Popovic paragraph [0033], “medical images may include 2D or 3D images such as those obtained via an endoscopic camera provided on a distal end of an endoscope, or via a forward-looking camera provided at the distal end of a robot (e.g. as the end effector). Also, live images may include still or video images captured through medical imaging during the minimally invasive procedure. Other medical imaging may be incorporated during the surgical process, such as images obtained by, X-ray, ultrasound, and/or magnetic resonance, for example, for a broader view of the surgical site and surrounding areas.”).
Regarding Claim 18, Popovic in view of Tominaga teach the features of Claim 17, as described above.
Popovic further discloses wherein the imaging property includes at least one of an image zoom, an image focus, an image aperture, an image contrast, an image brightness, and/or an imaging type of the image capture device (Popovic paragraph [0033], “medical images may include 2D or 3D images such as those obtained via an endoscopic camera provided on a distal end of an endoscope, or via a forward-looking camera provided at the distal end of a robot (e.g. as the end effector). Also, live images may include still or video images captured through medical imaging during the minimally invasive procedure. Other medical imaging may be incorporated during the surgical process, such as images obtained by, X-ray, ultrasound, and/or magnetic resonance, for example, for a broader view of the surgical site and surrounding areas.”).
Regarding Claim 19, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic further discloses wherein the one or more processors configured to execute the instructions to is configured to receive at least one of a touch input, a keyboard input or a voice input as the selection of the one of the one or more simulated images of the surgical scene (Popovic FIG. 2, input device(s) 126; Popovic paragraph [0037], “input device 126 may include one or more of a touch screen, keyboard, mouse, trackball, touchpad, or voice command interface, for example. In the present embodiment, the user may use the input device 126 to enter specific commands, such as sending an acknowledgment signal to the processor 130 to confirm a selected target T as determined by the processor 130”).
Regarding Claim 20, Popovic discloses:
A method of controlling a medical image capture device during surgery, the method comprising:
receiving a first image of the surgical scene, captured by the medical image capture device from a first viewpoint (Popovic paragraph [0035], “image acquisition device 112 configured to acquire live images at the surgical site S”), and additional information of the scene (Popovic paragraph [0042], “The device can further be positioned using a robotic positioner and computer controller. The robotic positioning allows for tracking of the device motion with respect to anatomy”), and at least one of surgical information indicative of the status of the surgery, position data of objects in the surgical environment, movement data of objects in the surgical environment, information regarding a type of surgical tool used by the user (Popovic paragraph [0035], “one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure”; Examiner interprets an image of the surgical instrument as providing a visual identification of the surgical tool), lighting information regarding the surgical environment, and patient information indicative of the status of the patient;
determining, for the medical image capture device, in accordance with the additional information and previous viewpoint information of surgical scenes, being viewpoints which have been used in previous surgical procedures, (Popovic paragraph [0051], “the processor 130 is generally configured to…process and store the acquired live images, e.g., in the memory 134 and/or the CRM 136, so that the processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”), one or more candidate viewpoints from which to obtain an image of the surgical scene (Popovic FIG. 1, anatomical target T);
providing, in accordance with the first image of the surgical scene, for each of the one or more candidate viewpoints, a simulated image of the surgical scene from the candidate viewpoint (Popovic FIG. 1, virtual reality device 120; Popovic paragraph [0059], “the user 333 may perform a virtual walk-through of the anatomy”), wherein the one or more candidate viewpoints are determined to provide a viewpoint having a quantifiable advantage over the first viewpoint (Popovic paragraph [0051], “processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”; Examiner interprets the viewpoint used to determine a path to the target T as having a quantifiable advantage over other viewpoints that do not determine a path to the target T);
controlling the medical image capture device to obtain an image of the surgical scene from the candidate viewpoint corresponding to a selection of one of the one or more simulated images of the surgical scene (Popovic paragraph [0035], “the surgical robot system 100 includes at least one robot 101, a control unit 106, and a virtual reality (VR) device 120. The robot 101 is configured to operate one or more end-effectors to be positioned at a surgical site S within a patient P, including at least one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure. Internal surgical procedures may include minimally invasive surgeries or natural orifice surgeries, for instance, involving an anatomical target T”).
Popovic discloses a method for guiding a surgical robot system using a combination of live images of a patient and a virtual reality (VR) device to guide an endoscope to a target via a predetermined path. (See Popovic Abstract.) As described in Popovic paragraph [0062], “the control unit 106 may be broadly defined herein as any controller which is structurally configured to provide one or more control commands to control the acquisition and processing of live and preoperative images related to the flexible distal portion 103 of the robot 101 at the surgical site S, and the anatomical object or target T, and utilize tracking information related to selection of the target T from the VR device 130 to determine a path to the target T and to further control the flexible distal portion 103.” (Emphasis added.) Thus, Popovic’s VR system uses preoperative images to build the VR environment, which in turn is used to generate candidate viewpoints and, from them, a simulated image of the surgical scene. However, Popovic does not explicitly disclose that these preoperative images are from previous surgical procedures, as opposed to previous non-surgical procedures (e.g., a CT scan).
Tominaga teaches viewpoints which have been used in previous endoscopic surgical procedures (Tominaga FIG. 17, images 310T1, 310T2, 310T3; Tominaga paragraphs [0114] – [0115], “In these fluorescent images 310T1, 310T2, 310T3, . . . , it is preferable to display reference regions 312T1, 312T2, 312T3, . . . at the respective imaging times or an initial region of interest 313…It is also possible to display the temporal information using the results (image signals used for the generation of the past fluorescent images, setting of the reference region, setting of the region of interest, and the like) of the observation in the past, such as one week before or one month before. In the case of displaying the temporal information based on the past observation result, it is preferable to provide a database 307 in which past observation results are stored”; see also Tominaga paragraph [0127], “The configuration of the fluorescence observation devices of each of the first to fourth embodiments can be mounted in an endoscopic system. As shown in FIG. 22, an endoscopic system 510 includes an endoscope 512”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Tominaga’s past surgical images with the method disclosed by Popovic. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that detects pathological changes to an area of interest.
Regarding Claim 21, Popovic discloses:
A non-transitory computer program product including instructions which, when the program is executed by a computer, cause the computer to carry out a method of controlling a medical image capture device during surgery (Popovic, FIG. 2, processor 130; Popovic paragraph [0062], “the control unit 106 may be broadly defined herein as any controller which is structurally configured to provide one or more control commands to control the acquisition and processing of live and preoperative images related to the flexible distal portion 103 of the robot 101 at the surgical site S, and the anatomical object or target T, and utilize tracking information related to selection of the target T from the VR device 130 to determine a path to the target T and to further control the flexible distal portion 103. Generally, the I/O circuitry 108 controls communication among elements and devices external to the control unit 106. The I/O circuitry 108 acts as an interface including necessary logic to interpret input and output signals or data to/from the processor 130, the VR device 120 and the robot 101”), the method comprising:
receiving a first image of the surgical scene, captured by the medical image capture device from a first viewpoint (Popovic paragraph [0035], “image acquisition device 112 configured to acquire live images at the surgical site S”), and additional information of the scene (Popovic paragraph [0042], “The device can further be positioned using a robotic positioner and computer controller. The robotic positioning allows for tracking of the device motion with respect to anatomy”), and at least one of surgical information indicative of the status of the surgery, position data of objects in the surgical environment, movement data of objects in the surgical environment, information regarding a type of surgical tool used by the user (Popovic paragraph [0035], “one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure”; Examiner interprets an image of the surgical instrument as providing a visual identification of the surgical tool), lighting information regarding the surgical environment, and patient information indicative of the status of the patient;
determining, for the medical image capture device, in accordance with the additional information and previous viewpoint information of surgical scenes, being viewpoints which have been used in previous surgical procedures, (Popovic paragraph [0051], “the processor 130 is generally configured to…process and store the acquired live images, e.g., in the memory 134 and/or the CRM 136, so that the processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”), one or more candidate viewpoints from which to obtain an image of the surgical scene (Popovic FIG. 1, anatomical target T);
providing, in accordance with the first image of the surgical scene, for each of the one or more candidate viewpoints, a simulated image of the surgical scene from the candidate viewpoint (Popovic FIG. 1, virtual reality device 120; Popovic paragraph [0059], “the user 333 may perform a virtual walk-through of the anatomy”), wherein the one or more candidate viewpoints are determined to provide a viewpoint having a quantifiable advantage over the first viewpoint (Popovic paragraph [0051], “processor 130 is able to build a database essentially visually mapping interior portions of the patient P traversed by the endoscope 142. This database may be used subsequently to determine a path to the target T”; Examiner interprets the viewpoint used to determine a path to the target T as having a quantifiable advantage over other viewpoints that do not determine a path to the target T);
controlling the medical image capture device to obtain an image of the surgical scene from the candidate viewpoint corresponding to a selection of one of the one or more simulated images of the surgical scene (Popovic paragraph [0035], “the surgical robot system 100 includes at least one robot 101, a control unit 106, and a virtual reality (VR) device 120. The robot 101 is configured to operate one or more end-effectors to be positioned at a surgical site S within a patient P, including at least one image acquisition device 112 configured to acquire live images at the surgical site S and at least one instrument 113, such has a surgical tool for performing an internal surgical procedure. Internal surgical procedures may include minimally invasive surgeries or natural orifice surgeries, for instance, involving an anatomical target T”).
Popovic discloses a system that guides a surgical robot system using a combination of live images of a patient and a virtual reality (VR) device to guide an endoscope to a target via a predetermined path. (See Popovic Abstract.) As described in Popovic paragraph [0062], “the control unit 106 may be broadly defined herein as any controller which is structurally configured to provide one or more control commands to control the acquisition and processing of live and preoperative images related to the flexible distal portion 103 of the robot 101 at the surgical site S, and the anatomical object or target T, and utilize tracking information related to selection of the target T from the VR device 130 to determine a path to the target T and to further control the flexible distal portion 103.” (Emphasis added.) Thus, Popovic’s VR system uses preoperative images to build the VR environment, which in turn is used to generate candidate viewpoints and, from them, a simulated image of the surgical scene. However, Popovic does not explicitly disclose that these preoperative images are from previous surgical procedures, as opposed to previous non-surgical procedures (e.g., a CT scan).
Tominaga teaches viewpoints which have been used in previous endoscopic surgical procedures (Tominaga FIG. 17, images 310T1, 310T2, 310T3; Tominaga paragraphs [0114] – [0115], “In these fluorescent images 310T1, 310T2, 310T3, . . . , it is preferable to display reference regions 312T1, 312T2, 312T3, . . . at the respective imaging times or an initial region of interest 313…It is also possible to display the temporal information using the results (image signals used for the generation of the past fluorescent images, setting of the reference region, setting of the region of interest, and the like) of the observation in the past, such as one week before or one month before. In the case of displaying the temporal information based on the past observation result, it is preferable to provide a database 307 in which past observation results are stored”; see also Tominaga paragraph [0127], “The configuration of the fluorescence observation devices of each of the first to fourth embodiments can be mounted in an endoscopic system. As shown in FIG. 22, an endoscopic system 510 includes an endoscope 512”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Tominaga’s past surgical images with the system/computer program product disclosed by Popovic. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that detects pathological changes to an area of interest.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Popovic et al. (US PGPUB 2019/0008595 – “Popovic”) in view of Tominaga (US PGPUB 2016/0157763 – “Tominaga”) and Nishide et al. (US PGPUB 2022/0198742 – “Nishide”).
Regarding Claim 15, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic in view of Tominaga do not explicitly teach wherein the one or more processors configured to execute the instructions to is configured to use a machine learning system trained on the previous viewpoints of the surgical scene to generate the simulated images of the candidate viewpoints.
Nishide teaches wherein the one or more processors configured to execute the instructions to is configured to use a machine learning system trained on the previous viewpoints of the surgical scene to generate the simulated images of the candidate viewpoints (Nishide FIG. 3, control unit 21; Nishide FIG. 26, matching degree determination unit 624; Nishide paragraph [0099], “control unit 21 reconstructs a virtual endoscopic image on the basis of the received three-dimensional medical image”; Nishide paragraph [0208], “matching degree determination unit 624 may compare the similarities between the plurality of constructed virtual endoscopic images and the endoscopic image”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Nishide’s artificial intelligence with the system/computer program product taught by Popovic in view of Tominaga. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that is able to generate a virtual image using little if any user input.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Popovic et al. (US PGPUB 2019/0008595 – “Popovic”) in view of Tominaga (US PGPUB 2016/0157763 – “Tominaga”) and Ambor et al. (US PGPUB 2011/0164126 – “Ambor”).
Regarding Claim 22, Popovic in view of Tominaga teach the features of Claim 1, as described above.
Popovic in view of Tominaga do not explicitly teach wherein the previous viewpoint information is from a plurality of different surgeons.
Ambor teaches wherein the previous viewpoint information is from a plurality of different surgeons (Ambor FIG. 2, reviewer list 251; Ambor paragraph [0077], “a viewer may select a complementary viewing mode 233 corresponding to one or more previous reviewers (which may be selected, for example, from list 251)”; Examiner interprets this as teaching a comparison of image analyses by different users in the past; see also Ambor paragraph [0070]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Ambor’s access to multiple other viewers/reviewers with the system taught by Popovic in view of Tominaga. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that enables collaborative analysis of images captured by the system, in order to improve the quality of the analysis.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Popovic et al. (US PGPUB 2019/0008595 – “Popovic”) in view of Tominaga (US PGPUB 2016/0157763 – “Tominaga”) and Avisar (US PGPUB 2015/0127316 – “Avisar”).
Regarding Claim 23, Popovic in view of Tominaga teach the features of Claim 12, as described above.
Although Popovic discloses updating properties of a candidate viewpoint, as described in the rejection of Claim 12 above, Popovic in view of Tominaga do not explicitly teach circuitry configured to update the simulated image in real-time in response to the interaction to adjust a property of the corresponding candidate viewpoint.
Avisar teaches circuitry configured to update the simulated image in real-time in response to the interaction to adjust a property of the corresponding candidate viewpoint (Avisar FIG. 4, Image Generator 43 and Distributed Interaction Simulation (DIS) network 44; Avisar paragraph [0069], “the Image Generator assign visual representation of each segment (shadow texture and so on), this image is connected via the DIA 44 network to a projection interface 46 and to the Host 45 that will update the image generator 43 with…the mechanical Properties and other modeling that the Host includes that all will reflect the new state that the Host will send to the IG 43 during each simulation cycle.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Avisar’s image generator and DIS network with the system taught by Popovic in view of Tominaga. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that produces updated simulation images in response to the system detecting a change in conditions that result in the generation of simulated images, in order to provide an improved simulation for comparison with real-time images.
Response to Arguments
Applicant’s arguments, see page 10, filed August 14, 2025, with respect to the interpretation of Claims 1-19 under 35 U.S.C. 112(f) have been fully considered and are persuasive in view of the present amendments. The interpretation of Claims 1-19 under 35 U.S.C. 112(f) has been withdrawn.
Applicant's arguments filed on August 14, 2025 have been fully considered but they are not persuasive. Specifically, Applicant argues on page 10 that the prior art does not teach or suggest the quantifiable advantages of a viewpoint as claimed in Claims 1 and 20-21. The advantages listed on page 10 of the August 14, 2025 arguments are not claimed. Furthermore, it is axiomatic that it is improper to import claim limitations from the specification. MPEP 2111.01(II). More specifically, the term “quantifiable advantages” is not defined in the specification, and the examples described in paragraph [0238] of the published specification only describe the features referenced on page 10 of the August 14, 2025 response as permissive (“may”) features. As such, Examiner interprets “quantifiable advantages” as any subjective advantage, including, but not limited to, the advantage that a viewpoint used to determine a path to the target T has over other viewpoints that do not determine a path to the target T.
On pages 11-12, Applicant argues that a person having ordinary skill in the art would not combine Popovic with Tominaga without impermissible hindsight. Examiner respectfully disagrees. As described in the rejection under 35 U.S.C. 103 of exemplary Claim 1, Popovic discloses a system that guides a surgical robot system using a combination of live images of a patient and a virtual reality (VR) device to guide an endoscope to a target via a predetermined path. More specifically, Popovic’s VR system uses preoperative images to create candidate viewpoints, as described in the rejection of Claim 1. Tominaga is only cited for teaching that these preoperative images are from previous surgical procedures, as opposed to previous non-surgical procedures (e.g., a CT scan). The purposes of Popovic and Tominaga described in Applicant’s August 14, 2025 arguments are irrelevant. Rather, it is the structure and operational features of the prior art, along with a motivation to combine, that are significant, rather than “why” the prior art used these structures/operational features. As described above in the rejection of Claim 1, a person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a system that detects pathological changes to an area of interest, thereby arriving at the claimed invention.
On page 13, Applicant argues that the appearance of an “image” is not the same as a “viewpoint”. Examiner respectfully disagrees. Specifically, if an image is shown, then the viewpoint (i.e., the camera is aimed at what is in the image) is known. There is no need for the prior art to disclose a separate step of matching an image to a viewpoint, as argued by Applicant.
As such, the rejections of Claims 1-9 and 12-23 under 35 U.S.C. 103 are maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIM BOICE whose telephone number is (571)272-6565. The examiner can normally be reached Monday-Friday 9:00am - 5:00pm Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen can be reached at (571)272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIM BOICE
Examiner
Art Unit 3795
/JAMES EDWARD BOICE/Examiner, Art Unit 3795
/ANH TUAN T NGUYEN/Supervisory Patent Examiner, Art Unit 3795
10/20/25