Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is responsive to the applicant’s arguments filed on 09/17/2025.
Claims 1-15, 17, and 18 are pending. Claims 1, 14, 15, and 17 are amended. Claim 16 is canceled.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/17/2025 has been entered.
Response to Arguments
Regarding claim objections:
The objection has been withdrawn in view of amendments.
Regarding rejections under 35 USC § 101:
The rejection has been withdrawn in view of amendments.
Regarding rejections under 35 USC § 103:
Applicant’s arguments regarding the 103 rejection are based on newly amended subject matter. Therefore, all arguments are addressed in the 103 rejection of the claims below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US20160166333A1), hereinafter Wang, in view of Kleiner (US20140205167A1), and further in view of Crawford et al. (US20180296283A1), hereinafter Crawford.
Regarding claim 1, Wang discloses
a surgical robot having … a robot arm ([0202]: “Furthermore, the targeting systems and methods disclosed herein can be used with a variety of robot-assisted procedures. … The manual step can be carried out using the targeting system in addition to the robotic arm's positioning to improve accuracy and speed.”);
at least one network interface connectable to obtain radiological patient images generated by a radiological image scanner ([0143]: “The connection ports 1114 may be used to connect the controller 950 to other components such as the light modules, the medical imaging device to which it is attached, an external computer, or the like.”) ([0112]: “Another example of the software embodiment of step C may involve generation of a trajectory from a set of orthogonal X-ray images.”) ([0003]: “Various imaging techniques, such as X-rays, fluoroscopy, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) play an integral role in a wide variety of medical procedures. The term “image assisted” may be used to describe medical procedures utilizing some type of imaging technique to guide the medical procedure.”) ([0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization.”);
a display device ([0115]: “The images may be displayed and options for display include, but are not limited to: the imaging device terminal (e.g. fluoroscopy screen), a diagnostic unit (e.g. PACS), a computer or electronic device (e.g. tablet) capable of displaying DICOM format images (step F).”) ([0008]: “rendering three-dimensional models of these structures, and then visually overlaying these structures on a display screen in a manner that shows the relative depth of the tissues/structures inside the patient”) ([0059]: “FIG. 30 illustrates a camera/display device such as a smartphone or tablet, displaying the targeting system”);
at least one processor ([0008]: “An image guidance system may include an information processing unit (e.g., a computer).”) ([0093]: “Such a controller may be a dedicated module, a computer, a smartphone, a tablet, or the like.”); and
at least one memory storing program code that is executed by the at least one processor to perform operations to ([0093]: “Such a controller may be a dedicated module, a computer, a smartphone, a tablet, or the like.”) ([0008]: “The information processing unit can load a patient's pre-operative and/or intra-operative images and run software that performs registration of a patient's image space to the patient's physical space and provide navigational information to the operator (e.g., surgeon).”)
obtain through the at least one network interface a first radiological patient image of cranial structure of a patient along a first plane and obtain a second radiological patient image of the cranial structure of the patient along a second plane that is angularly offset to the first plane ([0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”) ([0102]: “The solid black outline shows the imaging device taking an image at one position. The phantom outline shows the imaging device taking a second image after rotating 90 degrees.”) ([0112]: “two X-rays may be taken 90 degrees apart.”) ([0069]: “The system 10 may be well-adapted for cranial procedures”),
merge the first and second radiological patient images … to a 3D medical image of the patient in an image coordinate system ([0164]: “Thus, images taken by the cameras may be combined together with existing calibration information to generate precise three-dimensional surface maps of objects in the field of view (FOV) of the cameras.”) ([0165]: “The information from both cameras may be combined to fully calculate the three-dimensional position and orientation of the object.”) ([0112]: “Another example of the software embodiment of step C may involve generation of a trajectory from a set of orthogonal X-ray images.”) ([0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space’s coordinate system”), and
obtain a surgical trajectory plan defining an entry point on the patient’s skull and a target point in the patient’s brain captured in the merged first and second radiological patient images ([0201]: “Software integration may allow the image processing terminal (for optical based systems, this is usually a workstation connected to the camera) to be used for planning trajectories and laser position calculations.”) ([0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization. This process may be used to facilitate procedures including, but not limited to: Setting a new entry point for the desired target and recalculating the trajectory”) ([0155]: “Where the system 1210 is to be used for a cranial procedure, such as installation of an EVD, the base unit 1230 may be secured to cranial anatomy, such as the forehead (entry point on the patient’s skull).”) ([0015]: “For example, a common cranial application is a stereotactic biopsy. Traditional methods have focused on frame-based stereotactic biopsy that relies upon the application of a frame secured to the skull with sharp pins that penetrate the outer table of the skull”) ([0016]: “In this instance, the goal is to place an implant into a pre-defined area of the brain (target point in the patient’s brain).”).
Wang does not explicitly disclose
a robot base and an end-effector coupled to the robot arm, the end-effector configured to guide movement of a surgical instrument;
images obtained by the radiologic image scanner; and
alternately display in the display device the first radiological patient image and a corresponding merged digitally reconstructed radiograph (DRR) image from the 3D medical image for manual verification by a user; and
wherein the at least one processor performs operations to receive the surgical trajectory plan and to control movement of at least one motor operatively connected to move the robot arm relative to the robot base, based on the surgical trajectory plan.
However, Kleiner teaches
images obtained by the radiologic image scanner ([0003]: “The treatment planning phase may include the performance of CT scanning or other medical imaging techniques to acquire image data”) ([0030]: “The two-dimensional cross-sections x-ray scans may be obtained by rotating the diagnostic radiation source 111 in conjunction with diagnostic radiation imager 113 on around the target volume. The data obtained in the two-dimensional x-ray scans may be subsequently combined according to various algorithms to generate a three dimensional model of the target volume.”) ([0031]-[0032]: “The first imaging device 201 may be used to acquire planning images, and may be implemented as diagnostic computer tomography (CT) scanning devices, computer-assisted tomography (CAT) devices, or magnetic resonance imaging (MRI) devices. … As depicted in FIG. 2, the first imaging device 201 may be configured to generate planning imaging data for a subject.”); and
alternately displaying the DRR image corresponding to a radiological image for manual verification by a user ([0002]: “Diagnostic radiology deals with the use of various imaging modalities to aid in the diagnosis of a disease or condition in a subject. … Conventional medical imaging processes involving CT scans typically produce a series of 2-dimensional images of a target area which can be subsequently combined using computerized algorithms to generate a 3-dimensional image or model of the target area.”) ([0005]: “Manual matching is typically performed by a radiologist, technician, or other such user through a computing system to confirm the match of characteristics in a target region of a subject as displayed in a produced verification image with the same characteristics of the same subject in a previously acquired “reference” image. Typically, one or more digitally reconstructed radiographs, or “DRRs” are generated from the pre-acquired planning images and displayed to the user alongside the verification image. A user is then able to verify or reject the generated DRR as a match with the acquired verification. A confirmed match could result in a registration between the generated DRR and the acquired verification image. If rejected, additional reference images are generated from the subject's previously acquired image data for the user's review.”).
Wang and Kleiner are analogous art to the claimed invention because both are in the same field of endeavor, namely the processing of radiographic images of a patient.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into Wang Kleiner’s teaching of obtaining radiologic images and alternately displaying a corresponding DRR image for manual verification by a user, so that a corresponding merged digitally reconstructed radiograph (DRR) image is generated from the radiologic images and alternately displayed along with the first radiological patient image.
One of ordinary skill in the art would have been motivated to make this modification because doing so allows a user to verify or modify the generated images, which would enhance the accuracy of the generated image and thereby support the treatment of diseases (Kleiner, [0003]: “Therapeutic radiology or radiation oncology involves the use of radiation to treat diseases such as cancer through the directed application of radiation to targeted areas.”) (Kleiner, [0005]: “The user is able to verify the DRRs as a match to the verification image by visually matching characteristic regions, or, alternately, to dynamically generate additional DRRs that may be more suitable by actuating a portion of the generated DRR.”).
Therefore, the combination of Wang and Kleiner teaches
merge the first and second radiological patient images, obtained by the radiologic image scanner, to a 3D medical image of the patient in an image coordinate system (Wang, [0164]: “Thus, images taken by the cameras may be combined together with existing calibration information to generate precise three-dimensional surface maps of objects in the field of view (FOV) of the cameras.”) (Wang, [0165]: “The information from both cameras may be combined to fully calculate the three-dimensional position and orientation of the object.”) (Wang, [0112]: “Another example of the software embodiment of step C may involve generation of a trajectory from a set of orthogonal X-ray images.”) (Wang, [0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space’s coordinate system”) (Kleiner, [0003]: “The treatment planning phase may include the performance of CT scanning or other medical imaging techniques to acquire image data”) (Kleiner, [0030]: “The two-dimensional cross-sections x-ray scans may be obtained by rotating the diagnostic radiation source 111 in conjunction with diagnostic radiation imager 113 on around the target volume. The data obtained in the two-dimensional x-ray scans may be subsequently combined according to various algorithms to generate a three dimensional model of the target volume.”) (Kleiner, [0031]-[0032]: “The first imaging device 201 may be used to acquire planning images, and may be implemented as diagnostic computer tomography (CT) scanning devices, computer-assisted tomography (CAT) devices, or magnetic resonance imaging (MRI) devices. … As depicted in FIG. 2, the first imaging device 201 may be configured to generate planning imaging data for a subject.”),
alternately display in the display device the first radiological patient image and a corresponding merged digitally reconstructed radiograph (DRR) image from the 3D medical image for manual verification by a user (Wang, [0164]: “Thus, images taken by the cameras may be combined together with existing calibration information to generate precise three-dimensional surface maps of objects in the field of view (FOV) of the cameras.”) (Wang, [0165]: “The information from both cameras may be combined to fully calculate the three-dimensional position and orientation of the object.”) (Wang, [0112]: “Another example of the software embodiment of step C may involve generation of a trajectory from a set of orthogonal X-ray images.”) (Wang, [0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space’s coordinate system”) (Kleiner, [0002]: “Diagnostic radiology deals with the use of various imaging modalities to aid in the diagnosis of a disease or condition in a subject. … Conventional medical imaging processes involving CT scans typically produce a series of 2-dimensional images of a target area which can be subsequently combined using computerized algorithms to generate a 3-dimensional image or model of the target area.”) (Kleiner, [0005]: “Manual matching is typically performed by a radiologist, technician, or other such user through a computing system to confirm the match of characteristics in a target region of a subject as displayed in a produced verification image with the same characteristics of the same subject in a previously acquired “reference” image. Typically, one or more digitally reconstructed radiographs, or “DRRs” are generated from the pre-acquired planning images and displayed to the user alongside the verification image. A user is then able to verify or reject the generated DRR as a match with the acquired verification. A confirmed match could result in a registration between the generated DRR and the acquired verification image. If rejected, additional reference images are generated from the subject's previously acquired image data for the user's review.”).
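As a purely illustrative aside (not drawn from Wang, Kleiner, or the claims, and with all array shapes and helper names hypothetical), a digitally reconstructed radiograph of the kind discussed above can be sketched as a line integral through a reconstructed 3D volume, with the display toggled between the acquired radiograph and the DRR for manual verification:

import numpy as np

def make_drr(volume, axis=0):
    # Integrate attenuation values along the projection axis to form a 2D DRR.
    drr = volume.sum(axis=axis)
    # Normalize to [0, 1] so the DRR can be displayed next to the acquired X-ray.
    return (drr - drr.min()) / (drr.max() - drr.min() + 1e-9)

def alternate_view(radiograph, drr, show_drr):
    # Return whichever image the user has toggled to for manual verification.
    return drr if show_drr else radiograph

# Hypothetical stand-ins for the merged 3D image and the first radiological image.
volume = np.random.rand(64, 64, 64)
drr_first_plane = make_drr(volume, axis=0)
acquired_first_plane = np.random.rand(64, 64)
current_display = alternate_view(acquired_first_plane, drr_first_plane, show_drr=True)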
Wang/Kleiner still does not explicitly teach
a robot base and an end-effector coupled to the robot arm, the end-effector configured to guide movement of a surgical instrument;
wherein the at least one processor performs operations to receive the surgical trajectory plan and to control movement of at least one motor operatively connected to move the robot arm relative to the robot base, based on the surgical trajectory plan.
However, Crawford teaches
a surgical robot having a robot base, a robot arm coupled to the robot base, and an end-effector coupled to the robot arm ([0065]: “Surgical robot system 600 may comprise end-effector 602, robot arm 604, guide tube 606, instrument 608, and robot base 610. … In an exemplary operation, robot base 610 may be configured to be in electronic communication with robot arm 604 and end-effector”) ([0045]: “End-effector 112 may be coupled to the robot arm 104 and controlled by at least one motor.”);
the end-effector configured to guide movement of a surgical instrument ([0045]: “In some embodiments, end-effector 112 can comprise any known structure for effecting the movement of the surgical instrument 608 in a desired manner.”); and
wherein the at least one processor performs operations to receive the surgical trajectory plan and to control movement of at least one motor operatively connected to move the robot arm relative to the robot base, based on the surgical trajectory plan ([0048]: “In exemplary embodiments, system 100 can use tracking information collected from each of the marked objects to calculate the orientation and location, for example, of the end-effector 112, the surgical instrument 608 (e.g., positioned in the tube 114 of the end-effector 112)”) ([0065]: “Trajectory 614 may represent a path of movement that instrument tool 608 is configured to travel”) ([0045]: “End-effector 112 may be coupled to the robot arm 104 and controlled by at least one motor.”) ([0063]: “Motion control subsystem 506 may be configured to physically move vertical column 312, upper arm 306, lower arm 308, or rotate end-effector 310. The physical movement may be conducted through the use of one or more motors 510-518.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical robot of Wang/Kleiner to incorporate the surgical robot of Crawford, thereby providing a surgical robot with a robot base, a robot arm coupled to the robot base, and an end-effector coupled to the robot arm that is configured to guide movement of a surgical instrument, and to track the pose of the robot.
One of ordinary skill in the art would have been motivated to make this modification because using such a robot, and being able to track its pose and control it, assists a surgeon in performing an operation (Crawford, [0065]: “In an exemplary operation, robot base 610 may be configured to be in electronic communication with robot arm 604 and end-effector 602 so that surgical robot system 600 may assist a user (for example, a surgeon) in operating on the patient 210.”).
Therefore, the combination of Wang/Kleiner and Crawford teaches
a surgical robot having a robot base, a robot arm coupled to the robot base, and an end-effector coupled to the robot arm, the end-effector configured to guide movement of a surgical instrument (Wang, [0202]: “Furthermore, the targeting systems and methods disclosed herein can be used with a variety of robot-assisted procedures. … The manual step can be carried out using the targeting system in addition to the robotic arm's positioning to improve accuracy and speed.”) (Crawford, [0065]: “Surgical robot system 600 may comprise end-effector 602, robot arm 604, guide tube 606, instrument 608, and robot base 610. … In an exemplary operation, robot base 610 may be configured to be in electronic communication with robot arm 604 and end-effector”) (Crawford, [0045]: “End-effector 112 may be coupled to the robot arm 104 and controlled by at least one motor.”) (Crawford, [0045]: “In some embodiments, end-effector 112 can comprise any known structure for effecting the movement of the surgical instrument 608 in a desired manner.”); and
wherein the at least one processor performs operations to receive the surgical trajectory plan and to control movement of at least one motor operatively connected to move the robot arm relative to the robot base, based on the surgical trajectory plan (Crawford, [0048]: “In exemplary embodiments, system 100 can use tracking information collected from each of the marked objects to calculate the orientation and location, for example, of the end-effector 112, the surgical instrument 608 (e.g., positioned in the tube 114 of the end-effector 112)”) (Crawford, [0065]: “Trajectory 614 may represent a path of movement that instrument tool 608 is configured to travel”) (Crawford, [0045]: “End-effector 112 may be coupled to the robot arm 104 and controlled by at least one motor.”) (Crawford, [0063]: “Motion control subsystem 506 may be configured to physically move vertical column 312, upper arm 306, lower arm 308, or rotate end-effector 310. The physical movement may be conducted through the use of one or more motors 510-518.”).
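For illustration only (a generic sketch under assumed coordinates, not Crawford’s actual control code), a surgical trajectory plan of the kind mapped above reduces to an entry point, a unit approach direction, and an insertion depth that a motion controller could use to position the guide tube:

import numpy as np

def trajectory_from_plan(entry_mm, target_mm):
    # Reduce the plan to an entry point, a unit approach direction, and a depth.
    entry = np.asarray(entry_mm, dtype=float)
    vector = np.asarray(target_mm, dtype=float) - entry
    depth = float(np.linalg.norm(vector))
    return entry, vector / depth, depth

# Hypothetical entry and target points in the image coordinate system (mm).
entry_point = [10.0, 42.5, 88.0]
target_point = [25.0, 60.0, 40.0]
entry, approach_direction, depth_mm = trajectory_from_plan(entry_point, target_point)
# A motion controller would then drive the arm motors until the guide tube axis
# is aligned with approach_direction at entry (details depend on the robot model).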
Regarding claim 2, Wang/Kleiner/Crawford teaches
wherein the first and second radiological patient images are angularly offset in a range between 90° and 30° (Wang, [0102]: “The solid black outline shows the imaging device taking an image at one position. The phantom outline shows the imaging device taking a second image after rotating 90 degrees.”) (Wang, [0112]: “two X-rays may be taken 90 degrees apart.”).
Regarding claim 3, Wang/Kleiner/Crawford teaches
generate a three dimensional graphical representation in the image coordinate system of the cranial structure captured in the first and second radiological patient images (Wang, [0164]: “Thus, images taken by the cameras may be combined together with existing calibration information to generate precise three-dimensional surface maps of objects in the field of view (FOV) of the cameras.”) (Wang, [0165]: “The information from both cameras may be combined to fully calculate the three-dimensional position and orientation of the object.”).
Regarding claim 4, Wang/Kleiner/Crawford teaches
receive a user designation of the entry point on the patient’s skull and the target point in the patient’s brain defined relative to the image coordinate system (Wang, [0111]: “The operator may select the entry point and the target on the cross-sectional image”) (Wang, [0177]: “This may allow for updating of anatomical information, as well as input from the user to select different entry and/or target point(s).”) (Wang, [0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space’s coordinate system, usually expressed as a Cartesian system with an arbitrary origin and principle axis.”); and
store the user designation of the entry point and the target point in the surgical trajectory plan (Wang, [0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization. This process may be used to facilitate procedures including, but not limited to: Setting a new entry point for the desired target and recalculating the trajectory”) (Wang, [0180]: “The targeting system 1310 may then receive an indication from the operator that the guide probe 1410 is now pointing at the new desired entry point. The targeting system 1310 may then recalculate the trajectory based on the position of the new desired entry point in order to keep the operator aligned with the target deep inside the patient.”).
Regarding claim 5, Wang/Kleiner/Crawford teaches
receive a user designation of the target point in the patient’s brain defined relative to the image coordinate system (Wang, [0111]: “The operator may select the entry point and the target on the cross-sectional image”) (Wang, [0177]: “This may allow for updating of anatomical information, as well as input from the user to select different entry and/or target point(s).”) (Wang, [0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space's coordinate system, usually expressed as a Cartesian system with an arbitrary origin and principle axis.”);
generate a set of preset trajectories based on the target point and a knowledgebase of cranial surgical procedures (Wang, [0198]: “Planning can be carried out on cross-sectional or planar imaging to define entry points, targets, and safe trajectories.”) (Wang, [0173]: “With knowledge of rotation and translation, the images can be transformed according to calibration data obtained beforehand, and trajectory planning and targeting can be performed.”);
receive a user selection of one of the preset trajectories (Wang, [0112]: “The ideal trajectory projections may be identified by the end user”) (Wang, [0180]: “For example, the operator may decide that a planned trajectory entry point is not desirable (e.g., because the current trajectory and/or current entry point of the planned trajectory is located over a wound, a sore, or some other kind of obstruction, such as a bandage, etc.).”);
determining the entry point on the patient’s skull based on the selected one of the preset trajectories (Wang, [0008]: “The software may also include the ability to perform multi-planar reconstructions and targeting/trajectory planning to identify specific entry points, trajectories, target zones, etc.”) (Wang, [0165]: “Setting a new entry point for the desired target and recalculating the trajectory”); and
generate the surgical trajectory plan based on the user designation of the target point and the determined entry point (Wang, [0198]: “Planning can be carried out on cross-sectional or planar imaging to define entry points, targets, and safe trajectories.”) (Wang, [0180]: “The targeting system 1310 may then receive an indication from the operator that the guide probe 1410 is now pointing at the new desired entry point. The targeting system 1310 may then recalculate the trajectory based on the position of the new desired entry point in order to keep the operator aligned with the target deep inside the patient.”) (Wang, [0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization. This process may be used to facilitate procedures including, but not limited to: Setting a new entry point for the desired target and recalculating the trajectory”).
Regarding claim 6, Wang/Kleiner/Crawford teaches
generate a set of preset trajectories based on the target point and a knowledgebase of cranial surgical procedures (Wang, [0198]: “Planning can be carried out on cross-sectional or planar imaging to define entry points, targets, and safe trajectories.”) (Wang, [0173]: “With knowledge of rotation and translation, the images can be transformed according to calibration data obtained beforehand, and trajectory planning and targeting can be performed.”),
by displaying graphical representations of the trajectories as overlays on the first and second radiological patient images (Wang, [0190]: “FIG. 33 illustrates a screen device 2100 displaying the targeting system 2000, fiducial maker cube 1900, and patient shown in FIG. 32, including a virtual trajectory 2110, targeting line, or virtual planned trajectory. … The screen device 2100 may also utilize the 3-D map for registration with other 3-D images of the patient (e.g., CT/MRI scans) in order to create and display augmented virtual images of the patient with overlays of planned trajectories and segmented anatomical structures hidden deep inside the patient onto an image or live video stream. This can help the operator visualize, target, and plan trajectories for structures deep inside the patient. FIG. 33 also shows an overlay of a virtual trajectory 2110 targeting a structure (not shown) inside the patient with the entry point of the trajectory on the outer surface of the patient”); and
receive the user selection of one of the preset trajectories (Wang, [0112]: “The ideal trajectory projections may be identified by the end user”) (Wang, [0180]: “For example, the operator may decide that a planned trajectory entry point is not desirable (e.g., because the current trajectory and/or current entry point of the planned trajectory is located over a wound, a sore, or some other kind of obstruction, such as a bandage, etc.).”),
by receiving the user selection of one of the displayed graphical representations of the trajectories (Wang, [0112]: “The ideal trajectory projections may be identified by the end user”) (Wang, [0180]: “For example, the operator may decide that a planned trajectory entry point is not desirable (e.g., because the current trajectory and/or current entry point of the planned trajectory is located over a wound, a sore, or some other kind of obstruction, such as a bandage, etc.).”) (Wang, [0143]: “The control interface 1112 may be used by the user to change the settings of the system 910, the system 1010, or manually key in the orientations of the light sources, turn light modules on or off, manually enter the position and/or orientation of the desired trajectory, or the like.”) (Wang, [0190]: “FIG. 33 illustrates a screen device 2100 displaying the targeting system 2000, fiducial maker cube 1900, and patient shown in FIG. 32, including a virtual trajectory 2110, targeting line, or virtual planned trajectory. … The screen device 2100 may also utilize the 3-D map for registration with other 3-D images of the patient (e.g., CT/MRI scans) in order to create and display augmented virtual images of the patient with overlays of planned trajectories and segmented anatomical structures hidden deep inside the patient onto an image or live video stream. This can help the operator visualize, target, and plan trajectories for structures deep inside the patient. FIG. 33 also shows an overlay of a virtual trajectory 2110 targeting a structure (not shown) inside the patient with the entry point of the trajectory on the outer surface of the patient”).
Regarding claim 7, Wang/Kleiner/Crawford teaches
define the target point and the entry point in the surgical trajectory plan as locations relative to the image coordinate system (Wang, [0005]: “From cross-sectional imaging, a three-dimensional data set may be constructed using the first image space's coordinate system, usually expressed as a Cartesian system with an arbitrary origin and principle axis.”).
Regarding claim 8, Wang/Kleiner/Crawford teaches
define the target point and the entry point in the surgical trajectory plan as locations relative to the first and second radiological patient images (Wang, [0111]: “The operator may select the entry point and the target on the cross-sectional image”).
Regarding claim 9, Wang/Kleiner teaches
… a physical surgical instrument to be used during a cranial surgical procedure on the patient (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures such as the installation of external ventricular drains (EVD's) or the like, and may be used to project a targeting line along the trajectory a surgical instrument is to follow in order to properly perform the procedure.”) (Wang, [0022]: “The targeting line may be visualized, for example, by projecting it on an instrument.”);
responsive to receipt of user inputs, control angular orientation and location of the … surgical instrument … relative to the entry point on the patient's skull and the target point in the patient's brain captured in the merged first and second radiological patient images (Wang, [0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization.”) (Wang, [0116]: “Such a visualization guide may be used to facilitate viewing of the targeting line and/or guiding of a surgical instrument along the desired trajectory.”) (Wang, [0023]: “The targeting system may then be activated to project the targeting line, thereby indicating the trajectory proximate the entry point at which the instrument is to enter the patient's anatomy.”) (Wang, [0022]: “Orientation of the instrument such that the targeting line is visible as a line on the instrument may indicate that the instrument is properly oriented along the trajectory.”) (Wang, [0017]: “There are numerous potential applications of the image-guided techniques disclosed herein … Another example includes the placement of pedicle screws in spinal surgery, which rely upon a precise trajectory and angle of insertion to prevent neurological injury and screw misplacement.”) (Wang, [0027]: “directing the angle of osteotomies, and guiding the placement of other instruments such as catheters, ultrasound probe, rigid endoscopes, etc.”); and
store an indication of the angular orientation and location of the graphical surgical instrument in the surgical trajectory plan (Wang, [0008]: “The software may also include the ability to perform multi-planar reconstructions and targeting/trajectory planning to identify specific entry points, trajectories, target zones, etc.”) (Wang, [0116]: “Such a visualization guide may be used to facilitate viewing of the targeting line and/or guiding of a surgical instrument along the desired trajectory.”) (Wang, [0023]: “The targeting system may then be activated to project the targeting line, thereby indicating the trajectory proximate the entry point at which the instrument is to enter the patient's anatomy.”) (Wang, [0022]: “Orientation of the instrument such that the targeting line is visible as a line on the instrument may indicate that the instrument is properly oriented along the trajectory.”).
Wang/Kleiner does not explicitly teach a graphical surgical instrument and displaying the graphical surgical instrument.
However, Crawford teaches a graphical surgical instrument and displaying it ([0088]: “The objects may be tracked through graphical representations of the surgical instrument 608 on the images of the targeted anatomical structure.”) ([0061]: “Computer subsystem 504 includes computer 408, display 304, and speaker 536. Computer 504 includes an operating system and software to operate system 300. Computer 504 may receive and process information from other components (for example, tracking subsystem 532, platform subsystem 502, and/or motion control subsystem 506) in order to display information to the user.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical instrument of Wang/Kleiner to incorporate the graphical surgical instrument representing a physical surgical instrument of Crawford to provide a graphical surgical instrument representing a physical surgical instrument, display it, control its angular orientation and location, and reflect that information in a surgical trajectory plan.
One of ordinary skill in the art would have been motivated to make this modification because using a graphical surgical instrument allows objects such as markers to be overlaid on the image space and helps track the objects (Crawford, [0088]: “At steps 1514 and 1516, the navigation space may be overlaid on the image space and objects with markers visible in the navigation space (for example, surgical instruments 608 with optical markers 804). The objects may be tracked through graphical representations of the surgical instrument 608 on the images of the targeted anatomical structure.”).
Therefore, the combination of Wang/Kleiner and Crawford teaches
display a graphical surgical instrument representing a physical surgical instrument to be used during a cranial surgical procedure on the patient (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures such as the installation of external ventricular drains (EVD's) or the like, and may be used to project a targeting line along the trajectory a surgical instrument is to follow in order to properly perform the procedure.”) (Crawford, [0088]: “The objects may be tracked through graphical representations of the surgical instrument 608 on the images of the targeted anatomical structure.”) (Crawford, [0061]: “Computer subsystem 504 includes computer 408, display 304, and speaker 536. Computer 504 includes an operating system and software to operate system 300. Computer 504 may receive and process information from other components (for example, tracking subsystem 532, platform subsystem 502, and/or motion control subsystem 506) in order to display information to the user.”);
responsive to receipt of user inputs, control angular orientation and location of the graphical surgical instrument displayed relative to the entry point on the patient’s skull and the target point in the patient's brain captured in the merged first and second radiological patient images (Wang, [0165]: “The information can be passed to a computer and used in a manner that best facilitates trajectory visualization.”) (Wang, [0116]: “Such a visualization guide may be used to facilitate viewing of the targeting line and/or guiding of a surgical instrument along the desired trajectory.”) (Wang, [0023]: “The targeting system may then be activated to project the targeting line, thereby indicating the trajectory proximate the entry point at which the instrument is to enter the patient's anatomy.”) (Wang, [0022]: “Orientation of the instrument such that the targeting line is visible as a line on the instrument may indicate that the instrument is properly oriented along the trajectory.”) (Wang, [0017]: “There are numerous potential applications of the image-guided techniques disclosed herein … Another example includes the placement of pedicle screws in spinal surgery, which rely upon a precise trajectory and angle of insertion to prevent neurological injury and screw misplacement.”) (Wang, [0027]: “directing the angle of osteotomies, and guiding the placement of other instruments such as catheters, ultrasound probe, rigid endoscopes, etc.”) (Wang, [0155]: “Where the system 1210 is to be used for a cranial procedure, such as installation of an EVD, the base unit 1230 may be secured to cranial anatomy, such as the forehead (entry point on the patient’s skull).”) (Wang, [0015]: “For example, a common cranial application is a stereotactic biopsy. Traditional methods have focused on frame-based stereotactic biopsy that relies upon the application of a frame secured to the skull with sharp pins that penetrate the outer table of the skull”) (Wang, [0016]: “In this instance, the goal is to place an implant into a pre-defined area of the brain (target point in the patient’s brain).”) (Crawford, [0088]: “The objects may be tracked through graphical representations of the surgical instrument 608 on the images of the targeted anatomical structure.”).
Regarding claim 10, Wang/Kleiner teaches
determine occurrence of a first condition when a marker tracking camera can track reflective markers that are on a cranial fluoroscopy registration fixture (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0197]: “Appropriate planning can be carried out on cross-sectional imaging pre-operatively or intra-operatively on the fluoroscopy images.”) (Wang, [0085]: “These posts 234 may be designed to engage registration markers or fiducials which are commonly used by various image guidance systems.”) (Wang, [0009]: “The markers can be small spheres of a pre-defined diameter coated in a reflective coating that may be optimized for the wavelength of infrared radiation.”);
markers attached to a robot arm (Wang, [0085]: “FIGS. 5A-5B, the template may include a baseplate 228 with plurality of posts 234 that protrude from the bottom portion 233. … In some cases, the posts 234 themselves may act as registration markers.”) (Wang, [0203]: “Alternatively, a targeting system as described herein may be mounted on the end of a robotic arm.”); and
… allow operations to be performed to obtain the first radiological patient image of the cranial structure of the patient along a first plane and to obtain the second radiological patient image of the cranial structure of the patient along a second plane that is angularly offset to the first plane (Wang, [0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”) (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures”).
Wang/Kleiner does not explicitly teach dynamic reference base markers.
However, Crawford teaches using dynamic reference base markers ([0080]: “To track the position of the patient 210, a patient tracking device 116 may include a patient fixation instrument 1402 to be secured to a rigid anatomical structure of the patient 210 and a dynamic reference base (DRB) 1404 may be securely attached to the patient fixation instrument 1402. … Dynamic reference base 1404 may contain markers 1408 that are visible to tracking devices, such as tracking subsystem 532.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the markers of Wang/Kleiner to incorporate the dynamic reference base markers of Crawford to provide a system using dynamic reference base markers.
One of ordinary skill in the art would have been motivated to make this modification because dynamic reference base markers allow tracking of a targeted anatomical structure and registration with the location of the targeted anatomical structure (Crawford, [0081]: “In order to track the targeted anatomical structure, dynamic reference base 1404 is associated with the targeted anatomical structure through the use of a registration fixture that is temporarily placed on or near the targeted anatomical structure in order to register the dynamic reference base 1404 with the location of the targeted anatomical structure.”).
Therefore, the combination of Wang/Kleiner and Crawford teaches
determine occurrence of a second condition when the marker tracking camera can track dynamic reference base markers attached to a robot arm and/or an end-effector of a surgical robot (Wang, [0085]: “FIGS. 5A-5B, the template may include a baseplate 228 with plurality of posts 234 that protrude from the bottom portion 233. … In some cases, the posts 234 themselves may act as registration markers.”) (Wang, [0203]: “Alternatively, a targeting system as described herein may be mounted on the end of a robotic arm.”) (Crawford, [0080]: “To track the position of the patient 210, a patient tracking device 116 may include a patient fixation instrument 1402 to be secured to a rigid anatomical structure of the patient 210 and a dynamic reference base (DRB) 1404 may be securely attached to the patient fixation instrument 1402. … Dynamic reference base 1404 may contain markers 1408 that are visible to tracking devices, such as tracking subsystem 532.”) (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”); and
while both the first and second conditions continue to occur, allow operations to be performed to obtain the first radiological patient image of the cranial structure of the patient along a first plane and to obtain the second radiological patient image of the cranial structure of the patient along a second plane that is angularly offset to the first plane (Wang, [0085]: “FIGS. 5A-5B, the template may include a baseplate 228 with plurality of posts 234 that protrude from the bottom portion 233. … In some cases, the posts 234 themselves may act as registration markers.”) (Wang, [0203]: “Alternatively, a targeting system as described herein may be mounted on the end of a robotic arm.”) (Crawford, [0080]: “To track the position of the patient 210, a patient tracking device 116 may include a patient fixation instrument 1402 to be secured to a rigid anatomical structure of the patient 210 and a dynamic reference base (DRB) 1404 may be securely attached to the patient fixation instrument 1402. … Dynamic reference base 1404 may contain markers 1408 that are visible to tracking devices, such as tracking subsystem 532.”) (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”) (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures”).
Regarding claim 11, Wang/Kleiner/Crawford teaches
after a registration fixture that includes fiducials is attached to cranial structure of the patient (Wang, Figs. 5A and 5B show the baseplate (registration fixture)) (Wang, [0026]: “Furthermore, the addition of an optical tracker/reference/fiducial during or after registration allows patient anatomy to move independently of the targeting system while allowing the patient anatomy to be tracked and the registration to be continually updated.”) (Wang, [0085]: “These posts 234 may be designed to engage registration markers or fiducials which are commonly used by various image guidance systems.”) (Wang, [0090]: “Such fiducial markers may be attached, for example, through the aid of a baseplate 228 such as that of FIGS. 5A-5B, as set forth above. The feet may take the form of posts 334, which may register in such fiducial markers or other registration attachments.”) (Wang, [0074]: “For example, in a neurosurgical setting, the base component 13 of the system 10 may be attached to a patient's forehead with the targeting area covering the convexity of the cranium.”),
obtain the first radiological patient image of the cranial structure of the patient and the fiducials along a first plane and to obtain the second radiological patient image of the cranial structure of the patient and the fiducials along a second plane that is angularly offset to the first plane (Wang, [0112]: “After attaching the reference marker (fiducials or baseplate), two X-rays may be taken 90 degrees apart.”) (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”) (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker).
Regarding claim 12, Wang/Kleiner teaches
register locations of the fiducials on a registration frame to locations of the fiducials captured in the first and second radiological patient images (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker 1900 and continuously update the orientation of a three-dimensional map in space based on a current position of the fiducial marker 1900, and, based on the orientation of the three-dimensional map, determine an updated orientation of a first light source and a second light source to indicate an updated targeting line and an updated trajectory.”) (Wang, [0006]: “After the two coordinate systems have been established, the image space may be correlated to the physical space through a process known as registration. Registration refers to the coordinate transformation of one space into another.”) (Wang, [0011]: “The frame may provide measurement of the physical space around the patient's head that directly correlates with the image space since the frame is simultaneously captured on the cross-sectional imaging scan.”); and
display … a three dimensional graphical representation of the cranial structure captured in the first and second radiological patient images, based on the registration of the locations of the fiducials on the registration frame to the locations of the fiducials captured in the first and second radiological patient images (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker 1900 and continuously update the orientation of a three-dimensional map in space based on a current position of the fiducial marker 1900, and, based on the orientation of the three-dimensional map, determine an updated orientation of a first light source and a second light source to indicate an updated targeting line and an updated trajectory.”) (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures”) (Wang, [0008]: “rendering three-dimensional models of these structures, and then visually overlaying these structures on a display screen in a manner that shows the relative depth of the tissues/structures inside the patient”).
Wang/Kleiner does not explicitly disclose a three dimensional graphical representation of the registration fixture overlaid on the three dimensional graphical representation of cranial structure.
However, Crawford teaches a three dimensional graphical representation of the registration fixture overlaid on the images of the targeted anatomical structure ([0085]: “a graphical representation of the registration fixture 1410 may be overlaid on the images of the targeted anatomical structure.”).
Since Wang/Kleiner teaches a registration fixture and the idea of virtually overlaying different components to plan/visualize a surgical trajectory, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the registration fixture of Wang/Kleiner to incorporate the three dimensional graphical representation of the registration fixture of Crawford to provide a three dimensional graphical representation of the registration fixture and overlay it on a three dimensional graphical representation of cranial structure.
One of ordinary skill in the art would have been motivated to make this modification because doing so allows objects such as markers to be overlaid on the image space and helps track the objects (Crawford, [0088]: “At steps 1514 and 1516, the navigation space may be overlaid on the image space and objects with markers visible in the navigation space (for example, surgical instruments 608 with optical markers 804). The objects may be tracked through graphical representations of the surgical instrument 608 on the images of the targeted anatomical structure.”).
Therefore, the combination of Wang/Kleiner and Crawford teaches
display a three dimensional graphical representation of the registration fixture overlaid on a three dimensional graphical representation of the cranial structure captured in the first and second radiological patient images, based on the registration of the locations of the fiducials on the registration frame to the locations of the fiducials captured in the first and second radiological patient images (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker 1900 and continuously update the orientation of a three-dimensional map in space based on a current position of the fiducial marker 1900, and, based on the orientation of the three-dimensional map, determine an updated orientation of a first light source and a second light source to indicate an updated targeting line and an updated trajectory.”) (Wang, [0069]: “The system 10 may be well-adapted for cranial procedures”) (Wang, [0008]: “rendering three-dimensional models of these structures, and then visually overlaying these structures on a display screen in a manner that shows the relative depth of the tissues/structures inside the patient”) (Crawford, [0085]: “a graphical representation of the registration fixture 1410 may be overlaid on the images of the targeted anatomical structure.”).
Regarding claim 13, Wang/Kleiner/Crawford teaches
the registration fixture further includes reflective markers (Wang, Figs. 5A and 5B shows the baseplate (registration fixture)) (Wang, [0026]: “Furthermore, the addition of an optical tracker/reference/fiducial during or after registration allows patient anatomy to move independently of the targeting system while allowing the patient anatomy to be tracked and the registration to be continually updated.”) (Wang, [0085]: “These posts 234 may be designed to engage registration markers or fiducials which are commonly used by various image guidance systems.”) (Wang, [0009]: “The markers can be small spheres of a pre-defined diameter coated in a reflective coating that may be optimized for the wavelength of infrared radiation.”); and
the at least one processor further performs operations to: receive locations of the reflective markers tracked by a camera tracking system relative to an optical coordinate system (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0009]: “The method of tracking in this example can be passive or active. In passive tracking, the system can emit infrared radiation (usually through a ring of infrared light emitting diodes, or LED's, mounted around each camera) and passive optical markers can reflect the radiation back to the cameras to allow the markers to be seen by the cameras.”) (Wang, [0009]: “The markers can be small spheres of a pre-defined diameter coated in a reflective coating that may be optimized for the wavelength of infrared radiation.”); and
register locations of the fiducials on the registration frame in the image coordinate system to locations of the reflective markers in the optical coordinate system (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker 1900 and continuously update the orientation of a three-dimensional map in space based on a current position of the fiducial marker 1900, and, based on the orientation of the three-dimensional map, determine an updated orientation of a first light source and a second light source to indicate an updated targeting line and an updated trajectory.”) (Wang, [0009]: “The markers can be small spheres of a pre-defined diameter coated in a reflective coating that may be optimized for the wavelength of infrared radiation.”) (Wang, [0005]-[0006]: “The tracking device may have its own coordinate system which may be different from that of the “image space.” … Thus, the tracking device or reference may be used for spatial recognition to read the coordinates of any point in three-dimensional space and allow accurate tracking of the physical space around the patient. After the two coordinate systems have been established, the image space may be correlated to the physical space through a process known as registration. Registration refers to the coordinate transformation of one space into another.”).
The already provided combination is applicable.
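For illustration only, and not drawn from Wang, Kleiner, or Crawford, the registration discussed above between fiducial locations in the image coordinate system and reflective-marker locations in the optical coordinate system may be sketched as a rigid point-set alignment (Kabsch/SVD); the function and variable names below are hypothetical.

import numpy as np

def register_rigid(points_image, points_optical):
    # Rigid (rotation + translation) alignment mapping image-space fiducial
    # locations onto optical-space marker locations: q ~ R @ p + t.
    p = np.asarray(points_image, dtype=float)    # N x 3, image coordinate system
    q = np.asarray(points_optical, dtype=float)  # N x 3, optical coordinate system
    p_c, q_c = p.mean(axis=0), q.mean(axis=0)    # centroids
    H = (p - p_c).T @ (q - q_c)                  # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_c - R @ p_c
    return R, t

Given at least three non-collinear fiducial/marker correspondences, the returned rotation R and translation t express image-space points in the optical coordinate system; this is one common way such a coordinate registration could be realized.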
Regarding claim 14, Wang/Kleiner/Crawford teaches
after a registration fixture is attached to the cranial structure of the patient (Wang, [0074]: “For example, in a neurosurgical setting, the base component 13 of the system 10 may be attached to a patient's forehead with the targeting area covering the convexity of the cranium.”),
obtain the first radiological patient image of the cranial structure of the patient and the registration fixture along a first plane and to obtain the second radiological patient image of the cranial structure of the patient and the registration fixture along a second plane that is angularly offset to the first plane (Wang, [0112]: “After attaching the reference marker (fiducials or baseplate), two X-rays may be taken 90 degrees apart.”) (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”) (Wang, [0188]: “Accordingly, in at least one embodiment, a first camera and a second camera may be configured to capture image data of the fiducial marker 1900 and a controller may be configured to receive the image data of the fiducial marker 1900 …”) (Wang, [0085]: “FIGS. 5A-5B, the template may include a baseplate 228 with plurality of posts 234 that protrude from the bottom portion 233. … In some cases, the posts 234 themselves may act as registration markers.”).
Examiner notes that, since the posts of the baseplate (registration fixture) can themselves act as registration markers, capturing images of the markers also captures the registration fixture.
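As a purely illustrative sketch, assuming an idealized parallel-projection model that is not described in the cited references, two radiographs taken 90 degrees apart each constrain two of a fiducial's three coordinates, so the two views can be combined into a single 3D location; the axis conventions and names below are assumptions.

import numpy as np

def fiducial_from_orthogonal_views(ap_view_xz, lateral_view_yz):
    # ap_view_xz: (x, z) read off the anterior-posterior radiograph
    # lateral_view_yz: (y, z) read off the lateral radiograph
    x, z_ap = ap_view_xz
    y, z_lat = lateral_view_yz
    z = 0.5 * (z_ap + z_lat)  # the axis visible in both views is averaged
    return np.array([x, y, z])

# Hypothetical readings in millimeters; expected result is roughly [12.0, -7.5, 40.0].
print(fiducial_from_orthogonal_views((12.0, 40.2), (-7.5, 39.8)))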
Regarding claim 15, Wang/Kleiner teaches
a camera configured to output tracking information indicating locations of reflective markers on the surgical robot and locations of reflective markers on a registration fixture attached to the cranial structure of the patient (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0009]: “The method of tracking in this example can be passive or active. In passive tracking, the system can emit infrared radiation (usually through a ring of infrared light emitting diodes, or LED's, mounted around each camera) and passive optical markers can reflect the radiation back to the cameras to allow the markers to be seen by the cameras.”) (Wang, [0009]: “The markers can be small spheres of a pre-defined diameter coated in a reflective coating that may be optimized for the wavelength of infrared radiation.”) (Wang, [0074]: “For example, in a neurosurgical setting, the base component 13 of the system 10 may be attached to a patient's forehead with the targeting area covering the convexity of the cranium.”),
wherein the at least one processor performs operations to obtain the first and second patient images for the patient (Wang, [0163]: “In at least one embodiment, the first camera 1321 may be coupled to the base unit 1330 and configured to capture first image data of anatomical features of the patient at a first location in space. The second camera 1322 may also be coupled to the base unit 1330 and configured to capture second image data of the anatomical features of the patient at the first location in space. The second camera 1322 may also be spaced apart from the first camera 1321 by a predetermined distance to form a stereoscopic camera system.”).
Wang/Kleiner does not explicitly teach tracking pose of the end-effector.
However, Crawford teaches tracking pose of the end-effector based on the tracking information from the camera tracking system ([0048]: “In exemplary embodiments, system 100 can use tracking information collected from each of the marked objects to calculate the orientation and location, for example, of the end-effector 112, the surgical instrument 608 (e.g., positioned in the tube 114 of the end-effector 112)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical robot of Wang/Kleiner to incorporate the surgical robot of Crawford, which provides a robot base, a robot arm coupled to the robot base, and an end-effector coupled to the robot arm, and to track the pose of the end-effector.
One of ordinary skill in the art would have been motivated to make this modification because using such a robot, and being able to track its pose and control its movement, assists a surgeon during an operation (Crawford, [0065]: “In an exemplary operation, robot base 610 may be configured to be in electronic communication with robot arm 604 and end-effector 602 so that surgical robot system 600 may assist a user (for example, a surgeon) in operating on the patient 210.”).
Therefore, the combination of Wang/Kleiner and Crawford teaches
to track pose of the end-effector relative to the cranial structure captured in the first and second patient images based on the tracking information from the camera (Wang, [0173]: “The camera system mounted on the X-ray unit could track a patient reference/fiducial marker”) (Wang, [0009]: “The method of tracking in this example can be passive or active. In passive tracking, the system can emit infrared radiation (usually through a ring of infrared light emitting diodes, or LED's, mounted around each camera) and passive optical markers can reflect the radiation back to the cameras to allow the markers to be seen by the cameras.”) (Crawford, [0048]: “In exemplary embodiments, system 100 can use tracking information collected from each of the marked objects to calculate the orientation and location, for example, of the end-effector 112, the surgical instrument 608 (e.g., positioned in the tube 114 of the end-effector 112)”).
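A minimal sketch, not drawn from the cited references and assuming the tracking camera reports 4 x 4 homogeneous poses, of how the tracked pose of the end-effector markers and the tracked pose of the patient-attached fixture markers could be composed to express the end-effector pose relative to the cranial structure; the names are hypothetical.

import numpy as np

def ee_pose_in_fixture_frame(T_cam_ee, T_cam_fixture):
    # Both inputs are 4 x 4 homogeneous poses reported in the tracking
    # camera's frame; the result expresses the end-effector pose in the
    # frame of the patient-attached registration fixture (and hence, via
    # the image registration, relative to the imaged cranial structure).
    return np.linalg.inv(T_cam_fixture) @ T_cam_ee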
Regarding claim 17, Wang/Kleiner/Crawford teaches
determine a target pose for the end-effector based on the surgical trajectory plan (Crawford, [0048]: “In exemplary embodiments, system 100 can use tracking information collected from each of the marked objects to calculate the orientation and location, for example, of the end-effector 112, the surgical instrument 608 (e.g., positioned in the tube 114 of the end-effector 112)”); and
generate steering information based on the target pose for the surgical trajectory plan and a present tracked pose of the end-effector indicated by the tracking information, the steering information indicating where the end-effector needs to be moved relative to the cranial structure of the patient (Crawford, [0004]: “In addition, the controller may be configured to control the robotic actuator to move the end-effector to a target trajectory relative to the anatomical volume based on co-registering the first and second coordinate systems for the first and second 3D image scans using the blood vessel node.”) (Crawford, [0121]-[0122]: “Logic for navigating and moving the robot 102 to a target trajectory is provided in the method 1100 of FIG. 16. … The robot 102 can then be commanded to move to reach the target.”) (Crawford, [0117]: “During robot movement, if the positions of the tool markers 804 (while the tool 608 is in the guide tube 1014) and the position of the single marker 1018 are detected from the tracking system, and angles/linear positions of each joint are known from encoders, then position and orientation of any section of the robot can be determined.”).
The already provided combination is applicable.
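A minimal sketch, not drawn from the cited references, in which the steering information is taken to be the translational and rotational offset between the currently tracked end-effector pose and the target pose derived from the trajectory plan, both assumed to be 4 x 4 homogeneous transforms; the names are hypothetical.

import numpy as np

def steering_offset(T_current, T_target):
    # Relative transform that would carry the end-effector from its
    # currently tracked pose to the planned target pose.
    delta = np.linalg.inv(T_current) @ T_target
    translation_error = delta[:3, 3]
    # Rotation angle (radians) recovered from the trace of the rotation block.
    cos_theta = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return translation_error, float(np.arccos(cos_theta))

Such an offset could then be consumed by a motion controller to drive the at least one motor until the tracked pose matches the target pose.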
Regarding claim 18, Wang/Kleiner/Crawford teaches
control movement of the at least one motor based on the steering information to guide movement of the end-effector so the end-effector becomes positioned with the target pose relative to the cranial structure of the patient (Crawford, [0004]: “In addition, the controller may be configured to control the robotic actuator to move the end-effector to a target trajectory relative to the anatomical volume based on co-registering the first and second coordinate systems for the first and second 3D image scans using the blood vessel node.”) (Crawford, [0121]-[0122]: “Logic for navigating and moving the robot 102 to a target trajectory is provided in the method 1100 of FIG. 16. … The robot 102 can then be commanded to move to reach the target.”) (Crawford, [0117]: “During robot movement, if the positions of the tool markers 804 (while the tool 608 is in the guide tube 1014) and the position of the single marker 1018 are detected from the tracking system, and angles/linear positions of each joint are known from encoders, then position and orientation of any section of the robot can be determined.”) (Crawford, [0045]: “End-effector 112 may be coupled to the robot arm 104 and controlled by at least one motor.”) (Crawford, [0063]: “Motion control subsystem 506 may be configured to physically move vertical column 312, upper arm 306, lower arm 308, or rotate end-effector 310. The physical movement may be conducted through the use of one or more motors 510-518.”).
The already provided combination is applicable.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Liu et al. (“2D–3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery”)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEIN JEONG whose telephone number is (703) 756-1549. The examiner can normally be reached M-F 9am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee Chavez can be reached on (571) 270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HEIN JEONG/Examiner, Art Unit 2147
/RENEE D CHAVEZ/Supervisory Patent Examiner, Art Unit 2186