DETAILED ACTION
Claims 1-2, 4-15, and 17-22 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/27/2025 has been entered.
Response to Arguments
Following Applicant's arguments and amendments, and in light of the 2019 Patent Eligibility Guidance, the 101 rejection of the claims is Withdrawn.
See the allowability section below.
Following Applicant's arguments and amendments, the 102 rejection of the claims is Withdrawn.
Based on the amendments made, the 102 rejection is superseded by the 103 rejection set forth below, which is based on Applicant's amendment.
Following Applicant's arguments and amendments, the 103 rejection of the claims is Maintained.
Applicant’s Argument: Applicant’s arguments directed to the 103 rejection are based on newly amended subject matter.
Examiner’s Response: All arguments are addressed in the 103 rejection of the claims below.
Therefore, the 103 rejection is Maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-8, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Crawford et al. (WO 2013192598) (hereinafter “Crawford”) in view of Maurer, Jr. et al. (US Patent 6,560,354) (hereinafter “Maurer”).
Regarding claim 1, Crawford teaches a system for tracking one or more surgical landmarks comprising: an imaging device that captures images of a surgical site ([00134], The enclosed surgical robot system may include an optical tracking system comprised of cameras; [00445], the imaging device is positioned to allow the surgeon access to the site and so that the robot can see all the markings around the site and the possible trajectories of the surgical path);
a robotic system including at least one robotic arm configured to position the imaging device at one or more positions and orientations for capturing the images ([00505] “In some embodiments, the robot system 1 includes at least one mounted camera. For example, FIG. 81 illustrates a perspective view of a robot system including a camera arm in accordance with one embodiment of the invention. In some embodiments, to overcome issues with line of sight, it is possible to mount cameras for tracking the patient 18 and robot 15 on an arm 8210 extending from the robot … Further, in some embodiments, the joints 8210a, 8210b can be used to sense the current position of the cameras (i.e. the position of the camera arm 8200)”, where Figure 81 shows an example position and orientation of the cameras and [00220] the cameras acquire imaging data);
a processor ([00205] and Figure 34, The system can include one or more processors);
and a memory storing data for processing by the processor, the data, when processed, causing the processor to ([00205] and Figure 34, The system can further include a memory on a system bus that couples the memory and processor, ([00220]) where the memory can receive image data from cameras):
receive a first image depicting one or more surgical landmarks ([00355] and Figure 36, The surgical robot system can utilize an array of trackers and a surveillance marker that are attached to a patient and where a medical image is taken with the trackers in place; ([00177]) the medical image may include a CT scan or X-ray image);
control the robotic arm to continuously reposition the imaging device to capture a second image of a plurality of second images of the one or more surgical landmarks at an expected position based on the first image, the second image captured after the first image ([00356], Frames of real-time data during a procedure are received by the surgical robotic system that includes the positions of the array of trackers and surveillance marker, ([00220]) where the frames may include imaging data acquired by one or more cameras and ([00344]) a first medical image can occur before the procedure; [00144], the robot is actuated; [00177], a trajectory based on the images is captured; [00417] the images include the landmarks);
detect movement of at least one surgical landmark of the one or more surgical landmarks based on a comparison of the first image with the second image captured at the expected position ([00356] and Figure 36, The distances between the array of trackers and the surveillance marker are updated in real-time and compared to for example 3D distances stored in memory from when the original distance between the array of trackers and the surveillance marker was captured. This comparison may cause a detection of a shift in the array of trackers or the surveillance marker and for example may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan);
and generate a notification for an operating room personnel when the detected movement meets or exceeds a movement threshold ([00356], If there is a shift in the array of trackers or the surveillance marker, or if the surveillance marker offset exceeds a pre-set amount, a notification can be issued to alert an agent of a loss in movement accuracy or the surgical robot system can be halted),
wherein the notification includes at least one of an audio notification and a visual notification ([00149] “In some embodiments, the robot 15 moves into a selected position, ready for the surgeon to deliver a selected surgical instrument 35, such as, for example and without limitation, a conventional screw, a biopsy needle 8110, and the like. In some embodiments, as the surgeon works, if the surgeon inadvertently forces the end-effectuator 30 and/or surgical instrument 35 off of the desired trajectory, then the system 1 can be configured to provide an audible warning and/or a visual warning. For example, in some embodiments, the system 1 can produce audible beeps and/or display a warning message on the display means 29, such as "Warning: Off Trajectory," while also displaying the axes for which an acceptable tolerance has been exceeded”).
wherein the one or more surgical landmarks comprises … a portion of each of the corresponding elements. ([00523]-[00525], a landmark is used in the form of an organ)
Crawford does not explicitly teach wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements.
Maurer teaches wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements (Column 2, Lines 1-26, markers are disposed on the skin and are also implanted under the skin on the bones).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Crawford with Maurer, as both references deal with performing surgery, in order to implement a system that provides markers disposed on anatomical elements and implanted in them. Maurer would modify Crawford by providing markers on anatomical elements and implanted in them. The benefit of doing so is that these points and surfaces can be easily and accurately acquired in physical space using 3-D probes (e.g., articulated mechanical, electromagnetic, ultrasonic, optical), stereo video cameras, and/or laser range-finders (Maurer, Column 2, Lines 22-26).
Regarding claim 2, the combination of Crawford and Maurer teaches the limitations of claim 1. Crawford also teaches wherein the imaging device obtains images of a patient, including the second image, free of ionizing radiation ([00134], Cameras used in an optical tracking system that operate based on light can exclude ionizing radiation).
Regarding claim 4, the combination of Crawford and Maurer teaches the limitations of claim 1. Crawford also teaches, wherein detecting the movement of the at least one surgical landmark comprises comparing a position of the at least one surgical landmark in the first image to the position of the at least one surgical landmark in the second image ([00356], The distances between the array of trackers and the surveillance marker are updated in real-time and compared to for example distances stored in memory from when the surveillance marker location is originally set, where a shift in the array of trackers or surveillance marker can be detected that for example may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan).
Regarding claim 5, the combination of Crawford and Maurer teaches the limitations of claim 2. Crawford also teaches wherein the first image is obtained preoperatively and the second image is obtained intraoperatively ([00355] and [00344], A first image can be a medical image scan, which can occur before the operation; ([00356] and [00220]) A second image as a frame of imaging data is obtained during the procedure).
Regarding claim 6, the combination of Crawford and Maurer teaches the limitations of claim 1. Crawford also teaches wherein the imaging device is a second imaging device and the system further comprises a first imaging device, wherein the first image is obtained from the first imaging device and the second image is obtained from the second imaging device ([00355] and [00177], A first image can be a medical image obtained by a CT scanner or X-ray machine; ([00355] and [00220]) A second image can be obtained by a camera in an optical tracking system).
Regarding claim 7, the combination of Crawford and Maurer teaches the limitations of claim 6. Crawford also teaches wherein the first imaging device uses a first imaging modality and the second imaging device uses a second imaging modality different from the first imaging modality ([00355] and [00177], A first image can be a medical image obtained through a 3D anatomical scan such as by a CT scanner or X-ray machine, ([00356], [00134], and [00220]) which is a different modality than a second image that can be obtained by a camera in an optical tracking system that operates based on light and which also can utilize triangulation methods such as stereo-photogrammetry).
Regarding claim 8, the combination of Crawford and Maurer teaches the limitations of claim 6. Crawford also teaches wherein the first image comprises a three-dimensional representation of the one or more surgical landmarks, wherein the first imaging device obtains images using ionizing radiation ([00355], An array of trackers and a surveillance marker is attached to a patient and a medical image is taken with the trackers in place, ([00177]) where the medical image may include a 3D anatomical scan, such as a X-ray, which can use ionizing radiation).
Regarding claim 12, the combination of Crawford and Maurer teaches the limitations of claim 1. Crawford also teaches wherein the threshold is based on a predicted position of the at least one surgical landmark after a surgical step is performed ([00357] and Figure 36, The surgical robot system may have an additional array of trackers on a robot arm, where if the predicted movement based on encoder counts and the tracked 3D position of the array of trackers differ after an operation of known counts by more than a predetermined threshold, then an agent is alerted of an operational issue or malfunction).
Regarding claim 13, the combination of Crawford and Maurer teaches the limitations of claim 1. Crawford also teaches, wherein the second image is received in real-time ([00356], Frames of real-time data during a procedure are received by the surgical robotic system that includes the positions of the array of trackers and surveillance marker, ([00220]) where the frames may include imaging data acquired by one or more cameras).
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Crawford in view of Maurer, and in further view of Sarvestani et al. (US PGPub 20240315778) (hereinafter “Sarvestani”).
Regarding claim 9, Crawford and Maurer teach the system as recited in claim 8. Crawford and Maurer do not specifically teach, however Sarvestani teaches wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: update at least a portion of the three-dimensional representation based on the second image ([0090]-[0094] and Figures 1 and 6, A surgical assistance system can use an intracorporeal image of an endoscope that is correlated with 3D image data from, for example, a CT image of the patient, such that three landmarks are detected for each image, and the preoperative 3D image is transformed based on the endoscope location, which is outputted on a display; ([0022]) The surgical assistance system can include a control unit comprising a processor for processing and a storage unit that can provide 3D image data).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that at least a portion of the three-dimensional representation is updated based on the second image, as conceptually seen from the teaching of Sarvestani, into that of Crawford and Maurer. Motivation to do so would have been to provide a safer way to access the surgical area during an operation (Sarvestani, [0008]).
Regarding claim 10, Crawford and Maurer teach the system as recited in claim 9. Crawford and Maurer do not specifically teach, however Sarvestani teaches wherein the second image comprises a two-dimensional representation of the one or more surgical landmarks ([0093] and [0068], The intracorporeal image where three landmarks are detected may be two-dimensional).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the second image comprises a two-dimensional representation of the one or more surgical landmarks, as conceptually seen from the teaching of Sarvestani, into that of Crawford and Maurer. Motivation to do so would have been to make it easier for a user to recognize the contents of the image when it is put on a display.
Regarding claim 11, Crawford and Maurer teach the system as recited in claim 8. Crawford and Maurer do not specifically teach, however Sarvestani teaches wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: update the three-dimensional representation of the at least one surgical landmark based on the detected movement of the at least one surgical landmark ([0095]-[0098] and Figures 1 and 6, The surgical assistance system can further include detection of movement of the endoscope, where the 3D image is then updated with a new virtual position and orientation of the endoscope, which results in the display of the 3D image being updated and may also cause an updated detection of image landmarks; ([0075]) Detected movement of the endoscope can be determined based on image analysis determining movement of anatomical structures; ([0022]) The surgical assistance system can include a control unit comprising a processor for processing and a storage unit that can provide 3D image data).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the three-dimensional representation of the at least one surgical landmark is updated based on the detected movement of the surgical landmark, as conceptually seen from the teaching of Sarvestani, into that of Crawford and Maurer. Motivation to do so would have been to determine the magnitude of movement with increased accuracy, which further allows the updated image display to provide a safer way to access the surgical area during an operation (Sarvestani, [0008] and [0075]).
Claims 14-15 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Crawford in view of Maurer, in further view of Sarvestani, and in further view of Manni et al., “Multi-view 3D skin feature recognition and localization for patient tracking in spinal surgery applications” (hereinafter “Manni”).
Regarding claim 14, Crawford teaches a device for tracking an anatomical element comprising: a robotic system including at least one robotic arm configured to position a first imaging device and a second imaging device at one or more positions and orientations for capturing a first image and a second image of a surgical site ([00505] “In some embodiments, the robot system 1 includes at least one mounted camera. For example, FIG. 81 illustrates a perspective view of a robot system including a camera arm in accordance with one embodiment of the invention. In some embodiments, to overcome issues with line of sight, it is possible to mount cameras for tracking the patient 18 and robot 15 on an arm 8210 extending from the robot … Further, in some embodiments, the joints 8210a, 8210b can be used to sense the current position of the cameras (i.e. the position of the camera arm 8200)”, where Figure 81 shows an example position and orientation of the cameras and [00220] the cameras acquire imaging data; [00492] Additionally the camera arm includes multiple cameras, thus having a first and second imaging device; ([00134], The enclosed surgical robot system may include an optical tracking system comprised of cameras; [00445], the imaging device is positioned to allow the surgeon access to the site and so that the robot can see all the markings around the site and the possible trajectories of the surgical path);
a processor ([00205] and Figure 34, The surgical robot system can include one or more processors);
and a memory storing data for processing by the processor, the data, when processed, causing the processor to: ([00205] and Figure 34, The system can further include a memory on a system bus that couples the memory and processor, ([00220]) where the memory can receive image data from cameras);
receive the first image from the first imaging device depicting one or more surgical landmarks ([00355] and Figure 36, The surgical robot system can utilize an array of trackers and a surveillance marker that are attached to a patient and where a medical image is taken with the trackers in place; ([00177]) the medical image may include a CT scan or X-ray image; [00356] Further, “In some embodiments, in response to a user intentionally shifting the tracker array 3610 or the surveillance marker 710 to a new position, execution of the control software application can permit overwriting a set of one or more stored distances with new values for comparison to subsequent frames”, [00220] where the frames may include imaging data acquired by one or more cameras, thus the camera can also be used as a first imaging device to acquire a first image of one or more surgical landmarks);
control the robotic arm to continuously reposition the second imaging device to capture the second image of a plurality of images depicting the one or more surgical landmarks at an expected position based on the first image, the second image captured after the first image ([00356], Frames of real-time data during a procedure are received by the surgical robotic system that includes the positions of the array of trackers and surveillance marker, ([00220]) where the frames may include imaging data acquired by one or more cameras and ([00344]) a first medical image can occur before the procedure; [00144], the robot is actuated; [00177], a trajectory based on the images is captured; [00417] the images include the landmarks);
detect movement of at least one surgical landmark of the one or more surgical landmarks based on the first image and the second image captured at the expected position ([00356] and Figure 36, The distances between the array of trackers and the surveillance marker are updated in real-time and compared to for example 3D distances stored in memory from when the original distance between the array of trackers and the surveillance marker was captured. This comparison may cause a detection of a shift in the array of trackers or the surveillance marker and for example may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan),
Crawford does not explicitly teach wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements.
Maurer teaches wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements (Column 2, Lines 1-26, markers are disposed on the skin and are also implanted under the skin on the bones).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Crawford with Maurer, as both references deal with performing surgery, in order to implement a system that provides markers disposed on anatomical elements and implanted in them. Maurer would modify Crawford by providing markers on anatomical elements and implanted in them. The benefit of doing so is that these points and surfaces can be easily and accurately acquired in physical space using 3-D probes (e.g., articulated mechanical, electromagnetic, ultrasonic, optical), stereo video cameras, and/or laser range-finders (Maurer, Column 2, Lines 22-26).
Crawford and Maurer does not specifically teach, however Sarvestani teaches and update the first image based on the detected movement ([0095]-[0098] and Figures 1 and 6, The surgical assistance system can include detection of movement of the endoscope, where the 3D image is then updated with a new virtual position and orientation of the endoscope, which results in the display of the 3D image being updated and may also cause an updated detection of image landmarks; ([0075]) Detected movement of the endoscope can be determined based on image analysis determining movement of anatomical structures).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the first image is updated based on the detected movement, as conceptually seen from the teaching of Sarvestani, into that of Crawford and Maurer. Motivation to do so would have been to update the image display based on an accurately determined movement which provides a safer way to access the surgical area during an operation (Sarvestani, [0008] and [0075]).
Crawford, Maurer and Sarvestani do not specifically teach, however Manni teaches wherein detect movement of the at least one surgical landmark comprises using feature recognition on the first image and the second image (Page 1, Abstract, “this study proposes a new marker-free tracking framework based on skin feature recognition” (Page 2, Background), which involves motion tracking natural features on the skin using image analysis; (Page 3, Background) Further, “The well-known feature detection algorithms, Maximally Stable Extremal Regions (MSER), and Speeded Up Robust Features (SURF)” are (Pages 4 and 6, Results) used on an image pair as shown in Figure 1, thus a first and second image).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the detection of movement of the surgical landmark comprises using feature recognition on the first image and second image, as conceptually seen from the teaching of Manni, into that of Crawford, Maurer and Sarvestani. Motivation to do so would have been to improve the reliability of tracking during a surgical procedure as well as eliminate the risk of losing sight of the tracking markers (Page 3, Background).
Regarding claim 15, Crawford, Maurer, Sarvestani and Manni teach the device as recited in claim 14. Crawford also teaches wherein detecting the movement of the at least one surgical landmark comprises comparing a position of the at least one surgical landmark in the first image with a position of the at least one surgical landmark in the second image ([00356], The distances between the array of trackers and the surveillance marker are updated in real-time and compared to for example distances stored in memory from when the surveillance marker location is originally set, where a shift in the array of trackers or surveillance marker can be detected that for example may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan).
Regarding claim 17, Crawford, Maurer, Sarvestani and Manni teach the device as recited in claim 14. Crawford also teaches wherein the first image is at least one of a two-dimensional or three-dimensional representation of the one or more surgical landmarks and the second image is at least one of a two-dimensional or three-dimensional representation of the one or more surgical landmarks, wherein the first image is obtained from a first imaging device using a first imaging modality and the second image is obtained from a second imaging device using a second imaging modality ([00355], An array of trackers and a surveillance marker is attached to a patient and a medical image is taken with the trackers in place, ([00177]) where the medical image may be obtained through a 3D anatomical scan such as by a CT scanner or X-ray machine; [00356], Frames of real-time data during a procedure are received by the surgical robotic system that includes the positions of the array of trackers and surveillance marker, where 3D vector distances are calculated using the frames of data and ([00220]) the frames may include imaging data acquired by one or more cameras. ([00134] and [00220]) Further, the cameras used to acquire imaging data in an optical tracking system can operate based on light and the system can also use triangulation methods such as stereo-photogrammetry).
Regarding claim 18, Crawford, Maurer, Sarvestani and Manni teach the device as recited in claim 17. Crawford also teaches wherein the first imaging modality uses ionizing radiation and the second imaging modality is free of ionizing radiation ([00355] and [00177], A medical image can be obtained through a 3D anatomical scan such as an X-ray, which can use ionizing radiation; ([00134]) Cameras used in an optical tracking system that operate based on light can exclude ionizing radiation).
Regarding claim 19, Crawford, Maurer, Sarvestani and Manni teach the device as recited in claim 14. Crawford and Maurer do not specifically teach, however Sarvestani teaches wherein updating the first image occurs when the detected movement meets or exceeds a movement threshold ([0095]-[0100] and Figures 1 and 6, The surgical assistance system can include detection of movement of the endoscope, where the 3D image is then updated with a new virtual position and orientation of the endoscope. This results in the display of the 3D image being updated and additionally may cause a re-registration of image landmarks, where the 3D image is further updated; ([0041]) The re-registration of the landmarks can be performed based on a predetermined movement distance or a sum of movements).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the first image is updated when the detected movement meets or exceeds a movement threshold, as conceptually seen from the teaching of Sarvestani, into that of Crawford and Maurer. Motivation to do so would have been to further increase the accuracy of the correlation between the three-dimensional image and the intracorporeal image (Sarvestani, [0041]).
Claims 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Crawford in view of Maurer, and in further view of Crawford et al. (US PGPub 20190021795) (hereinafter “Crawford II”).
Regarding claim 20, Crawford teaches a system for tracking one or more surgical landmarks comprising: a first imaging device using a first imaging modality ([00177], The enclosed surgical robot can use a medical image obtained through a 3D anatomical scan such as by a CT scanner or X-ray machine);
a second imaging device using a second imaging modality ([00134] and [00220], The surgical robot system may obtain a second image from a camera in an optical tracking system that operates based on light and which also can use triangulation methods such as stereo-photogrammetry);
a robotic system including at least one robotic arm configured to position the first imaging device and the second imaging device at one or more positions and orientations for capturing images ([00505] “In some embodiments, the robot system 1 includes at least one mounted camera. For example, FIG. 81 illustrates a perspective view of a robot system including a camera arm in accordance with one embodiment of the invention. In some embodiments, to overcome issues with line of sight, it is possible to mount cameras for tracking the patient 18 and robot 15 on an arm 8210 extending from the robot … Further, in some embodiments, the joints 8210a, 8210b can be used to sense the current position of the cameras (i.e. the position of the camera arm 8200)”, where Figure 81 shows an example position and orientation of the cameras and [00220] the cameras acquire imaging data; [00492] Additionally the camera arm includes multiple cameras, thus having a first and second imaging device);
a processor ([00205] and Figure 34, The system can include one or more processors);
a memory storing data for processing by the processor, the data, when processed, causing the processor to ([00205] and Figure 34, The system can further include a memory on a system bus that couples the memory and processor, ([00220]) where the memory can receive image data from cameras):
receive a first image from the first imaging device, the first image depicting one or more surgical landmarks ([00355] and Figure 36, The surgical robot system can utilize an array of trackers and a surveillance marker that are attached to a patient and where a medical image is taken with the trackers in place; ([00177]) the medical image may include a CT scan or X-ray image; [00356] Further, “In some embodiments, in response to a user intentionally shifting the tracker array 3610 or the surveillance marker 710 to a new position, execution of the control software application can permit overwriting a set of one or more stored distances with new values for comparison to subsequent frames”, [00220] where the frames may include imaging data acquired by one or more cameras, thus the camera can also be used as a first imaging device to acquire a first image of one or more surgical landmarks);
control the robotic arm to continuously reposition the second imaging device to capture a second image of a plurality of second images from the second imaging device at an expected position based on the first image, the second image depicting the one or more surgical landmarks ([00356], Frames of real-time data during a procedure are received by the surgical robotic system that includes the positions of the array of trackers and surveillance marker, ([00220]) where the frames may include imaging data acquired by one or more cameras ([00344]) a first medical image can occur before the procedure; [00144], the robot is actuated; [00177], a trajectory based on the images is captured; [00417] the images include the landmarks);
detect movement of at least one surgical landmark of the one or more surgical landmarks based on the first image and the second image ([00356] and Figure 36, The distances between the array of trackers and the surveillance marker are updated in real-time and compared to for example 3D distances stored in memory from when the original distance between the array of trackers and the surveillance marker was captured. This comparison may cause a detection of a shift in the array of trackers or the surveillance marker and for example may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan);
and generate a notification when the detected movement meets a threshold ([00356], If there is a shift in the array of trackers or the surveillance marker, or if the surveillance marker offset exceeds a pre-set amount, a notification can be issued to alert an agent of a loss in movement accuracy or the surgical robot system can be halted).
Crawford does not explicitly teach wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements.
Maurer teaches wherein the one or more surgical landmarks comprises a reference marker disposed on corresponding anatomical elements, one or more implants implanted on the corresponding anatomical elements (Column 2 Lines 1-26, markers are disposed on the skin and they are also implanted under the skin on the bones).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Crawford with Maurer, as both references deal with performing surgery, in order to implement a system that provides markers disposed on anatomical elements and implanted in them. Maurer would modify Crawford by providing markers disposed on anatomical elements and implanted in them. The benefit of doing so is that these points and surfaces can be easily and accurately acquired in physical space using 3-D probes (e.g., articulated mechanical, electromagnetic, ultrasonic, optical), stereo video cameras, and/or laser range-finders (Maurer Column 2 Lines 22-26).
Crawford and Maurer do not specifically teach, however Crawford II teaches, wherein the notification specifies which surgical landmark of the at least one surgical landmark of the one or more surgical landmarks has moved ([0354] “The surveillance marker error gauge 762 indicates the distance that the patient reference has moved in relation to the surveillance marker”, where [0374] “The system alerts the operator of errors through pop-up messages”, which includes “Surveillance Marker Moved” indicating that “the surveillance marker has moved beyond its safety-critical limit in relation to the Dynamic Reference Base”, thus indicating which surgical landmark has moved).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to add that the notification specifies which surgical landmark has moved, as conceptually seen from the teaching of Crawford II, into that of Crawford and Maurer. Motivation to do so would have been to allow the user to more easily remedy a situation of inaccurate navigation by identifying the cause.
Regarding claim 21, Crawford, Maurer and Crawford II teach the system as recited in claim 20. Crawford also teaches wherein detecting the movement of the at least one surgical landmark comprises comparing a position of the at least one surgical landmark in the first image to the position of the at least one surgical landmark in the second image ([00356], The distances between the array of trackers and the surveillance marker are updated in real-time and compared to, for example, distances stored in memory from when the surveillance marker location is originally set, where a shift in the array of trackers or surveillance marker can be detected that, for example, may cause a loss of movement accuracy; [00355] An example loss of movement accuracy can be due to the array of trackers moving position from where it was during a medical image scan; [00523]-[00525] movement is seen in ultrasound images).
Regarding claim 22, Crawford, Maurer and Crawford II teach the system as recited in claim 20. Crawford also teaches wherein the threshold is based on a predicted position of the at least one surgical landmark after a surgical step is performed ([00357] and Figure 36, The surgical robot system may have an additional array of trackers on a robot arm, where if the predicted movement based on encoder counts and the tracked 3D position of the array of trackers differ after an operation of known counts by more than a predetermined threshold, then an agent is alerted of an operational issue or malfunction).
Allowable Subject Matter
The 101 rejection of claims 1-2, 4-15, and 17-22 is withdrawn based on the amendments filed 10/27/2025. The limitations include the controlling of the robotic arm to continually capture images and detect movement based on the capturing of images by the moving robotic arm, in combination with all of the remaining limitations. This falls under the streamlined analysis of MPEP 2106.06, where a robotic arm with a control system is determined to be eligible subject matter. The claims would be allowable if rewritten to overcome the 103 rejection of the claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Labadie et al. USPPN 2005/0228256: Also teaches the use of landmarks attached to the skin or implanted to be used in image guided position feedback.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL COCCHI whose telephone number is (469)295-9079. The examiner can normally be reached 7:15 am - 5:15 pm CT Monday - Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at 571-272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL EDWARD COCCHI/Primary Examiner, Art Unit 2188