DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 12 recites the limitation “a 3D image of the subject” in line 6, which renders the claim unclear. It is unclear whether this 3D image is the same as the 3D image obtained earlier in the claim in line 2.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-10 and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lang (US20210267691A1).
Regarding Claim 1, Lang discloses a method (“Devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display are disclosed.” [Abstract]) comprising:
positioning, by one or more processors, a 3D image of a subject relative to a frame of reference corresponding to a medical image of the subject (“the optical head mount display uses a computer graphics viewing pipeline that consists of the following steps to display 3D objects or 2D objects positioned in 3D space or other computer generated objects and models “ [0108], “When images of the patient are superimposed onto live data seen through the optical head mounted display, in many embodiments image segmentation can be desirable… The pre-operative 40 or intra-operative 41 imaging study can be segmented 42, extracting, for example, surfaces, volumes or key features. An optional 3D reconstruction or 3D rendering 43 can be generated. The pre-operative 40 or intra-operative 41 imaging study and any 3D reconstruction or 3D rendering 43 can be registered in a common coordinate system 44. The pre-operative 40 or intra-operative 41 imaging study and any 3D reconstruction or 3D rendering 43 can be used for generating a virtual surgical plan 45. “ [0194], “the method comprises…, displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display. The virtual surgical plan can be displayed or projected onto the patient's physical joint based at least in part on coordinates of the predetermined position of the virtual surgical plan.” [1449], Lang uses medical images to generate 3D images of a subject relative to the surgical space (frame of reference) and superimposes the generated images onto the surgical space to guide a procedure);
receiving, by the one or more processors, tracking data of a surgical instrument (“obtaining one or more intra-operative optical measurements using one or more optical markers, registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system” [1449], “One or more 2D geometric patterns, alphabetic, numeric, alphanumeric, and other codes or patterns including bar codes and QR codes, optionally with color and/or black and white coding, included in, affixed to or integrated into an optical marker can be used to determine the orientation and/or alignment of the optical marker, which can, for example, be affixed to or integrated into an anatomic landmark, a surgical site, a surgical alternation, e.g. a cut bone surface or a reamed bone surface, a surgical instrument and/or one or more implant components including trial implants.” [0401]);
determining, by the one or more processors based on the tracking data, a location of the surgical instrument relative to a location of interest within the frame of reference (“One or more 2D geometric patterns, alphabetic, numeric, alphanumeric, and other codes or patterns including bar codes and QR codes, optionally with color and/or black and white coding, included in, affixed to or integrated into an optical marker can be used to determine the orientation and/or alignment of the optical marker, which can, for example, be affixed to or integrated into an anatomic landmark, a surgical site, a surgical alternation, e.g. a cut bone surface or a reamed bone surface, a surgical instrument and/or one or more implant components including trial implants.” [0401]);
controlling, by the one or more processors, the surgical instrument to perform a procedure at the location of interest (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument. A backward movement of the head can be captured by the IMU and can be translated into a backward movement of the robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148]);
evaluating, by the one or more processors, a parameter of the procedure based on a threshold for the procedure (“If there are differences between the physical change in the physical surgically altered tissue and the virtually intended change in the virtually surgically altered tissue or if there are differences in the appearance, properties and/or characteristics of the physical surgically altered tissue and the virtually altered tissue, e.g. in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options.” [0719]);
and causing, by the one or more processors, the surgical instrument to terminate the procedure responsive to the parameter satisfying the threshold (“A binary, e.g. yes, no, system can be used for triggering an alert that the image and/or video capture system and/or the OHMD display are operating outside a clinically acceptable performance range, e.g. exceeding certain view angles, exceeding or being below certain distances to the target anatomy, or exceeding an acceptable movement speed.” [0440], Table 3 in [0147], describes turning off the surgical instrument as a command executed by motion tracking.)
Regarding Claim 2, Lang discloses further comprising controlling, by the one or more processors, a position of the surgical instrument based on the tracking data and at least one of a target movement of the surgical instrument or a target distance between the location of the surgical instrument and the location of interest (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument. A backward movement of the head can be captured by the IMU and can be translated into a backward movement of the robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148], “the surgical instruments displayed in the virtual data can be representative of the physical surgical instruments used in the live patient and can have the same projected dimensions and shape as the physical surgical instruments. As indicated in Table 11, the virtual view of the virtual surgical instrument or instruments can, for example, indicate the predetermined position, location, rotation, orientation, alignment, direction of a surgical instrument. When the physical surgical instrument is aligned with and/or superimposed onto the virtual representation of the virtual surgical instrument, the surgical step can optionally be executed or the surgeon can elect to make adjustments to the position, location, rotation, orientation, alignment, direction of a physical surgical instrument relative to the virtual surgical instrument,” [1354]).
Regarding Claim 3, Lang discloses the location of interest is on a surface of a head of the subject (“Similarly, in other surgical procedures, e.g. knee replacement, hip replacement, shoulder replacement, ACL repair and reconstruction, cranial, maxillofacial and brain surgery, the physical position of any drill, pin, instrument, implant, device or device component can be determined using any of the techniques described in the specification and any deviations or differences between the physical and the intended virtual placement/position/and/or orientation can be determined.” [0792]).
Regarding Claim 4, Lang discloses further comprising: transforming, by the one or more processors, the tracking data of the surgical instrument relative to the frame of reference to generate transformed tracking data; and rendering, by one or more processors, the transformed tracking data within a render of the medical image and the 3D image (“The optical head mount display uses a computer graphics viewing pipeline that consists of the following steps to display 3D objects or 2D objects positioned in 3D space or other computer generated objects and models FIG. 16B: 1. Registration 2. View projection” [0108], “the method comprises registering the patient's surgical site and one or more optical head mounted display worn by a surgeon or surgical assistant in a common coordinate system, obtaining one or more intra-operative optical measurements using one or more optical markers, registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system, developing a virtual surgical plan based on the one or more intra-operative optical measurements, and displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display.” [1449]).
Regarding Claim 5, Lang discloses further comprising: generating, by the one or more processors, movement instructions for the surgical instrument based on the medical image and the location of interest; and transmitting, by the one or more processors, the movement instructions to the surgical instrument (“registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system, developing a virtual surgical plan based on the one or more intra-operative optical measurements, and displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display” [1449], “the virtual surgical plan includes a virtual surgical instrument displayed or projected in a desired or predetermined position, orientation, alignment and/or direction of movement.” [1466]).
Regarding Claim 6, Lang discloses further comprising displaying a highlighted region for the location of interest within a render of the medical image (“The surgeon can also mark sensitive tissue, e.g. nerves, brain structure, vessels etc., that the surgeon wants to preserve or protect during the surgery. Such sensitive structure(s) can be highlighted, for example using different colors, when the virtual surgical plan and the related anatomic data or pathologic tissue information is being transmitted to or displayed by the OHMD.” [0926]).
Regarding Claim 7, Lang discloses further comprising determining, by the one or more processors, a distance of the subject represented in the medical image from an image capture device to detect the 3D image (“the OHMD can transmit data back to a computer, a server or a workstation. Such data can include, but are not limited to: Distance data, e.g. parallax data generated by two or more image and/or video capture systems evaluating changes in distance between the OHMD and a surgical field or an object” [0081,0088]).
Regarding Claim 8, Lang discloses further comprising causing, by the one or more processors, the surgical instrument to terminate energy emission responsive to (2) movement of the subject exceeding a movement threshold (“If there are differences between the physical change in the physical surgically altered tissue and the virtually intended change in the virtually surgically altered tissue or if there are differences in the appearance, properties and/or characteristics of the physical surgically altered tissue and the virtually altered tissue, e.g. in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options.” [0719], Table 3 discloses adjusting “intensity, speed, energy deposed of surgical instrument” in [0147]).
Regarding Claim 10, Lang discloses further comprising: applying, by the one or more processors using a robotic arm coupled with the surgical instrument, a force to keep the surgical instrument in contact with a surface of the subject; and adjusting, by the one or more processors, the applied force based on the tracking data (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148]).
Regarding Claim 12, Lang discloses a system (“Devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display are disclosed.” [Abstract]) comprising:
a 3D camera configured to detect a 3D image of a subject (“The pre-operative 40 or intra-operative 41 imaging study can be segmented 42, extracting, for example, surfaces, volumes or key features. An optional 3D reconstruction or 3D rendering 43 can be generated. The pre-operative 40 or intra-operative 41 imaging study and any 3D reconstruction or 3D rendering 43 can be registered in a common coordinate system 44… the OHMD 48 is configured to use a built in camera or image capture or video capture system 50 to optionally detect and/or measure the position and/or orientation and/or alignment of one or more optical markers 51, which can be used for the coordinate measurements 52, which can be part of the intra-operative measurements 47.” [0194]);
a surgical instrument configured to apply a procedure to a location of interest on the subject (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument. A backward movement of the head can be captured by the IMU and can be translated into a backward movement of the robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148]);
and one or more processors configured to: position a 3D image of a subject relative to a frame of reference corresponding to a medical image of the subject (“the optical head mount display uses a computer graphics viewing pipeline that consists of the following steps to display 3D objects or 2D objects positioned in 3D space or other computer generated objects and models “ [0108], “When images of the patient are superimposed onto live data seen through the optical head mounted display, in many embodiments image segmentation can be desirable… The pre-operative 40 or intra-operative 41 imaging study can be segmented 42, extracting, for example, surfaces, volumes or key features. An optional 3D reconstruction or 3D rendering 43 can be generated. The pre-operative 40 or intra-operative 41 imaging study and any 3D reconstruction or 3D rendering 43 can be registered in a common coordinate system 44. The pre-operative 40 or intra-operative 41 imaging study and any 3D reconstruction or 3D rendering 43 can be used for generating a virtual surgical plan 45. “ [0194], “the method comprises…, displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display. The virtual surgical plan can be displayed or projected onto the patient's physical joint based at least in part on coordinates of the predetermined position of the virtual surgical plan.” [1449], Lang uses medical images to generate 3D images of a subject relative to the surgical space (frame of reference) and superimposes the generated images onto the surgical space to guide a procedure);
receive tracking data of a surgical instrument (“obtaining one or more intra-operative optical measurements using one or more optical markers, registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system” [1449], “One or more 2D geometric patterns, alphabetic, numeric, alphanumeric, and other codes or patterns including bar codes and QR codes, optionally with color and/or black and white coding, included in, affixed to or integrated into an optical marker can be used to determine the orientation and/or alignment of the optical marker, which can, for example, be affixed to or integrated into an anatomic landmark, a surgical site, a surgical alternation, e.g. a cut bone surface or a reamed bone surface, a surgical instrument and/or one or more implant components including trial implants.” [0401]);
determine, based on the tracking data, a location of the surgical instrument relative to a location of interest within the frame of reference (“One or more 2D geometric patterns, alphabetic, numeric, alphanumeric, and other codes or patterns including bar codes and QR codes, optionally with color and/or black and white coding, included in, affixed to or integrated into an optical marker can be used to determine the orientation and/or alignment of the optical marker, which can, for example, be affixed to or integrated into an anatomic landmark, a surgical site, a surgical alternation, e.g. a cut bone surface or a reamed bone surface, a surgical instrument and/or one or more implant components including trial implants.” [0401]);
control the surgical instrument to perform a procedure at the location of interest (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument. A backward movement of the head can be captured by the IMU and can be translated into a backward movement of the robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148]);
evaluate a parameter of the procedure based on a threshold for the procedure (“If there are differences between the physical change in the physical surgically altered tissue and the virtually intended change in the virtually surgically altered tissue or if there are differences in the appearance, properties and/or characteristics of the physical surgically altered tissue and the virtually altered tissue, e.g. in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options.” [0719]);
and cause the surgical instrument to terminate the procedure responsive to the parameter satisfying the threshold (“A binary, e.g. yes, no, system can be used for triggering an alert that the image and/or video capture system and/or the OHMD display are operating outside a clinically acceptable performance range, e.g. exceeding certain view angles, exceeding or being below certain distances to the target anatomy, or exceeding an acceptable movement speed.” [0440], Table 3 in [0147], describes turning off the surgical instrument as a command executed by motion tracking.)
Regarding Claim 13, Lang discloses the one or more processors are further configured to control the position of the surgical instrument based on the tracking data and at least one of a target movement of the surgical instrument or a target distance between the location of the surgical instrument and the location of interest (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument. A backward movement of the head can be captured by the IMU and can be translated into a backward movement of the robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148], “the surgical instruments displayed in the virtual data can be representative of the physical surgical instruments used in the live patient and can have the same projected dimensions and shape as the physical surgical instruments. As indicated in Table 11, the virtual view of the virtual surgical instrument or instruments can, for example, indicate the predetermined position, location, rotation, orientation, alignment, direction of a surgical instrument. When the physical surgical instrument is aligned with and/or superimposed onto the virtual representation of the virtual surgical instrument, the surgical step can optionally be executed or the surgeon can elect to make adjustments to the position, location, rotation, orientation, alignment, direction of a physical surgical instrument relative to the virtual surgical instrument,” [1354]).
Regarding Claim 14, Lang discloses the one or more processors are further configured to: transform the tracking data of the surgical instrument relative to the frame of reference to generate transformed tracking data; and render the transformed tracking data within a render of the medical image and the 3D image (“The optical head mount display uses a computer graphics viewing pipeline that consists of the following steps to display 3D objects or 2D objects positioned in 3D space or other computer generated objects and models FIG. 16B: 1. Registration 2. View projection” [0108], “the method comprises registering the patient's surgical site and one or more optical head mounted display worn by a surgeon or surgical assistant in a common coordinate system, obtaining one or more intra-operative optical measurements using one or more optical markers, registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system, developing a virtual surgical plan based on the one or more intra-operative optical measurements, and displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display.” [1449]).
Regarding Claim 15, Lang discloses the one or more processors are further configured to: generate movement instructions for the surgical instrument based on the medical image and the location of interest; and transmit the movement instructions to the surgical instrument (“registering the one or more intra-operative optical measurements using one or more optical markers in the common coordinate system, developing a virtual surgical plan based on the one or more intra-operative optical measurements, and displaying or projecting aspects of the virtual surgical plan superimposed onto the corresponding portions of the patient's physical joint with the optical head mounted display” [1449], “the virtual surgical plan includes a virtual surgical instrument displayed or projected in a desired or predetermined position, orientation, alignment and/or direction of movement.” [1466]).
Regarding Claim 16, Lang discloses the one or more processors are further configured to generate a highlighted region for the location of interest within a render of the medical image (“The surgeon can also mark sensitive tissue, e.g. nerves, brain structure, vessels etc., that the surgeon wants to preserve or protect during the surgery. Such sensitive structure(s) can be highlighted, for example using different colors, when the virtual surgical plan and the related anatomic data or pathologic tissue information is being transmitted to or displayed by the OHMD.” [0926]).
Regarding Claim 17, Lang discloses the one or more processors are further configured to determine a distance of the subject represented in the medical image from an image capture device to detect the 3D image (“the OHMD can transmit data back to a computer, a server or a workstation. Such data can include, but are not limited to: Distance data, e.g. parallax data generated by two or more image and/or video capture systems evaluating changes in distance between the OHMD and a surgical field or an object” [0081,0088]).
Regarding Claim 18, Lang discloses the one or more processors are further configured to cause the surgical instrument to terminate energy emission responsive to (2) movement of the subject exceeding a movement threshold (“If there are differences between the physical change in the physical surgically altered tissue and the virtually intended change in the virtually surgically altered tissue or if there are differences in the appearance, properties and/or characteristics of the physical surgically altered tissue and the virtually altered tissue, e.g. in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options.” [0719], Table 3 discloses adjusting “intensity, speed, energy deposed of surgical instrument” in [0147]).
Regarding Claim 20, Lang discloses the one or more processors are further configured to apply, using a robotic arm coupled with the surgical instrument, a force to keep the surgical instrument in contact with a surface of the subject; and adjust the applied force based on the tracking data (“in a robot assisted procedure with haptic feedback from the robot, the surgeon can use his or her hands in controlling the direction of a surgical instrument. The surgeon can move the head forward. This forward motion is captured by an IMU and translated into a forward movement of a robotic arm holding a surgical instrument along the direction of the surgical instrument.” [0148]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lang in view of Schein et al. (US20210100526A1; hereinafter referred to as Schein).
Regarding Claim 11, Lang discloses the surgical instrument is an ultrasound probe (“an ultrasound probe can be introduced through one or more of the portals and the ultrasound probe can be used for intra-operative imaging, e.g. in addition to x-ray imaging. The ultrasound probe can be used to identify, for example, the ACL origin, ACL insertion and/or any proximal or distal ACL remnants. The ultrasound probe can include or carry one or more IMUS or one or more optical or navigation markers including infrared markers, retroreflective markers, RF markers which can be registered with use of an image and/or video capture system integrated into, attached to or separate from the OHMD.” [1636]);
Lang does not specifically disclose that the surgical instrument is configured to perform the procedure as a focused ultrasound procedure, the method further comprising steering, by the one or more processors, an ultrasound beam outputted by the surgical instrument based on the tracking data.
However, in a similar field of endeavor, Schein teaches methods and systems for tracking anatomical features across multiple images [Abstract].
Schein also teaches the surgical instrument is configured to perform the procedure as a focused ultrasound procedure, the method further comprising steering, by the one or more processors, an ultrasound beam outputted by the surgical instrument based on the tracking data (“At 526, method 500 optionally includes adjusting one or more ultrasound imaging parameters based on the tracked anatomical features. For example, one or more of ultrasound transducer frequency, imaging depth, image gain, beam steering, and/or other imaging parameters may be automatically adjusted in order to maintain a tracked anatomical feature in view, to maintain a desired imaging plane of a tracked anatomical feature in view, etc. For example, if an anatomical feature of interest is moving up or down, the focus of the ultrasound probe may be adjusted to follow the anatomical feature. Additionally, frequency may be optimized to depth (e.g., higher frequency for shallower images).” [0093])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lang, as outlined above, such that the surgical instrument is configured to perform the procedure as a focused ultrasound procedure, the method further comprising steering, by the one or more processors, an ultrasound beam outputted by the surgical instrument based on the tracking data, as taught by Schein, because doing so can maintain a desired imaging plane of a tracked anatomical feature in view [0093].
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lang in view of McKinnon et al. (US20220079678A1; hereinafter referred to as McKinnon).
Regarding Claim 9, Lang discloses further comprising: receiving, by the one or more processors, an indication of data associated between the surgical instrument and the subject; and controlling, by the one or more processors, operation of the surgical instrument further based on the data (“in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options. The process and the options are also shown in illustrative form in FIG. 6: The surgeon can perform a surgical step 80.” [719], “Modify the Last Surgical Step… This option can, for example, be chosen if the operator or surgeon is of the opinion that the last surgical step was subject to an inaccuracy, e.g. by a fluttering or deviating saw blade or a misaligned pin or a misaligned reamer or impactor or other problem, and should correct the inaccuracy. “ [0722]).
Lang does not specifically teach the data being torque data associated with contact between the surgical instrument and the subject.
However, in a similar field of endeavor, McKinnon teaches a method for creating a patient-specific surgical plan [Abstract].
McKinnon also teaches that the data is torque data associated with contact between the surgical instrument and the subject (“a GUI that provides a visual depiction of the knee during tissue resection may provide the measured torque and displacement of the resection equipment adjacent to the visual depiction to better provide an understanding of any deviations that occurred from the planned resection area.” [0218])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lang, as outlined above, such that the data is torque data associated with contact between the surgical instrument and the subject, as taught by McKinnon, because it would better provide an understanding of any deviations that occurred from the planned resection area [0218].
Regarding Claim 19, Lang discloses further comprising: receiving, by the one or more processors, an indication of data associated between the surgical instrument and the subject; and controlling, by the one or more processors, operation of the surgical instrument further based on the data (“in the virtual data of the patient and/or the virtual surgical plan, the magnitude of the differences can be assessed: If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation, the surgeon or the operator can have several options. The process and the options are also shown in illustrative form in FIG. 6: The surgeon can perform a surgical step 80.” [719], “Modify the Last Surgical Step… This option can, for example, be chosen if the operator or surgeon is of the opinion that the last surgical step was subject to an inaccuracy, e.g. by a fluttering or deviating saw blade or a misaligned pin or a misaligned reamer or impactor or other problem, and should correct the inaccuracy. “ [0722]).
Lang does not specifically teach the data being torque data associated with contact between the surgical instrument and the subject.
However, in a similar field of endeavor, McKinnon teaches a method for creating a patient-specific surgical plan [Abstract].
McKinnon also teaches that the data is torque data associated with contact between the surgical instrument and the subject (“a GUI that provides a visual depiction of the knee during tissue resection may provide the measured torque and displacement of the resection equipment adjacent to the visual depiction to better provide an understanding of any deviations that occurred from the planned resection area.” [0218])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lang, as outlined above, such that the data is torque data associated with contact between the surgical instrument and the subject, as taught by McKinnon, because it would better provide an understanding of any deviations that occurred from the planned resection area [0218].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (US 20210369349 A1, US 20220313366 A1, US 20170245944 A1, US 20230038498 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN MALDONADO whose telephone number is 703-756-1421. The examiner can normally be reached 8:00 am-4:00 pm PST, Monday-Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Koharski, can be reached at (571) 272-7230.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Steven Maldonado/
Patent Examiner, Art Unit 3797
/CHAO SHENG/Primary Examiner, Art Unit 3797