DETAILED ACTION
Response to Amendment
This is a Final Office Action on the merits in response to communications on 2025/10/30. Claims 1, 5, 12 – 14, 19, 23, and 25 are amended. Claims 2 – 4, 8, 10, and 11 have been cancelled. Claims 27 – 35 have been added. Claims 1, 5 – 7, 9, and 12 – 35 are pending and are addressed below.
Response to Arguments
Applicant’s amendments have overcome the objections to the Drawings.
Applicant’s amendments have overcome the objections to the Specification.
Applicant’s amendments have overcome the rejections under 35 U.S.C. 112(b).
Applicant’s amendments have overcome the objections to Claims 1 and 12.
The amendments are further addressed in the body of the Final Rejection.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 35 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Applicant claims:
“…in a direction located on second side of the constriction area.”
For examination purposes, the claim is examined as reciting: “…in a direction located on a second side of the constriction area.”
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 13, 14, 15, 19, and 22 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Gregerson, et al. (US 20200078097 A1), hereinafter referred to as Gregerson.
Regarding Claim 13:
A method for controlling a location of one or more robotic arms in a constrained space, comprising:
receiving tissue contact constraint data;
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” [0080]. The “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” [0080].
controlling the one or more robotic arms in a manner to reduce possible damage to tissue in an area defined by the tissue contact constraint data;
Gregerson discloses “…inserting surgical tool(s) to reach a target position while minimizing damage to other tissue or organs of the patient” (Gregerson, [0038]).
defining a constriction area relative to the tissue based on the tissue contact constraint data, the constriction area defines an area or region in which the robotic arms can operate with minimal risk of damage to the tissue;
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” (Gregerson, [0080]). The “constriction area” in the applicant’s claim is synonymous with the “‘virtual’ three-dimensional surface” (Gregerson, [0080]), and the “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” (Gregerson, [0080]).
defining a depth allowance relative to the constriction area based on the tissue contact constraint data, the depth allowance defines a tissue depth relative to the constriction area in which one or more constraints are applied to limit movement of the robotic arms when one of the robotic arms is between the constriction area and the depth allowance; and
Gregerson discloses “In a further embodiment operating mode of a robotic arm 101, the motion of the robotic arm 101 may be restricted such that the tip end 601 of the end effector 102 may not be moved within a pre-determined off-set distance from a defined target location, TL, inside the body of a patient 200. Put another way, the off-set distance may function as a stop on the motion of the end effector 102 towards the target location, TL. The stop may be located outside the body of the patient such that the end effector may be prevented from contacting the patient. In some embodiments, the robotic arm 101 may be movable (e.g., via hand guiding, autonomously, or both) in a limited range of motion such that the end effector 102 may only be moved along a particular trajectory towards or away from the patient 200 and may not be moved closer than the off-set distance from the target location, TL.” [0073].
halting movement of the robotic arms when one of the robotic arms goes past the depth allowance.
Gregerson discloses “A robotic arm 101 may be controlled such that no portion of the robotic arm 101, such as the end effector 102 and joints 807 of the arm 101, may cross the boundary surface 801” (Gregerson, [0084]). The “boundary surface” in the prior art is synonymous with the “depth allowance” in the application.
Regarding Claim 14:
The method of claim 13, further comprising:
identifying a portion of an anatomical structure with a distal end of one of the robotic arms; and
Gregerson discloses “…the optical sensor device includes a plurality of cameras mounted to an arm extending above the patient surgical area” [0039].
defining the constriction area based on the tissue contact constraint data and the identified portion of an anatomical structure.
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” [0080]. The “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” [0080].
Regarding Claim 15:
The method of claim 14, further comprising displaying, on a display unit, an indicator when one of the robotic arms enters a space beyond the constriction area.
Gregerson discloses “An indicator, which may be provided on a display device or on the robotic arm itself (e.g., an LED indicator), may indicate the position of the robotic arm with respect to the virtual volume” [0098]. The “virtual volume” is defined as “a virtual three-dimensional volume… defined over the surgical area of the patient” [0096]. Gregerson additionally discloses “the dimensions and/or shape of the virtual volume may be adjustable by the user. The user may adjust the dimensions and/or shape of the virtual volume based on the position(s) of potential obstruction(s) that the robotic arm could collide with during surgery” [0099], specifying that an additional area/boundary can be set that would be synonymous with the “constriction area”.
Regarding Claim 19:
The method of claim 13, further comprising displaying an indicator, on a display unit, when one of the robotic arms reaches the depth allowance.
Gregerson discloses “An indicator, which may be provided on a display device or on the robotic arm itself (e.g., an LED indicator), may indicate the position of the robotic arm with respect to the virtual volume” [0098]. The “virtual volume” is defined as “a virtual three-dimensional volume… defined over the surgical area of the patient” [0096]. Gregerson additionally discloses “the dimensions and/or shape of the virtual volume may be adjustable by the user. The user may adjust the dimensions and/or shape of the virtual volume based on the position(s) of potential obstruction(s) that the robotic arm could collide with during surgery” [0099], specifying that an additional area/boundary can be set that would be synonymous with the “depth allowance”.
Regarding Claim 22:
The method of claim 13, wherein the constriction area is represented by a plane and controlling the one or more robotic arms in a manner to reduce possible damage to tissue in the area defined by the tissue contact constraint data comprises halting movement of the one or more robotic arms on one side of the plane.
Gregerson discloses a “…boundary surface [that] may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data.” [0080]. The “skin surface” in the prior art is synonymous with the “floor” in the application. Gregerson also discloses “The boundary surface may correspond to the position of the patient in three-dimensional space and may prevent the robotic arm or any portion thereof, from colliding with the patient” [0076], which also prevents “the one or more robotic arms [from extending] past the depth allowance” (as specified by the applicant).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5 – 7, 9, 12, 23, and 27 – 35 are rejected under 35 U.S.C. 103 as being unpatentable over Gregerson, et al. (US 20200078097 A1), hereinafter referred to as Gregerson, in view of Funda, et al. (US 6201984 B1), hereinafter referred to as Funda.
Regarding Claim 1:
A surgical robotic system, comprising:
a robotic unit having robotic arms;
Gregerson discloses “Although a single robotic arm is shown in FIGS. 1 and 2, it will be understood that two or more robotic arms may be utilized” (Gregerson, [0042]).
a camera assembly to generate a view of an anatomical structure of a patient;
Gregerson discloses an “…image dataset of the patient's anatomy may be obtained using an imaging device” (Gregerson, [0044]). The specified “image dataset” from the Gregerson prior art is used to generate a view of the “anatomical structure of the patient”. Although not originally disclosed to be a camera, Gregerson also discloses that “other imaging devices may also be utilized” (Gregerson, [0033]).
However, Gregerson does not explicitly disclose that the imaging assembly is a camera assembly. Gregerson does disclose that “other imaging devices may also be utilized” (Gregerson, [0033]), but does not provide specific examples.
Funda discloses an endoscopic camera as an imaging assembly used in endoscopic surgical robots (Funda, Column 17 Lines 39 - 40).
It would have been obvious to combine the camera assembly of Funda with the surgical robotic system of Gregerson because it provides images to the computer about the relative position of the surgeon’s instruments, the camera, and the patient’s anatomy (Funda, Column 19 Lines 14 - 18).
Furthermore, since Gregerson broadly discloses using an imaging assembly, it would have been obvious to try a camera assembly as taught by Funda for the imaging assembly because it is a known type of endoscopic imaging assembly in the art that generates a view of the patient’s anatomical structure. Therefore, one having ordinary skill in the art would have had a reasonable expectation of success in using the camera assembly taught by Funda as the imaging assembly in Gregerson, and it would have been obvious to try. See MPEP 2143.
a memory holding executable instructions to control the robotic arms;
Gregerson discloses “…functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on as one or more instructions or code on a non-transitory computer-readable medium” (Gregerson, [0128]).
a controller configured to or programmed to execute instructions held in the memory to:
receive tissue contact constraint data;
Gregerson discloses “the image dataset may be stored electronically in a memory” [0106].
control the robotic arms in a manner to reduce possible damage to tissue in an area identified by the tissue contact constraint data;
Gregerson discloses “…inserting surgical tool(s) to reach a target position while minimizing damage to other tissue or organs of the patient” (Gregerson, [0038]).
define a constriction area relative to the tissue based on the tissue contact constraint data, the constriction area defines an area or region in which the robotic arms can operate with minimal risk of damage to the tissue;
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” (Gregerson, [0080]). The “constriction area” in the applicant’s claim is synonymous with the “‘virtual’ three-dimensional surface” (Gregerson, [0080]), and the “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” (Gregerson, [0080]).
define a depth allowance relative to the constriction area based on the tissue contact constraint data, the depth allowance defines a tissue depth relative to the constriction area in which one or more constraints are applied to limit movement of the robotic arms when one of the robotic arms is between the constriction area and the depth allowance;
Gregerson discloses “In a further embodiment operating mode of a robotic arm 101, the motion of the robotic arm 101 may be restricted such that the tip end 601 of the end effector 102 may not be moved within a pre-determined off-set distance from a defined target location, TL, inside the body of a patient 200. Put another way, the off-set distance may function as a stop on the motion of the end effector 102 towards the target location, TL. The stop may be located outside the body of the patient such that the end effector may be prevented from contacting the patient. In some embodiments, the robotic arm 101 may be movable (e.g., via hand guiding, autonomously, or both) in a limited range of motion such that the end effector 102 may only be moved along a particular trajectory towards or away from the patient 200 and may not be moved closer than the off-set distance from the target location, TL.” [0073].
halt movement of the robotic arms when one of the robotic arms goes past the depth allowance; and
Gregerson discloses “The boundary surface may correspond to the position of the patient in three-dimensional space and may prevent the robotic arm or any portion thereof, from colliding with the patient” [0076], which also prevents “the one or more robotic arms [from extending] past the depth allowance” (as specified by the applicant).
a display unit configured to display a view of the anatomical structure.
Gregerson discloses “The system may also include a display device as schematically illustrated in FIG. 1. The display device may display image data of the patient's anatomy obtained by the imaging device” (Gregerson, [0038]).
Regarding Claim 5:
The surgical robotic system of claim 1, wherein the controller is configured to or programmed to define the constriction area based on the tissue contact constraint data and a portion of an anatomical structure identified with a distal end of at least one of the robotic arms.
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” (Gregerson, [0080]). The “constriction area” in the applicant’s claim is synonymous with the “‘virtual’ three-dimensional surface” (Gregerson, [0080]), and the “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” (Gregerson, [0080]). Gregerson also discloses “…[an] optical sensor device includes a plurality of cameras mounted to an arm extending above the patient surgical area” (Gregerson, [0039]).
Regarding Claim 6:
The surgical robotic system of claim 5, wherein the display unit is configured to display an indicator when one of the robotic arms enters a space beyond the constriction area.
Gregerson discloses “An indicator, which may be provided on a display device or on the robotic arm itself (e.g., an LED indicator), may indicate the position of the robotic arm with respect to the virtual volume” (Gregerson, [0098]). The “virtual volume” is defined as “a virtual three-dimensional volume… defined over the surgical area of the patient” (Gregerson, [0096]). Gregerson additionally discloses “One surface of the volume may comprise a boundary surface over the skin surface of the patient through which the robotic arm may not enter,” (Gregerson, [0096]), which would make the “virtual volume” synonymous with “a space beyond the constriction area”.
Regarding Claim 7:
The surgical robotic system of claim 5, wherein the memory holds executable depth allowance instructions to define a depth allowance relative to the constriction area.
Gregerson discloses “…the boundary surface includes a first portion that corresponds to the three-dimensional contour of the skin surface of the patient” (Gregerson, [0084]).
Regarding Claim 9:
The surgical robotic system of claim 7, wherein the display unit is configured to display an indicator when the one of the robotic arms reaches the depth allowance.
Gregerson discloses “An indicator, which may be provided on a display device or on the robotic arm itself (e.g., an LED indicator), may indicate the position of the robotic arm with respect to the virtual volume” (Gregerson, [0098]). The “virtual volume” is defined as “a virtual three-dimensional volume… defined over the surgical area of the patient” (Gregerson, [0096]). Gregerson additionally discloses “the dimensions and/or shape of the virtual volume may be adjustable by the user. The user may adjust the dimensions and/or shape of the virtual volume based on the position(s) of potential obstruction(s) that the robotic arm could collide with during surgery” (Gregerson, [0099]), specifying that an additional area/boundary can be set that would be synonymous with the “depth allowance”.
Regarding Claim 12:
The surgical robotic system of claim 5, wherein the constriction area is represented by a plane and the controller is configured or programmed to halt movement of the robotic arms on one side of the plane.
Gregerson discloses “the motion of the robotic arm may be restricted such that the tip end of the end effector may not be moved within a pre-determined off-set distance from a defined target location, TL, inside the body of a patient. Put another way, the off-set distance may function as a stop on the motion of the end effector towards the target location, TL. The stop may be located outside the body of the patient such that the end effector may be prevented from contacting the patient.” (Gregerson, [0073]).
Regarding Claim 23:
A surgical robotic system, comprising:
a robotic arm assembly having robotic arms;
Gregerson discloses “Although a single robotic arm is shown in FIGS. 1 and 2, it will be understood that two or more robotic arms may be utilized” (Gregerson, [0042]).
a camera assembly, wherein the camera assembly generates image data of an internal region of a patient; and
Gregerson discloses an “…image dataset of the patient's anatomy may be obtained using an imaging device” (Gregerson, [0044]). The specified “image dataset” from the Gregerson prior art is used to generate a view of the “anatomical structure of the patient”. Although not originally disclosed to be a camera, Gregerson also discloses that “other imaging devices may also be utilized” (Gregerson, [0033]).
However, Gregerson does not explicitly disclose that the imaging assembly is a camera assembly. Gregerson does disclose that “other imaging devices may also be utilized” (Gregerson, [0033]), but does not provide specific examples.
Funda discloses an endoscopic camera as an imaging assembly used in endoscopic surgical robots (Funda, Column 17 Lines 39 - 40).
It would have been obvious to combine the camera assembly of Funda with the surgical robotic system of Gregerson because it provides images to the computer about the relative position of the surgeon’s instruments, the camera, and the patient’s anatomy (Funda, Column 19 Lines 14 - 18).
Furthermore, since Gregerson broadly discloses using an imaging assembly, it would have been obvious to try a camera assembly as taught by Funda for the imaging assembly because it is a known type of endoscopic imaging assembly in the art that generates a view of the patient’s anatomical structure. Therefore, one having ordinary skill in the art would have had a reasonable expectation of success in using the camera assembly taught by Funda as the imaging assembly in Gregerson, and it would have been obvious to try. See MPEP 2143.
a controller configured to or programmed to:
detect one or more markers in the image data;
Gregerson discloses “The motion tracking system 105 in the embodiment of FIG. 1 includes a plurality of marker devices 119, 202 and 115 and a stereoscopic optical sensor device 111 that includes two or more cameras (e.g., IR cameras). The optical sensor device 111 may include one or more radiation sources (e.g., diode ring(s)) that direct radiation (e.g., IR radiation) into the surgical field, where the radiation may be reflected by the marker devices 119, 202 and 115 and received by the cameras.” ([0035]).
receive tissue contact constraint data;
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” [0080]. The “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” [0080].
control movement of the robotic arms based on the one or more markers in the image data to reduce possible damage to tissue in an area identified by the tissue contact constraint data;
Gregerson discloses “…inserting surgical tool(s) to reach a target position while minimizing damage to other tissue or organs of the patient” (Gregerson, [0038]).
define a constriction area relative to tissue based on the tissue contact constraint data, the constriction area defines an area or region in which the robotic arms can operate with minimal risk of damage to the tissue;
Gregerson discloses “In block 703 of method 700, a boundary surface may be generated based on the identified skin surface of the patient. The boundary surface may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data” (Gregerson, [0080]). The “constriction area” in the applicant’s claim is synonymous with the “‘virtual’ three-dimensional surface” (Gregerson, [0080]), and the “tissue contact constraint data” in the applicant’s claim is synonymous with the “image data” (Gregerson, [0080]).
define a depth allowance relative to the constriction area based on the tissue contact constraint data, the depth allowance defines a tissue depth relative to the constriction area in which one or more constraints are applied to limit movement of the robotic arms when one of the robotic arms is between the constriction area and the depth allowance;
Gregerson discloses “In a further embodiment operating mode of a robotic arm 101, the motion of the robotic arm 101 may be restricted such that the tip end 601 of the end effector 102 may not be moved within a pre-determined off-set distance from a defined target location, TL, inside the body of a patient 200. Put another way, the off-set distance may function as a stop on the motion of the end effector 102 towards the target location, TL. The stop may be located outside the body of the patient such that the end effector may be prevented from contacting the patient. In some embodiments, the robotic arm 101 may be movable (e.g., via hand guiding, autonomously, or both) in a limited range of motion such that the end effector 102 may only be moved along a particular trajectory towards or away from the patient 200 and may not be moved closer than the off-set distance from the target location, TL.” [0073].
halt movement of the robotic arms when one of the robotic arms goes past the depth allowance; and
Gregerson discloses a “…boundary surface [that] may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data.” [0080]. The “skin surface” in the prior art is synonymous with the “floor” in the application. Gregerson also discloses “The boundary surface may correspond to the position of the patient in three-dimensional space and may prevent the robotic arm or any portion thereof, from colliding with the patient” [0076], which also prevents “the one or more robotic arms [from extending] past the depth allowance” (as specified by the applicant).
store the image data.
Gregerson discloses “the image dataset may be stored electronically in a memory” [0106].
Regarding Claim 27:
The surgical robotic system of claim 1, wherein the constriction area is a three- dimensional volume or a two-dimensional plane.
Gregerson discloses a “…boundary surface [that] may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data.” [0080].
Regarding Claim 28:
The surgical robotic system of claim 27, wherein the three-dimensional volume can be defined as a cube, cone, cylinder, or other three-dimensional shape or combination of shapes.
Gregerson discloses “The virtual volume 811 may extend from the boundary surface 801 away from the patient 200. The virtual volume 811 may have any size and shape, and may have, for example, a generally cylindrical, cuboid, conical, pyramidal or irregular shape.” (Gregerson, [0096]).
Regarding Claim 29:
The surgical robotic system of claim 1, wherein the tissue contact constraint data includes a two-dimensional or three-dimensional model that corresponds to the location of one or more selected anatomical structures of the patient to be protected with surfaces of interest that are segmented by sensitivity to contact.
Gregerson discloses a “…boundary surface [that] may correspond directly with the skin surface identified within the image data or may be a “virtual” three-dimensional surface that may be offset from the actual skin surface by some amount (e.g., 0-2 cm). In some embodiments, a portion of the boundary surface may be generated by extrapolating from the image data.” [0080].
Regarding Claim 30:
The surgical robotic system of claim 1, wherein the constriction area can correspond to a tissue surface defining a floor at a specified distance above or below the tissue surface.
Gregerson discloses “A target location along the unique trajectory may be defined based on a pre-determined offset distance from the tip end of the tool.” (Gregerson, [0064]).
Regarding Claim 31:
The surgical robotic system of claim 1, wherein the one or more constraints applied to limit movement include speed of movement of one or both of the robotic arms, torque reduction in one or both of the robotic arms, linear or rotational movement of one or both of the robotic arms.
Gregerson discloses “In some embodiments, the robotic arm 101 may be movable (e.g., via hand guiding, autonomously, or both) in a limited range of motion such that the end effector 102 may only be moved along a particular trajectory towards or away from the patient 200.” ([0073]).
Regarding Claim 32:
The surgical robotic system of claim 1, wherein the tissue contact constraint data may be updated based on image data of a patient's surgical area or tissue identified within the surgical area.
Gregerson discloses “In block 705, the image dataset and the boundary surface may be registered within a patient coordinate system. The registration may be performed using a method such as described above with reference to FIG. 3. In particular, the image dataset and the generated boundary surface may be correlated with the patient position which may be determined using a motion tracking system 105 as described above.” ([0082]).
Regarding Claim 33:
The surgical robotic system of claim 1, wherein a vector can be configured to point to the side of the constriction area where a portion of the robotic arms is allowed to travel.
Gregerson discloses “The graphical depiction may also indicate a trajectory defined by the object (e.g., a ray extending from a tip end of the object into the patient) and/or a target point within the patient's anatomy that may be defined based on the position and/or orientation of one or more objects being tracked. In various embodiments, the tool 104 may be a pointer.” ([0050]).
Regarding Claim 34:
The surgical robotic system of claim 33, wherein the portion of the robotic arms is permitted to move in a direction located on a first side of the constriction area.
Gregerson discloses “Similar to the embodiment of FIGS. 6A-6B, the robotic control system 405 may control the motion of the robotic arm 101 such that the tip end 601 of the end effector 102 is maintained pointed along a trajectory 611, 612 that intersects with the target location, TL.” ([0075]).
Regarding Claim 35:
The surgical robotic system of claim 34, wherein the portion of the robotic arms can be prohibited from moving in a direction located on second side of the constriction area.
Gregerson discloses “In some embodiments, the robotic arm 101 may be movable (e.g., via hand guiding, autonomously, or both) in a limited range of motion such that the end effector 102 may only be moved along a particular trajectory towards or away from the patient 200.” ([0073]).
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Gregerson et al. (US 20200078097 A1), hereinafter referred to as Gregerson, in view of Tsuji et al. (US 20060276686 A1), hereinafter referred to as Tsuji, and further in view of Funda et al. (US 6201984 B1), hereinafter referred to as Funda.
Regarding Claim 24:
The surgical robotic system of claim 23, wherein the controller is configured to or programmed to control the robotic arms to place the one or more markers.
Tsuji discloses a treatment instrument insertion port in the endoscope with which “placement markers Mi/n can be placed using grasping forceps” (Tsuji, [0037]). The “placement marker” in the prior art is a “marker… capable of transmitting an electromagnetic wave” (Tsuji, Claim 25).
Regarding Claim 25:
The surgical robotic system of claim 23, wherein the markers include an X shape, quick response (QR) code markings, reflective tape, reflective film, stickers, cloth, staples, tacks, LED objects, or emitters.
Tsuji discloses “A marker capable of transmitting an electromagnetic wave is placed at a target portion in an affected area and the like” (Tsuji, Abstract). The marker specified in the prior art is considered an “emitter”.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Tsuji et al. (US 20060276686 A1), hereinafter referred to as Tsuji, in view of Funda et al. (US 6201984 B1), hereinafter referred to as Funda, as applied to claim 23 above, and further in view of Weir (US 20170189127 A1), hereinafter referred to as Weir.
Regarding Claim 26:
The surgical robotic system of claim 23, wherein the controller is configured to or programmed to define a threshold distance relative to the one or more markers, and vary the speed of movement of at least one of the robotic arms when one of the robotic arms is disposed relative to the one or more markers at a distance that is less than the threshold distance, such that one of the robotic arms is automatically placed adjacent to the one or more markers.
Tsuji discloses “A marker capable of transmitting an electromagnetic wave is placed at a target portion in an affected area and the like” (Tsuji, Abstract). Tsuji, however, does not disclose a controller “configured to or programmed to define a threshold distance relative to the one or more markers”.
However, Weir discloses “If contact is determined as being imminent, the controller can cause automatic movement of a portion of the robotic surgical system to help prevent the impending collision and/or mitigate adverse effects of the collision should it occur” (Weir, [0169]). The prior art specifies an “automatic movement” that is set to occur when contact is “imminent”. The prior art additionally specifies that “the controller can thus be configured to determine an imminent collision based on a pattern of mechanism extension signals” (Weir, [0153]).
It would have been obvious to one of ordinary skill in the art to combine the controller disclosed by Weir with the robotic system disclosed by Tsuji in view of Funda because Weir discloses “the controller can cause automatic movement of a portion of the robotic surgical system to help prevent the impending collision and/or mitigate adverse effects of the collision should it occur” (Weir, [0169]). One of ordinary skill in the art would want to prevent damage to both the surgical instruments and the patient (Weir, [0066]; “…movement of one or more of the surgical instruments by the robotic surgical system can risk collision between objects, such as between a part of the robotic surgical system (e.g., a part moving to move the surgical instrument) and a non-moving one of the surgical instruments, another part of the robotic surgical system, a mounted light illuminating the surgical area, a surgical table, etc., which can risk damage to one or both of the colliding objects and/or can risk harming the patient”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES B CHIN whose telephone number is (571) 272-4634. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.B.C./
Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656