DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 05, 2025 has been entered.
Response to Amendment
The Amendment filed November 05, 2025 has been entered. Claims 1-10 and 12-21 remain pending in the application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Junio (US 2022/0270263 A1) (“Junio”) in view of Thienphrapa et al. (US 2014/0212025 A1) (“Thienphrapa”).
Regarding claims 1 and 13, Junio discloses A system for tracking one or more objects comprising (Abstract and entire document):
an image device (FIG. 1, imaging device(s) 112);
a processor (FIG. 1, [0047], [0056], processor 104); and
a memory storing data for processing by the processor, the data, when processed, causing the processor to (FIG. 1 [0056], memory 106):
receive at least one image from the imaging device at each of a plurality of poses to form a set of images, the set of images depicting one or more objects, the one or more objects comprising a surgical tool ([0062], “The imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.). “Image data” as used herein refers to the data generated or captured by an imaging device 112,” and [0067], “In some embodiments, reference markers (i.e., navigation markers) may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.”; [0075], discussing the plurality of poses and the set of images; and [0068], “In various embodiments, the navigation system 118 may be used to track a position and orientation (i.e., pose) of the imaging device 112, the robot 114 and/or robotic arm 116, and/or one or more surgical tools”);
receive pose information for each of the plurality of poses ([0082]-[0084], discussing pose information);
input the set of images and the pose information into a reconstruction model, the reconstruction model configured to generate a three-dimensional representation of the one or more objects based on the set of images and the pose information; receive, from the reconstruction model, the three-dimensional representation of the one or more objects ([0084], “Also in some embodiments, the segmenting of the second set of anatomical elements from the plurality of 2D images in the step 216 is based on the 3D image received in the step 204, and/or on the segmenting of the first set of anatomical elements in the step 208. For example, the segmenting may comprise orienting the 3D image to reflect a pose of the imaging device used to capture one of the plurality of 2D images, and then determining which anatomical elements are visible with the 3D image in that pose.”),
wherein the three-dimensional representation of the one or more objects comprises a three-dimensional representation of the surgical tool ([0067], “In some embodiments, reference markers (i.e., navigation markers) may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.”);
input the three-dimensional representation of the one or more objects into a segmenting model, the segmenting model configured to segment the three-dimensional representation of the one or more objects into one or more segmented objects ([0076], “The segmenting may comprise identifying both the target anatomical elements as well as the incidental anatomical elements with different and/or unique identifiers (e.g., tags, labels, highlighting, etc.). For example, the target anatomical elements may be highlighted in a first color (e.g., when displayed on a user interface), while the incidental or extraneous anatomical elements may be highlighted in a second color different from the first color.” This passage describes segmenting the one or more objects and the anatomical elements);
track one or more characteristics of the surgical tool using the segmented version of the surgical tool ([0067], “In some embodiments, reference markers (i.e., navigation markers) may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.”); and
generate instructions for controlling a robot to move the surgical tool based on the tracked one or more characteristics of the surgical tool ([0088], “The registration algorithm may transform, map, or create a correlation between the 3D image and/or components thereof and each of the plurality of 2D images, which may then be used by a system (e.g., a system 100) and/or one or more components thereof (e.g., a navigation system 118) to translate one or more coordinates in the patient coordinate space to one or more coordinates in a coordinate space of a robot (e.g., a robot 114) and/or vice versa. As previously noted, the registration may comprise registering between a 3D image (e.g., a CT scan) and one or more 2D images (e.g., fluoroscopy images) and/or vice versa, and/or between a 2D image and another 2D image and/or vice versa.” And [0091], “Once completed, the registration is useful, for example, to facilitate a surgery or surgical task (e.g., controlling a robot and/or robotic arm with patient anatomy and/or providing image-based guidance to a surgeon).”).
Junio fails to disclose receive, from the segmenting model, the one or more segmented objects comprising a segmented version of the surgical tool.
However, in the same field of endeavor, Thienphrapa teaches receive, from the segmenting model, the one or more segmented objects comprising a segmented version of the surgical tool ([0035], “In one embodiment, the instrument detection in 3D ultrasound of block 202 may employ an image-based method. Surgical instruments are commonly composed of material, e.g., metal, which is visible under ultrasound imaging. Furthermore, the surgical instruments may feature a longitudinally disposed axis (a shaft or the like) to permit a surgeon or robot to operate in confined spaces. Using the elongated nature of the surgical instrument or other distinctive features, a reference is provided, which an imaging algorithm can exploit to properly detect the instrument in an ultrasound image in an automated fashion. One existing technique/algorithm is the known Hough Transform, which can segment a surgical instrument from an image based on knowledge of its shape.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system/device as taught by Junio to include receive, from the segmenting model, the one or more segmented objects comprising a segmented version of the surgical tool, as taught by Thienphrapa, in order to automatically detect and segment the surgical instrument from the image ([0035]).
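The following is an illustrative sketch only, not drawn from Junio or Thienphrapa: it shows one way an image-based method of the kind described in Thienphrapa at [0035] could segment an elongated instrument from a 2D image slice using a Hough transform. All function and variable names are hypothetical, and the sketch assumes the OpenCV (cv2) and NumPy libraries are available.

# Illustrative sketch only; not taken from Junio or Thienphrapa.
import cv2
import numpy as np

def segment_instrument(slice_2d):
    """Return a binary mask of an elongated instrument in a 2D image slice,
    using edge detection followed by a probabilistic Hough transform."""
    img = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    mask = np.zeros_like(img)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), color=255, thickness=3)
    return mask

# Synthetic example: a bright diagonal "shaft" on a dark background.
phantom = np.zeros((128, 128), dtype=np.float32)
cv2.line(phantom, (10, 20), (110, 100), color=1.0, thickness=2)
instrument_mask = segment_instrument(phantom)
print("instrument pixels detected:", int(np.count_nonzero(instrument_mask)))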
Regarding claims 2 and 14, Junio as modified discloses the system of claim 1. Junio further discloses wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive an updated three-dimensional representation; input the updated three-dimensional representation into the segmenting model; and receive, from the segmenting model, one or more updated segmented objects that replace the one or more segmented objects ([0062], “The image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some embodiments, a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time. The imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data.” The images are preoperative, intraoperative, and postoperative. See also [0090], “In this manner, the 3D image may not only be registered to the one or more of the plurality of 2D images, but also be updated to reflect a current (e.g., intraoperative) pose of the patient.”).
Regarding claims 3 and 15, Junio as modified discloses the system of claim 2. Junio further discloses wherein tracking comprises comparing the one or more characteristics of the one or more segmented objects to one or more updated characteristics of the one or more updated segmented objects ([0093], “In still further embodiments, the registration may be performed one or more times intraoperatively (e.g., during surgery) to update, adjust, and/or refresh the current registration. For example, a new 3D image and/or a new plurality of 2D images may be captured intraoperatively, and a new registration may be completed therefrom (e.g., using a preoperative 3D image and a new plurality of intraoperative 2D images, a new intraoperative 3D image and a new plurality of 2D images, or otherwise). An updated registration may be required, for example, if a pose of the patient changes or is changed during the course of a surgical procedure.”).
Regarding claims 4 and 16-17, Junio as modified discloses the system of claim 1. Junio further discloses wherein the one or more segmented objects include an anatomical element ([0062], “The imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.). “Image data” as used herein refers to the data generated or captured by an imaging device 112,”).
Regarding claim 5, Junio as modified discloses the system of claim 4. Junio further discloses wherein the anatomical element is at least one of a tumor, a biological hardening, and a vertebrae ([0050]-[0053], vertebrae).
Regarding claim 6, Junio as modified discloses the system of claim 1. Junio further discloses wherein the image device comprises an ultrasound image device ([0025]-[0026]).
Regarding claims 7 and 19, Junio as modified discloses the system of claim 1. Junio further discloses wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive an updated three-dimensional representation; input the updated three-dimensional representation into the segmenting model; receive, from the segmenting model, one or more updated segmented objects each having one or more updated characteristics; compare the one or more updated characteristics to the one or more characteristics; and replace at least one of the one or more segmented objects with a corresponding updated segmented object when the at least one of the one or more characteristics is different than the corresponding updated characteristic ([0062], “The image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some embodiments, a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time. The imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data.” The images are preoperative, intraoperative, and postoperative. See also [0090], “In this manner, the 3D image may not only be registered to the one or more of the plurality of 2D images, but also be updated to reflect a current (e.g., intraoperative) pose of the patient.” And [0093], “In still further embodiments, the registration may be performed one or more times intraoperatively (e.g., during surgery) to update, adjust, and/or refresh the current registration. For example, a new 3D image and/or a new plurality of 2D images may be captured intraoperatively, and a new registration may be completed therefrom (e.g., using a preoperative 3D image and a new plurality of intraoperative 2D images, a new intraoperative 3D image and a new plurality of 2D images, or otherwise). An updated registration may be required, for example, if a pose of the patient changes or is changed during the course of a surgical procedure.”).
Regarding claim 8, Junio as modified discloses the system of claim 1. Junio further discloses wherein the reconstruction model generates the three-dimensional representations of the one or more objects based on the at least one image, the pose information, a preoperative image, and preoperative pose information ([0062], “The image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some embodiments, a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time. The imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data.” The images are preoperative, intraoperative, and postoperative. See also [0090], “In this manner, the 3D image may not only be registered to the one or more of the plurality of 2D images, but also be updated to reflect a current (e.g., intraoperative) pose of the patient.” And [0093], “In still further embodiments, the registration may be performed one or more times intraoperatively (e.g., during surgery) to update, adjust, and/or refresh the current registration. For example, a new 3D image and/or a new plurality of 2D images may be captured intraoperatively, and a new registration may be completed therefrom (e.g., using a preoperative 3D image and a new plurality of intraoperative 2D images, a new intraoperative 3D image and a new plurality of 2D images, or otherwise). An updated registration may be required, for example, if a pose of the patient changes or is changed during the course of a surgical procedure.”).
Regarding claim 9, Junio as modified discloses the system of claim 1. Junio further discloses wherein the one or more characteristics are at least one of size, shape, position, or orientation ([0083], shape; and [0073], “The 3D image may depict a 3D pose (e.g., position and orientation) of a patient's anatomy or portion thereof. In some embodiments, the 3D image may be captured preoperatively (e.g., before surgery) and may be stored in a system (e.g., a system 100) and/or one or more components thereof (e.g., a database 130). The stored 3D image may then be received (e.g., by a processor 104), as described above, preoperatively (e.g., before the surgery) and/or intraoperatively (e.g., during surgery).”).
Regarding claims 12 and 18, Junio as modified discloses the system of claim 1. Junio further discloses wherein the segmenting model identifies the one or more objects within the three-dimensional representation and generates a separate three-dimensional representation of each of the one or more objects (the objects are segmented from the anatomical elements and from each other, such that each is interpreted as having a separate representation).
Regarding claim 20, the same rejection as applied to claims 1 and 13 applies, and Junio further discloses the system comprising a robot arm ([0064], “The robot 114 may be any surgical robot or surgical robotic system. The robot 114 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system. The robot 114 may be configured to position the imaging device 112 at one or more precise position(s) and orientation(s), and/or to return the imaging device 112 to the same position(s) and orientation(s) at a later point in time. The robot 114 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 118 or not) to accomplish or to assist with a surgical task.”).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Junio in view of Thienphrapa, and further in view of Mwikirize et al. (US 11426142 B2) (“Mwikirize”).
Regarding claim 10, Junio as modified discloses the system of claim 1.
Junio as modified fails to disclose wherein the one or more objects comprises a surgical balloon and the one or more characteristics comprises a size of the surgical balloon.
However, in the same field of endeavor, Mwikirize teaches wherein the one or more objects comprises a surgical balloon and the one or more characteristics comprises a size of the surgical balloon (Col. 3, lines 40-56, “Although the systems and methods of the present disclosure provide for the localization of a needle, the systems and methods are not limited to localizing needles. In particular, the systems and methods can be used to localize any type of medical device or surgical instrument, including, but not limited to, forceps, stainless steel rods with conical tips, catheters, guidewires, radio frequency ablation electrodes, balloons (during angioplasty/stenting procedures), or any other medical devices.” FIG. 1 discusses identifying a needle by segmentation of an ultrasound image; thus, the balloon can replace the needle as discussed above. Col. 12, lines 33-50, further discusses identifying the size of the object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system/device as taught by Junio as modified to include wherein the one or more objects comprises a surgical balloon and the one or more characteristics comprises a size of the surgical balloon, as taught by Mwikirize, in order to locate the devices accurately (Col. 3, lines 57-66, “Third, the systems and methods can accurately localize a needle tip in a computationally fast way with a high degree of accuracy.”).
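As an illustrative aside only, and not drawn from Mwikirize, a size characteristic of a segmented object can be estimated directly from its segmentation mask, for example by counting voxels and scaling by the voxel dimensions. The sketch below uses hypothetical names and assumes only NumPy.

# Illustrative sketch only; not taken from Mwikirize.
import numpy as np

def segmented_object_size(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Estimate size characteristics of a segmented object from a binary 3D mask."""
    voxel_volume = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
    volume_mm3 = float(np.count_nonzero(mask)) * voxel_volume
    # Equivalent spherical diameter: diameter of a sphere with the same volume.
    eq_diameter_mm = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0) if volume_mm3 > 0 else 0.0
    return {"volume_mm3": volume_mm3, "equivalent_diameter_mm": eq_diameter_mm}

# Example: a 10x10x10-voxel object at 0.5 mm isotropic voxel spacing.
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(segmented_object_size(mask, voxel_spacing_mm=(0.5, 0.5, 0.5)))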
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Junio in view of Thienphrapa, and further in view of Grabner et al. (US 2019/0147221 A1) (“Grabner”).
Regarding claim 21, Junio as modified discloses the system of claim 1. Junio as modified fails to disclose wherein the reconstruction model generates the three-dimensional representation of the one or more objects by determining a surface representation to define a shape of the one or more objects, wherein the surface representation is a virtual mesh comprising a set of polygonal faces connected by at least one edge of a plurality of edges and at least one vertex of a plurality of vertices.
However, in the same field of endeavor, Grabner teaches wherein the reconstruction model generates the three-dimensional representation of the one or more objects by determining a surface representation to define a shape of the one or more objects, wherein the surface representation is a virtual mesh comprising a set of polygonal faces connected by at least one edge of a plurality of edges and at least one vertex of a plurality of vertices ([0052]-[0062], discussing the virtual meshes/polygonal faces; see [0062], “For example, a 3D model of an airplane can be defined by a triangle mesh or a polygon mesh that includes a collection of vertices, edges, and faces defining the shape of the airplane.” And [0008], “In other examples, the 3D mesh of the 3D model can be used for 3D scene understanding, object grasping (e.g., in robotics, surgical applications, and/or other suitable applications), object tracking, scene navigation, and/or other suitable applications.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system/device as taught by Junio as modified to include wherein the reconstruction model generates the three-dimensional representation of the one or more objects by determining a surface representation to define a shape of the one or more objects, wherein the surface representation is a virtual mesh comprising a set of polygonal faces connected by at least one edge of a plurality of edges and at least one vertex of a plurality of vertices, as taught by Grabner, in order to represent the shape of an object for applications such as object tracking and object grasping ([0008]).
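For illustration only, and not taken from Grabner, the kind of surface representation described, a mesh of vertices, edges, and polygonal faces, can be captured with a very small data structure. The sketch below builds a tetrahedron and derives its edge set from its triangular faces; all names are hypothetical.

# Illustrative sketch only; not taken from Grabner.
from dataclasses import dataclass

@dataclass
class SurfaceMesh:
    """A polygonal surface mesh: 3D vertices, and faces given as vertex-index triples."""
    vertices: list
    faces: list

    def edges(self):
        """Derive the undirected edge set connecting the faces' vertices."""
        edge_set = set()
        for a, b, c in self.faces:
            for u, v in ((a, b), (b, c), (c, a)):
                edge_set.add((min(u, v), max(u, v)))
        return edge_set

# A tetrahedron: 4 vertices, 4 triangular faces, 6 edges.
tetra = SurfaceMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    faces=[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)],
)
print(len(tetra.vertices), "vertices,", len(tetra.faces), "faces,", len(tetra.edges()), "edges")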
Response to Arguments
Applicant’s arguments with respect to claims 1-10 and 12-21 have been considered but are moot because the new ground of rejection does not solely rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A TOMBERS whose telephone number is (571) 272-6851. The examiner can normally be reached on M-TH 7:00-16:00, F 7:00-11:00 (Eastern).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Chen, can be reached on 571-272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH A TOMBERS/Examiner, Art Unit 3791