Prosecution Insights
Last updated: April 19, 2026
Application No. 18/978,615

TRACKING SYSTEMS AND METHODS FOR IMAGE-GUIDED SURGERY

Non-Final OA: §102, §DP (double patenting)

Filed: Dec 12, 2024
Examiner: BRUCE, FAROUK A
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Augmedics Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 46% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 4y 7m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 46% of resolved cases (93 granted / 200 resolved; -23.5% vs. TC average)
Interview Lift: +37.2% for resolved cases with an interview (strong)
Typical Timeline: 4y 7m average prosecution; 58 applications currently pending
Career History: 258 total applications across all art units
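The headline figures above are simple arithmetic on the stated counts. As an illustrative sketch (the variable names are ours, only the numbers come from the report), the career allow rate and the with-interview estimate can be reproduced as:

```python
# Illustrative arithmetic behind the dashboard's examiner figures.
# Only the numeric inputs (93, 200, +37.2) come from the report.
granted = 93
resolved = 200

career_allow_rate = granted / resolved * 100   # 46.5%, displayed rounded as 46%

interview_lift = 37.2  # percentage-point lift reported for cases with an interview
with_interview = career_allow_rate + interview_lift  # ~83.7%, displayed as 84%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"Allow rate with interview: {with_interview:.1f}%")
```

Note the small rounding gap: 93/200 is 46.5%, which the dashboard displays as 46%, and 46.5% plus the 37.2-point lift is 83.7%, displayed as 84%.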

Statute-Specific Performance

§101: 6.7% (-33.3% vs. TC average)
§103: 47.3% (+7.3% vs. TC average)
§102: 15.7% (-24.3% vs. TC average)
§112: 21.3% (-18.7% vs. TC average)

Black line = Tech Center average estimate. Based on career data from 200 resolved cases.
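Each per-statute delta back-solves to the same Tech Center baseline, which is consistent with the chart's single "Tech Center average estimate" line. A minimal sketch (the dict structure is ours; the rates and deltas are the report's):

```python
# Back-solve the implied Tech Center average from each statute's
# reported rate and delta. Numbers are from the report above.
statute_rates = {
    "101": (6.7, -33.3),
    "103": (47.3, +7.3),
    "102": (15.7, -24.3),
    "112": (21.3, -18.7),
}

for statute, (rate, delta_vs_tc) in statute_rates.items():
    implied_tc_avg = rate - delta_vs_tc  # rate = TC avg + delta
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")
```

Every statute implies a 40.0% Tech Center average, so the four deltas are internally consistent with one baseline.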

Office Action

Rejections: §102 and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 41, 43-44, and 51-54 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 6, 11, 14, 15, 19, and 24 of U.S. Patent No. 11,766,296. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the identified claims of the instant application are also recited in the identified claims of the reference patent. The claim correspondence is set out below.

Instant application, claim 41:
(New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking, using a first tracking device from a first line of sight, at least a portion of the tool and a patient marker that is configured to be placed upon the body of the patient, wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display; tracking, using a second tracking device from a second line of sight, at least the portion of the tool and the patient marker, wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person; and using at least one computer processor, generating an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'296 patent, claims 1 and 6:

1. A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking the tool and a patient marker that is placed upon the patient's body from a first line of sight, using a first tracking device that is disposed upon a first head-mounted device that is worn by a first person, the first head-mounted device including a first head-mounted display; tracking the tool and the patient marker, from a second line of sight, using a second tracking device; and using at least one computer processor for: generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image.

6. The method according to claim 1, wherein the second tracking device is disposed upon a second head-mounted device that is worn by a second person.

Instant application, claim 42: (New) The method of Claim 41, wherein the tracking, using the first tracking device from the first line of sight, at least the portion of the tool comprises tracking a tool marker.
'296 patent, claim 2: The method according to claim 1, wherein tracking the tool comprises tracking a tool marker.

Instant application, claim 43: (New) The method of Claim 42, wherein the tracking, using the second tracking device from the second line of sight, at least the portion of the tool comprises tracking the tool marker.
'296 patent, claim 2: The method according to claim 1, wherein tracking the tool comprises tracking a tool marker.

Instant application, claim 44: (New) The method of Claim 41, wherein the second head-mounted device includes a second head-mounted display, and the method further comprises generating a further augmented reality image upon the second head-mounted display.
'296 patent, claim 6: The method according to claim 1, wherein the second tracking device is disposed upon a second head-mounted device that is worn by a second person.

Instant application, claim 41 (repeated, compared with the '296 patent's claim 11):
(New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking, using a first tracking device from a first line of sight, at least a portion of the tool and a patient marker that is configured to be placed upon the body of the patient, wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display; tracking, using a second tracking device from a second line of sight, at least the portion of the tool and the patient marker, wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person; and using at least one computer processor, generating an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'296 patent, claim 11:
A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking the tool and a patient marker that is placed upon the patient's body from a first line of sight, using a first tracking device that is disposed upon a first head-mounted device that is worn by a first person, the first head-mounted device including a first head-mounted display; tracking the tool and the patient marker, from a second line of sight, using a second tracking device; and using at least one computer processor for: generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image.

Instant application, claim 51: (New) A system for use with a tool configured to be placed within a portion of a body of a patient, the system comprising: a patient marker configured to be placed upon the patient's body; a first head-mounted device configured to be worn by a first person, the first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track at least a portion of the tool and the patient marker from a first line of sight; a second head-mounted device configured to be worn by a second person, the second head-mounted device comprising a second tracking device that is configured to track at least the portion of the tool and the patient marker from a second line of sight; and at least one computer processor configured to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'296 patent, claims 14 and 19:
14. Apparatus for use with a tool configured to be placed within a portion of a body of a patient, the apparatus comprising: a first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track the tool and a patient marker from a first line of sight, wherein the patient marker is configured to be placed upon the patient's body; a second tracking device that is configured to track the tool and the patient marker from a second line of sight; and at least one computer processor configured: to generate the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image.

19. The apparatus according to claim 14, wherein the first head-mounted device is configured to be worn by a first person, the apparatus further comprising a second head-mounted device that is configured to be worn by a second person, and wherein the second tracking device is disposed upon the second head-mounted device.

Instant application, claim 52: (New) The system of Claim 51, wherein the tool includes a tool marker, and wherein the first tracking device is configured to track the portion of the tool by tracking the tool marker.
'296 patent, claim 15: The apparatus according to claim 14, wherein the tool includes a tool marker, and wherein the first and second tracking devices are configured to track the tool by tracking the tool marker.

Instant application, claim 53: (New) The system of Claim 52, wherein the second tracking device is configured to track the portion of the tool by tracking the tool marker.
'296 patent, claim 15: The apparatus according to claim 14, wherein the tool includes a tool marker, and wherein the first and second tracking devices are configured to track the tool by tracking the tool marker.

Instant application, claim 54:
(New) The system of Claim 51, wherein the second head-mounted device comprises a second head-mounted display, and wherein the at least one computer processor is further configured to generate a further augmented reality image upon the second head-mounted display.

'296 patent, claim 19: The apparatus according to claim 14, wherein the first head-mounted device is configured to be worn by a first person, the apparatus further comprising a second head-mounted device that is configured to be worn by a second person, and wherein the second tracking device is disposed upon the second head-mounted device.

Instant application, claim 51 (repeated, compared with the '296 patent's claim 24): (New) A system for use with a tool configured to be placed within a portion of a body of a patient, the system comprising: a patient marker configured to be placed upon the patient's body; a first head-mounted device configured to be worn by a first person, the first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track at least a portion of the tool and the patient marker from a first line of sight; a second head-mounted device configured to be worn by a second person, the second head-mounted device comprising a second tracking device that is configured to track at least the portion of the tool and the patient marker from a second line of sight; and at least one computer processor configured to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'296 patent, claim 24:
Apparatus for use with a tool configured to be placed within a portion of a body of a patient, the apparatus comprising: a first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track the tool and a patient marker from a first line of sight, wherein the patient marker is configured to be placed upon the patient's body; a second tracking device that is configured to track the tool and the patient marker from a second line of sight; and at least one computer processor configured: to generate the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image.

Claims 41-45 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5, 14-15, 17, and 18 of U.S. Patent No. 11,980,429 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the identified claims of the instant application are also recited in the identified claims of the reference patent. The claim correspondence is set out below.

Instant application, claim 41:
(New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking, using a first tracking device from a first line of sight, at least a portion of the tool and a patient marker that is configured to be placed upon the body of the patient, wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display; tracking, using a second tracking device from a second line of sight, at least the portion of the tool and the patient marker, wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person; and using at least one computer processor, generating an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'429 patent, claims 1 and 2:

1. A method of generating images for augmented reality surgery using multiple tracking devices, the method comprising: determining a first position of a tool with respect to an anatomy of a patient using data from tracking of both a patient marker and the tool by a first tracking device of a first head-mounted device, wherein the first head-mounted device comprises a first head-mounted display, and wherein the first tracking device comprises a first camera; determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera; generating, by at least one computer processor using the determined first position of the tool with respect to the anatomy of the patient, a first augmented reality image for display by the first head-mounted display, wherein the first augmented reality image comprises a virtual image of the tool aligned with a virtual image of the anatomy of the patient;

2. The method of claim 1, wherein the second tracking device is part of a second head-mounted device, and wherein the second head-mounted device comprises a second head-mounted display.

Instant application, claim 42: (New) The method of Claim 41, wherein the tracking, using the first tracking device from the first line of sight, at least the portion of the tool comprises tracking a tool marker.
'429 patent, claim 5: The method of claim 1, wherein the tracking of the tool comprises tracking a tool marker.

Instant application, claim 43: (New) The method of Claim 42, wherein the tracking, using the second tracking device from the second line of sight, at least the portion of the tool comprises tracking the tool marker.
'429 patent, claim 5: The method of claim 1, wherein the tracking of the tool comprises tracking a tool marker.

Instant application, claim 44: (New) The method of Claim 41, wherein the second head-mounted device includes a second head-mounted display, and the method further comprises generating a further augmented reality image upon the second head-mounted display.
'429 patent, claim 2: The method of claim 1, wherein the second tracking device is part of a second head-mounted device, and wherein the second head-mounted device comprises a second head-mounted display.

Instant application, claim 45: (New) The method of Claim 41, wherein the first tracking device comprises a first camera configured to image at least the portion of the tool and the patient marker.
'429 patent, claim 1: "…wherein the first tracking device comprises a first camera…"

Instant application, claim 46: (New) The method of Claim 45, wherein the second tracking device comprises a second camera configured to image at least the portion of the tool and the patient marker.
'429 patent, claim 1: "…determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera…"

Instant application, claim 41 (repeated, compared with the '429 patent's claim 14):
(New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising: tracking, using a first tracking device from a first line of sight, at least a portion of the tool and a patient marker that is configured to be placed upon the body of the patient, wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display; tracking, using a second tracking device from a second line of sight, at least the portion of the tool and the patient marker, wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person; and using at least one computer processor, generating an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'429 patent, claims 14 and 15:

14. A method of generating images for augmented reality surgery using multiple tracking devices, the method comprising: determining a first position of a first head-mounted device with respect to anatomy of a patient, and a first position of a tool with respect to the anatomy of the patient, using data from tracking of a patient marker and the tool by a first tracking device of the first head-mounted device, wherein the first head-mounted device comprises a first head-mounted display; determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device; generating, by at least one computer processor, for display by the first head-mounted display, a first augmented reality image, the first augmented reality image comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, and the first augmented reality image being aligned with actual patient anatomy based upon the determined first position of the first head-mounted device.

15. The method of claim 14, wherein the second tracking device is part of a second head-mounted device, and wherein the second head-mounted device comprises a second head-mounted display.

Instant application, claim 42: (New) The method of Claim 41, wherein the tracking, using the first tracking device from the first line of sight, at least the portion of the tool comprises tracking a tool marker.
'429 patent, claim 17: The method of claim 14, wherein the tracking of the tool comprises tracking a tool marker.

Instant application, claim 43: (New) The method of Claim 42, wherein the tracking, using the second tracking device from the second line of sight, at least the portion of the tool comprises tracking the tool marker.
'429 patent, claim 17: The method of claim 14, wherein the tracking of the tool comprises tracking a tool marker.

Instant application, claim 44: (New) The method of Claim 41, wherein the second head-mounted device includes a second head-mounted display, and the method further comprises generating a further augmented reality image upon the second head-mounted display.
'429 patent, claim 15: The method of claim 14, wherein the second tracking device is part of a second head-mounted device, and wherein the second head-mounted device comprises a second head-mounted display.

Instant application, claim 45: (New) The method of Claim 41, wherein the first tracking device comprises a first camera configured to image at least the portion of the tool and the patient marker.
'429 patent, claim 18: The method of claim 14, wherein the first tracking device further comprises at least one of a light source and a camera.

Claims 51-56 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5, 13, 15, and 18 of U.S. Patent No. 12,201,384 B2.
Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the identified claims of the instant application are also recited in the identified claims of the reference patent. The claim correspondence is set out below.

Instant application, claim 51: (New) A system for use with a tool configured to be placed within a portion of a body of a patient, the system comprising: a patient marker configured to be placed upon the patient's body; a first head-mounted device configured to be worn by a first person, the first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track at least a portion of the tool and the patient marker from a first line of sight; a second head-mounted device configured to be worn by a second person, the second head-mounted device comprising a second tracking device that is configured to track at least the portion of the tool and the patient marker from a second line of sight; and at least one computer processor configured to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'384 patent, claim 1:
A tracking system for image-guided surgery, the tracking system comprising: a patient marker comprising an array of elements configured to be visible to the first camera of the first tracking device to enable determination of a location and orientation of the first head-mounted device with respect to the patient marker; a first head-mounted device to be worn by a surgeon performing a surgical procedure, the first head-mounted device comprising a first display and a first tracking device that comprises a first camera; a second tracking device that is separate from the first head-mounted device, the second tracking device comprising a second camera, wherein the first camera and the second camera are each configured to capture images of at least a portion of a tool used in the surgical procedure; and at least one computer processor configured to: generate for display by the first display, using the tracked location and orientation of the tool, augmented reality images comprising a virtual image of the tool aligned with a virtual image of anatomy of a patient; generate for display by the first display, using the tracked location and orientation of the tool using the data from the second tracking device, augmented reality images comprising a virtual image of the tool aligned with a virtual image of anatomy of a patient.

Instant application, claim 52: (New) The system of Claim 51, wherein the tool includes a tool marker, and wherein the first tracking device is configured to track the portion of the tool by tracking the tool marker.
'384 patent, claim 5: The tracking system of claim 1, wherein the tool comprises a tool marker, and wherein the first camera and the second camera are each configured to capture images of at least the tool marker of the tool.

Instant application, claim 53: (New) The system of Claim 52, wherein the second tracking device is configured to track the portion of the tool by tracking the tool marker.
'384 patent, claim 5:
The tracking system of claim 1, wherein the tool comprises a tool marker, and wherein the first camera and the second camera are each configured to capture images of at least the tool marker of the tool.

Instant application, claim 54: (New) The system of Claim 51, wherein the second head-mounted device comprises a second head-mounted display, and wherein the at least one computer processor is further configured to generate a further augmented reality image upon the second head-mounted display.
'384 patent, claims 2 and 3: 2. The tracking system of claim 1, further comprising a second head-mounted device, the second head-mounted device comprising a second display and the second tracking device. 3. The tracking system of claim 2, wherein the at least one computer processor is further configured to generate augmented reality images for display by the second display.

Instant application, claim 55: (New) The system of Claim 51, wherein the first tracking device comprises a first camera.
'384 patent, claim 1: "…the first head-mounted device comprising a first display and a first tracking device that comprises a first camera;…"

Instant application, claim 56: (New) The system of Claim 55, wherein the second tracking device comprises a second camera.
'384 patent, claim 1: "…the second tracking device comprising a second camera…"

Instant application, claim 51 (repeated, compared with the '384 patent's claim 13):
(New) A system for use with a tool configured to be placed within a portion of a body of a patient, the system comprising: a patient marker configured to be placed upon the patient's body; a first head-mounted device configured to be worn by a first person, the first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track at least a portion of the tool and the patient marker from a first line of sight; a second head-mounted device configured to be worn by a second person, the second head-mounted device comprising a second tracking device that is configured to track at least the portion of the tool and the patient marker from a second line of sight; and at least one computer processor configured to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.

'384 patent, claim 13:
A tracking system for image-guided surgery, the tracking system comprising: a patient marker comprising an array of elements configured to be visible to the first camera of the first tracking device to enable determination of a location and orientation of the first head-mounted device with respect to the patient marker; a first head-mounted device to be worn by a surgeon performing a surgical procedure, the first head-mounted device comprising a first display and a first tracking device that comprises a first camera; a second tracking device that is separate from the first head-mounted device, the second tracking device comprising a second camera, wherein the first camera and the second camera are each configured to capture images of at least a portion of a tool used in the surgical procedure; and at least one computer processor configured to: generate, for display by the first display, augmented reality images comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, the augmented reality images being aligned with actual patient anatomy based upon the tracking of the location and orientation of the first head-mounted device with respect to the patient marker; and generate, for display by the first display, augmented reality images comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, the augmented reality images being aligned with actual patient anatomy.

Instant application, claim 52: (New) The system of Claim 51, wherein the tool includes a tool marker, and wherein the first tracking device is configured to track the portion of the tool by tracking the tool marker.
'384 patent, claim 18: The tracking system of claim 13, wherein the tool comprises a tool marker, and wherein the first camera and the second camera are each configured to capture images of at least the tool marker of the tool.

Instant application, claim 53:
(New) The system of Claim 52, wherein the second tracking device is configured to track the portion of the tool by tracking the tool marker.

'384 patent, claim 18: The tracking system of claim 13, wherein the tool comprises a tool marker, and wherein the first camera and the second camera are each configured to capture images of at least the tool marker of the tool.

Instant application, claim 54: (New) The system of Claim 51, wherein the second head-mounted device comprises a second head-mounted display, and wherein the at least one computer processor is further configured to generate a further augmented reality image upon the second head-mounted display.
'384 patent, claims 15 and 16: 15. The tracking system of claim 13, further comprising a second head-mounted device, the second head-mounted device comprising a second display and the second tracking device. 16. The tracking system of claim 15, wherein the at least one computer processor is further configured to generate augmented reality images for display by the second display.

Instant application, claim 55: (New) The system of Claim 51, wherein the first tracking device comprises a first camera.
'384 patent, claim 1: "…a first tracking device that comprises a first camera…"

Instant application, claim 56: (New) The system of Claim 55, wherein the second tracking device comprises a second camera.
'384 patent, claim 1: "…a second tracking device that is separate from the first head-mounted device, the second tracking device comprising a second camera…"

Claim Objections

Claim 51 is objected to because of the following informalities: Line 11 of claim 51 should be amended to include a colon as such: ---at least one computer processor configured to: generate---. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 41-60 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lang, P. K., US 20170258526 A1. Regarding claim 41, Lang teaches a method for use with a tool configured to be placed within a portion of a body of a patient ([0004] states that “Aspects of the invention provides, among other things, for a simultaneous visualization of live data of the patient, e.g. a patient's spine or joint, and digital representations of virtual data such as virtual cuts and/or virtual surgical guides including cut blocks or drilling guides through an optical head mounted display (OHMD). In some embodiments, the surgical site including live data of the patient, the OHMD, and the virtual data are registered in a common coordinate system. In some embodiments, the virtual data are superimposed onto and aligned with the live data of the patient.”), the method comprising: tracking, using a first tracking device (optical head mounted display (OHMD) 1 of [0147]) from a first line of sight ([0119] describes generation of the projection information onto the OHMD 1 along a viewpoint and view direction), at least a portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. 
infrared or RF markers”) and a patient marker that is configured to be placed upon the body of the patient ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”), wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display (OHMD 1 is worn by a surgeon, surgical assistant and/or nurses according to [0147]); tracking, using a second tracking device (OHMD 2 of [0147]) from a second line of sight ([0119] describes generation of the projection information onto the OHMD 2 along a viewpoint and view direction), at least the portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. 
infrared or RF markers”) and the patient marker ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”), wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person (OHMD 2 is worn by a surgeon, surgical assistant and/or nurses other than the primary wearer of OHMD 1 according to [0147]); and using at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]), generating an augmented reality image upon the first head-mounted display ([0147] discloses generating a shared digital holographic experience of the surgical scene on the OHMDs), based upon data received from the first tracking device in combination with data received from the second tracking device ([0116]-[0117] describe registering the individual coordinates of different objects to be displayed (display information) to a common global coordinate system, and [0119] describes projecting the display information onto the display plane using the different viewpoints and view directions and updating the projections with updated viewpoints and view directions. This means that the shared holographic experience 30 of [0147] includes a combination of the viewpoints and view directions of the OHMD 1 and OHMD 2. Of note, [0147] discloses additional OHMDs), the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body ([0004] states that “the virtual data are superimposed onto and aligned with the live data of the patient. Unlike virtual reality head systems that blend out live data, the OHMD allows the surgeon to see the live data of the patient, e.g.
the surgical field, while at the same time observing virtual data of the patient and/or virtual surgical instruments or implants with a predetermined position and/or orientation using the display of the OHMD unit”). Regarding claim 42, Lang further teaches wherein the tracking, using the first tracking device from the first line of sight, at least the portion of the tool comprises tracking a tool marker ([0071] states that “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”). Regarding claim 43, Lang further teaches wherein the tracking, using the second tracking device from the second line of sight, at least the portion of the tool comprises tracking the tool marker ([0071] states that “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”). Regarding claim 44, Lang further teaches wherein the second head-mounted device includes a second head-mounted display (OHMD 2 of [0147]), and the method further comprises generating a further augmented reality image upon the second head-mounted display ([0147] states that “The OHMD's 11, 12, 13, 14 can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30”). 
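For readers tracing the technical mechanism behind the [0116]-[0119] citations, the registration scheme — each tracked object's coordinates expressed in one common global coordinate system and then re-expressed for each headset's viewpoint — reduces to composing rigid transforms. This is an illustrative sketch with hypothetical names and values, not code from the Lang reference or the application:

```python
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_view(point_global, T_world_from_hmd):
    """Re-express a point from the common (global) frame in one headset's frame,
    mirroring the register-globally-then-project-per-viewpoint scheme."""
    p = np.append(point_global, 1.0)                  # homogeneous coordinates
    return (np.linalg.inv(T_world_from_hmd) @ p)[:3]

# Headset 1 at the global origin; headset 2 offset 100 mm along x (hypothetical).
T1 = rigid_transform(np.eye(3), np.zeros(3))
T2 = rigid_transform(np.eye(3), np.array([100.0, 0.0, 0.0]))

tool_tip = np.array([10.0, 0.0, 500.0])   # one shared position in the global frame
view1 = to_view(tool_tip, T1)             # [10, 0, 500] in headset 1's frame
view2 = to_view(tool_tip, T2)             # [-90, 0, 500] in headset 2's frame
```

Because both headsets share one global frame, the same physical point yields consistent, viewpoint-specific coordinates for each display.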
Regarding claim 45, Lang further teaches wherein the first tracking device comprises a first camera ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)) configured to image at least the portion of the tool and the patient marker ([0032] states that “one or more cameras or image capture or video capture systems included in the optical head mounted display detect one or more optical markers including their coordinates (x,y,z) and at least one or more of a position, orientation, alignment, direction of movement or speed of movement of the one or more optical markers”). Regarding claim 46, Lang further teaches wherein the second tracking device ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)) comprises a second camera configured to image at least the portion of the tool and the patient marker ([0032] states that “one or more cameras or image capture or video capture systems included in the optical head mounted display detect one or more optical markers including their coordinates (x,y,z) and at least one or more of a position, orientation, alignment, direction of movement or speed of movement of the one or more optical markers”). Regarding claim 47, Lang further teaches wherein generating the augmented reality image upon the first head-mounted display comprises: determining a position of the tool with respect to the patient's anatomy using data received from the first tracking device in combination with data received from the second tracking device ([0147], [0253] disclose a spatial mapping registration process, stating that “Live data, e.g.
live data of the patient, the position and/or orientation of a physical instrument, the position and/or orientation of an implant component, the position and/or orientation of one or more OHMD's, can be acquired or registered”) ; generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display, based upon the determined position of the tool with respect to the patient's anatomy ([0253] then states “This process creates a three-dimensional mesh describing the surfaces of one or more objects or environmental structures using, for example and without limitation, a depth sensor, laser scanner, structured light sensor, time of flight sensor, infrared sensor, or tracked probe. These devices can generate 3D surface data by collecting, for example, 3D coordinate information or information on the distance from the sensor of one or more surface points on the one or more objects or environmental structures. The 3D surface points can then be connected to 3D surface meshes, resulting in a three-dimensional surface representation of the live data. The surface mesh can then be merged with the virtual data using any of the registration techniques described in the specification”); determining a position of the patient's body with respect to the first head-mounted device based upon data received from the first tracking device ([0146] states that “Once the virtual data and the live data of the patient and the OHMD are registered in the same coordinate system, e.g. 
using IMUs, optical markers, navigation markers including infrared markers, retroreflective markers, RF markers, and any other registration method described in the specification or known in the art, any change in position of any of the OHMD in relationship to the patient measured in this fashion can be used to move virtual data of the patient in relationship to live data of the patient, so that the visual image of the virtual data of the patient and the live data of the patient seen through the OHDM are always aligned, irrespective of movement of the OHMD and/or the operator's head and/or the operator wearing the OHMD”); and overlaying the virtual image upon the patient's body, based upon the determined position of the patient's body with respect to the first head-mounted device ([0146] further states “The position, orientation, alignment, and change in position, orientation and alignment in relationship to the patient and/or the surgical site of each additional OHMD can be individually monitored thereby maintaining alignment and/or superimposition of corresponding structures in the live data of the patient and the virtual data of the patient for each additional OHMD irrespective of their position, orientation, and/or alignment in relationship to the patient and/or the surgical site”). Regarding claim 48, Lang further teaches wherein determining the position of the tool with respect to the patient's anatomy using data received from the first tracking device in combination with data received from the second tracking device comprises determining an average position of the tool with respect to the patient's anatomy ([1184] states that “the information from two or more cameras can be merged by averaging the 3D coordinates or detected surface points or other geometric structures such as planes or curved surfaces”). 
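The coordinate-merging step cited from [1184] for claim 48 — merging information from two or more cameras by averaging 3D coordinates — reduces to an element-wise mean. A minimal illustrative sketch with hypothetical names and values (not from the record):

```python
import numpy as np

def merge_tool_position(estimates):
    """Merge per-tracker 3D position estimates of a tool tip by averaging,
    in the spirit of [1184] (coordinates from two or more cameras merged
    by averaging). `estimates` holds (x, y, z) triples, each already
    expressed in the same common coordinate system."""
    pts = np.asarray(estimates, dtype=float)   # shape (n_trackers, 3)
    return pts.mean(axis=0)                    # element-wise average

# Two head-mounted trackers report slightly different tool-tip positions (mm):
first_hmd  = (10.2, -3.1, 55.0)
second_hmd = (10.6, -2.9, 54.6)
merged = merge_tool_position([first_hmd, second_hmd])   # ≈ [10.4, -3.0, 54.8]
```

Averaging assumes both estimates are already registered to the common frame; without that prior registration step the mean of two frame-inconsistent coordinates would be meaningless.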
Regarding claim 49, Lang further teaches wherein determining the position of the patient's body with respect to the first head-mounted device based upon data received from the first tracking device comprises analyzing one or more images of the patient marker created by a camera of the first tracking device ([0253] states that “Live data, e.g. live data of the patient, the position and/or orientation of a physical instrument, the position and/or orientation of an implant component, the position and/or orientation of one or more OHMD's, can be acquired or registered, for example, using a spatial mapping process. This process creates a three-dimensional mesh describing the surfaces of one or more objects or environmental structures using, for example and without limitation, a depth sensor, laser scanner, structured light sensor, time of flight sensor, infrared sensor, or tracked probe”, and [0270] states that “Image processing and/or pattern recognition of the live data of the patient can then be performed through the OHMD, e.g. using a built in image capture apparatus for capturing the live data of the patient or image and/or video capture systems attached to, integrated with or coupled to the OHMD”). Regarding claim 50, Lang further teaches wherein the one or more images of the patient marker comprise data that is sufficient to determine the position of the patient's body with respect to the first head-mounted device without requiring use of triangulation techniques ([0270] describes image processing of the live data of the patient to detect the patient’s position and orientation ([0253]), which does not involve triangulation). Regarding claim 51, Lang teaches a system for use with a tool configured to be placed within a portion of a body of a patient ([0147] states that “Referring to FIG. 1, a system 10 for using multiple OHMD's 11, 12, 13, 14 for multiple viewer's, e.g.
a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) is shown”), the system comprising: a patient marker configured to be placed upon the patient's body ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”); a first head-mounted device configured to be worn by a first person, the first head-mounted device comprising a first head-mounted display (optical head mounted display (OHMD) 1 worn by a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) of [0147]), and a first tracking device ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)) that is configured to track at least a portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g.
infrared or RF markers”) and the patient marker ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”) from a first line of sight ([0119] describes generation of the projection information onto the OHMD 1 along a viewpoint and view direction); a second head-mounted device configured to be worn by a second person (OHMD 2 worn by a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) of [0147]), the second head-mounted device comprising a second tracking device ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)) that is configured to track at least the portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g.
infrared or RF markers”) and the patient marker ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”) from a second line of sight ([0119] describes generation of the projection information onto the OHMD 2 along a viewpoint and view direction); and at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]) configured to generate an augmented reality image upon the first head-mounted display ([0147] discloses generating a shared digital holographic experience of the surgical scene on the OHMDs), based upon data received from the first tracking device in combination with data received from the second tracking device ([0116]-[0117] describe registering the individual coordinates of different objects to be displayed (display information) to a common global coordinate system, and [0119] describes projecting the display information onto the display plane using the different viewpoints and view directions and updating the projections with updated viewpoints and view directions. This means that the shared holographic experience 30 of [0147] includes a combination of the viewpoints and view directions of the OHMD 1 and OHMD 2. Of note, [0147] discloses additional OHMDs), the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body ([0004] states that “the virtual data are superimposed onto and aligned with the live data of the patient. Unlike virtual reality head systems that blend out live data, the OHMD allows the surgeon to see the live data of the patient, e.g. the surgical field, while at the same time observing virtual data of the patient and/or virtual surgical instruments or implants with a predetermined position and/or orientation using the display of the OHMD unit”).
Regarding claim 52, Lang further teaches wherein the tool includes a tool marker, and wherein the first tracking device is configured to track the portion of the tool by tracking the tool marker ([0071] states that “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”). Regarding claim 53, Lang further teaches wherein the second tracking device is configured to track the portion of the tool by tracking the tool marker ([0071] states that “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”). Regarding claim 54, Lang further teaches wherein the second head-mounted device comprises a second head-mounted display (OHMD 2 of [0147]), and wherein the at least one computer processor is further configured to generate a further augmented reality image upon the second head-mounted display ([0147] states that “The OHMD's 11, 12, 13, 14 can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30”). Regarding claim 55, Lang further teaches wherein the first tracking device comprises a first camera ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)). Regarding claim 56, Lang further teaches wherein the second tracking device comprises a second camera ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)).
Regarding claim 57, Lang further teaches wherein the at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]) is configured to generate the augmented reality image upon the first head-mounted display ([0147] discloses generating shared digital holographic experience of the surgical scene on the OHMDs) by: determining a position of the tool with respect to the patient's anatomy using data received from the first tracking device in combination with data received from the second tracking device ([0147], [0253] disclose a spatial mapping registration process, stating that “Live data, e.g. live data of the patient, the position and/or orientation of a physical instrument, the position and/or orientation of an implant component, the position and/or orientation of one or more OHMD's, can be acquired or registered”) ; generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display, based upon the determined position of the tool with respect to the patient's anatomy ([0253] then states “This process creates a three-dimensional mesh describing the surfaces of one or more objects or environmental structures using, for example and without limitation, a depth sensor, laser scanner, structured light sensor, time of flight sensor, infrared sensor, or tracked probe. These devices can generate 3D surface data by collecting, for example, 3D coordinate information or information on the distance from the sensor of one or more surface points on the one or more objects or environmental structures. The 3D surface points can then be connected to 3D surface meshes, resulting in a three-dimensional surface representation of the live data. 
The surface mesh can then be merged with the virtual data using any of the registration techniques described in the specification”); determining a position of the patient's body with respect to the first head-mounted device based upon data received from the first tracking device ([0146] states that “Once the virtual data and the live data of the patient and the OHMD are registered in the same coordinate system, e.g. using IMUs, optical markers, navigation markers including infrared markers, retroreflective markers, RF markers, and any other registration method described in the specification or known in the art, any change in position of any of the OHMD in relationship to the patient measured in this fashion can be used to move virtual data of the patient in relationship to live data of the patient, so that the visual image of the virtual data of the patient and the live data of the patient seen through the OHDM are always aligned, irrespective of movement of the OHMD and/or the operator's head and/or the operator wearing the OHMD”); and overlaying the virtual image upon the patient's body, based upon the determined position of the patient's body with respect to the first head-mounted device ([0146] further states “The position, orientation, alignment, and change in position, orientation and alignment in relationship to the patient and/or the surgical site of each additional OHMD can be individually monitored thereby maintaining alignment and/or superimposition of corresponding structures in the live data of the patient and the virtual data of the patient for each additional OHMD irrespective of their position, orientation, and/or alignment in relationship to the patient and/or the surgical site”). 
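The alignment-maintenance behavior quoted from [0146] — moving the virtual data whenever an OHMD moves relative to the patient, so overlay and live anatomy stay registered — amounts to recomputing the patient-to-headset transform every frame. A hypothetical sketch (not from the record), using pure translations for simplicity:

```python
import numpy as np

def translation(t):
    """4x4 homogeneous transform that only translates (rotation = identity)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def patient_in_hmd_frame(T_world_from_hmd, T_world_from_patient):
    """Pose of the patient expressed in the headset frame; recomputed each
    frame so the virtual overlay tracks the live anatomy as the head moves."""
    return np.linalg.inv(T_world_from_hmd) @ T_world_from_patient

patient = translation([0.0, 0.0, 600.0])   # patient 600 mm ahead in world frame

# Frame 1: headset at the world origin. Frame 2: wearer steps 50 mm right.
frame1 = patient_in_hmd_frame(translation([0.0, 0.0, 0.0]), patient)
frame2 = patient_in_hmd_frame(translation([50.0, 0.0, 0.0]), patient)
# The overlay shifts left in the display as the head moves right:
# frame1 translation -> [0, 0, 600]; frame2 translation -> [-50, 0, 600]
```

Per-frame recomputation is what makes the overlay independent of headset motion: only the relative transform matters, not either absolute pose.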
Regarding claim 58, Lang further teaches wherein determining the position of the tool with respect to the patient's anatomy using data received from the first tracking device in combination with data received from the second tracking device comprises determining an average position of the tool with respect to the patient's anatomy ([1184] states that “the information from two or more cameras can be merged by averaging the 3D coordinates or detected surface points or other geometric structures such as planes or curved surfaces”). Regarding claim 59, Lang further teaches wherein determining the position of the patient's body with respect to the first head-mounted device based upon data received from the first tracking device comprises analyzing one or more images of the patient marker created by a camera of the first tracking device ([0253] states that “Live data, e.g. live data of the patient, the position and/or orientation of a physical instrument, the position and/or orientation of an implant component, the position and/or orientation of one or more OHMD's, can be acquired or registered, for example, using a spatial mapping process. This process creates a three-dimensional mesh describing the surfaces of one or more objects or environmental structures using, for example and without limitation, a depth sensor, laser scanner, structured light sensor, time of flight sensor, infrared sensor, or tracked probe”, and [0270] states that “Image processing and/or pattern recognition of the live data of the patient can then be performed through the OHMD, e.g. using a built in image capture apparatus for capturing the live data of the patient or image and/or video capture systems attached to, integrated with or coupled to the OHMD”). 
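Claims 49-50 turn on recovering the patient's position from marker images captured by a single tracking device, without triangulation. Under a pinhole-camera assumption, one view of a marker of known physical size already fixes its depth. An illustrative sketch with hypothetical numbers (not from the record):

```python
def marker_distance_mm(focal_px, marker_width_mm, marker_width_px):
    """Distance to a fiducial marker from ONE image via the pinhole model:
    Z = f * W / w. A single view of a marker with known geometry constrains
    depth, so no second camera or triangulation is required."""
    return focal_px * marker_width_mm / marker_width_px

# Hypothetical values: 1000 px focal length, 50 mm-wide marker seen at 100 px.
d = marker_distance_mm(focal_px=1000.0, marker_width_mm=50.0,
                       marker_width_px=100.0)   # 500.0 mm working distance
```

Full 6-DoF pose recovery from a planar marker generalizes this idea (e.g., perspective-n-point solvers), but the depth relation above is the core reason one calibrated camera suffices.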
Regarding claim 60, Lang further teaches wherein the one or more images of the patient marker comprise data that is sufficient to determine the position of the patient's body with respect to the first head-mounted device without requiring use of triangulation techniques ([0270] describes image processing of the live data of the patient to detect the patient’s position and orientation ([0253]), which does not involve triangulation).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Farouk A Bruce whose telephone number is (408)918-7603. The examiner can normally be reached Mon-Fri 8-5pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAROUK A BRUCE/
Examiner, Art Unit 3797

Prosecution Timeline

Dec 12, 2024
Application Filed
Jan 16, 2025
Response after Non-Final Action
Jan 22, 2025
Response after Non-Final Action
Jan 16, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589199
APPARATUS FOR INCREASED DYE FLOW
2y 5m to grant Granted Mar 31, 2026
Patent 12569227
ULTRASOUND BEAMFORMER-BASED CHANNEL DATA COMPRESSION
2y 5m to grant Granted Mar 10, 2026
Patent 12558030
Device for Detecting and Illuminating the Vasculature Using an FPGA
2y 5m to grant Granted Feb 24, 2026
Patent 12551173
SYSTEM AND METHOD FOR SINGLE-SCAN REST-STRESS CARDIAC PET
2y 5m to grant Granted Feb 17, 2026
Patent 12521053
Methods and Devices for Electromagnetic Measurements from Ear Cavity
2y 5m to grant Granted Jan 13, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
46%
Grant Probability
84%
With Interview (+37.2%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 200 resolved cases by this examiner. Grant probability derived from career allow rate.
