DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 36-39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-22, 24-26, 30, 32, 35, and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 11, and 14 of U.S. Patent No. 11766296. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-identified claims of the instant application are also recited in the above-identified claims of U.S. Patent No. 11766296.
Instant Application
U.S. Patent No. 11766296
21. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, with a first tracking device associated with a first line of sight, at least a portion of the tool and a patient marker that is placed upon the body of the patient;
tracking, with a second tracking device associated with a second line of sight, the at least the portion of the tool and the patient marker; and
using at least one computer processor:
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
1. A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking the tool and a patient marker that is placed upon the patient's body from a first line of sight, using a first tracking device that is disposed upon a first head-mounted device that is worn by a first person, the first head-mounted device including a first head-mounted display;
tracking the tool and the patient marker, from a second line of sight, using a second tracking device; and using at least one computer processor for:
generating an augmented reality image upon the first head-mounted display based upon data received from the first tracking device and without using data from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body;
in response to detecting that the first tracking device no longer has both the patient marker and the tool within the first line of sight,
generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image
22. (New) The method of Claim 21, wherein tracking the at least the portion of the tool and the patient marker with the second tracking device comprises tracking the at least the portion of the tool and the patient marker with the second tracking device while the second tracking device is disposed in a stationary position.
3. The method according to claim 1, wherein tracking the tool and the patient marker, from the second line of sight, using the second tracking device, comprises tracking the tool and the patient marker from the second line of sight, using the second tracking device disposed in a stationary position.
24. (New) The method of Claim 21, further comprising, using the at least one computer processor, generating a further augmented reality image upon a second display.
6. The method according to claim 1, wherein the second tracking device is disposed upon a second head-mounted device that is worn by a second person.
7. The method according to claim 6, further comprising generating a further augmented-reality image upon a second head-mounted display of the second head-mounted device.
25. (New) The method of Claim 21, wherein an anatomical portion of the patient is visible through a portion of the first display.
10. The method according to claim 1, further comprising generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display when at least the patient marker or the tool is within the first line of sight such that the virtual image of the tool and anatomy of the patient is aligned with actual patient anatomy based upon data received from the first tracking device.
26. (New) The method of Claim 21, wherein generating the augmented reality image upon the first display comprises: upon determining that the first line of sight between the first tracking device and the patient marker is at least partially blocked, filling substantially a whole of the first display with the augmented reality image based on data from the second tracking device.
1. … in response to detecting that the first tracking device no longer has both the patient marker and the tool within the first line of sight, generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image…
30. (New) The method of Claim 21, wherein the augmented reality image includes a virtual image of the tool and anatomy of the patient, overlaid upon the body of the patient.
1. … the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body…
32. (New) The method of Claim 21, wherein the portion of the tool comprises a tool marker.
2. The method according to claim 1, wherein tracking the tool comprises tracking a tool marker.
35. (New) The method of Claim 21, wherein generating the augmented reality image at least partially based upon data received from the second tracking device comprises, in response to the at least the portion of the tool and the patient marker both not being within the first line of sight, using the at least one computer processor: determining a position of the tool with respect to an anatomy of the patient using data received from the second tracking device; and generating a virtual image of the tool and the anatomy of the patient upon the first display, based upon the position of the tool with respect to the anatomy of the patient.
1. … in response to detecting that the first tracking device no longer has both the patient marker and the tool within the first line of sight, generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image…determining a position of the tool with respect to the anatomy of the patient using data received from the second tracking device; and generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display, based upon the determined position of the tool with respect to the anatomy of the patient.
36. (New) The method of Claim 21, wherein generating the augmented reality image at least partially based upon data received from the second tracking device comprises, in response to the at least the portion of the tool being within the first line of sight, and at least a portion of the patient marker not being within the first line of sight, using the at least one computer processor: determining a position of the tool with respect to an anatomy of the patient using data received from the second tracking device; and generating a virtual image of the tool and the anatomy of the patient upon the first display, based upon the position of the tool with respect to the anatomy of the patient.
1. …in response to detecting a portion of the tool being within the first line of sight, and a portion of the patient marker not being within the first line of sight: determining a position of the tool with respect to the anatomy of the patient using data received from the second tracking device; and generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display, based upon the determined position of the tool with respect to the anatomy of the patient.
21. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, with a first tracking device associated with a first line of sight, at least a portion of the tool and a patient marker that is placed upon the body of the patient;
tracking, with a second tracking device associated with a second line of sight, the at least the portion of the tool and the patient marker; and
using at least one computer processor:
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
11. A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking the tool and a patient marker that is placed upon the patient's body from a first line of sight, using a first tracking device that is disposed upon a first head-mounted device that is worn by a first person, the first head-mounted device including a first head-mounted display;
tracking the tool and the patient marker, from a second line of sight, using a second tracking device; and using at least one computer processor for:
generating an augmented reality image upon the first head-mounted display based upon data received from the first tracking device and without using data from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body; in response to detecting that the first tracking device no longer has both the patient marker and the tool within the first line of sight, generating the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image.
40. (New) A system tracking a tool configured to be placed within a portion of a body of a patient, the system comprising:
a first tracking device associated with a first line of sight, the first tracking device configured to track at least a portion of the tool and a patient marker that is placed upon the body of the patient;
a second tracking device associated with a second line of sight, the second tracking device configured to track at least the portion of the tool and the patient marker; and
at least one computer processor configured to: when the at least the portion of the tool and the patient marker are both within the first line of sight, generate an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generate the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
14. Apparatus for use with a tool configured to be placed within a portion of a body of a patient, the apparatus comprising:
a first head-mounted device comprising a first head-mounted display, and a first tracking device that is configured to track the tool and a patient marker from a first line of sight, wherein the patient marker is configured to be placed upon the patient's body;
a second tracking device that is configured to track the tool and the patient marker from a second line of sight; and
at least one computer processor configured: to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device and without using data from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body;
in response to detecting that the first tracking device no longer has both the patient marker and the tool within the first line of sight, to generate the virtual image of the tool and anatomy of the patient upon the first head-mounted display from the first line of sight of the first tracking device worn by the first person, by incorporating data received from the second tracking device with respect to a position of the tool in the body into the virtual image;
Claims 21-25, 28, 32, 34, and 36-37 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 4-5, 8-9, 14, and 16-17 of U.S. Patent No. 11980429. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-identified claims of the instant application are also recited in the above-identified claims of U.S. Patent No. 11980429.
Instant Application
U.S. Patent No. 11980429
21. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, with a first tracking device associated with a first line of sight, at least a portion of the tool and a patient marker that is placed upon the body of the patient;
tracking, with a second tracking device associated with a second line of sight, the at least the portion of the tool and the patient marker; and
using at least one computer processor:
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
1. A method of generating images for augmented reality surgery using multiple tracking devices, the method comprising:
determining a first position of a tool with respect to an anatomy of a patient using data from tracking of both a patient marker and the tool by a first tracking device of a first head-mounted device,
determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera;
generating, by at least one computer processor using the determined first position of the tool with respect to the anatomy of the patient, a first augmented reality image for display by the first head-mounted display, wherein the first augmented reality image comprises a virtual image of the tool aligned with a virtual image of the anatomy of the patient;
detecting at least one of (i) a line of sight between the first tracking device and the patient marker is blocked and a line of sight between the first tracking device and the tool is not blocked, or (ii) a line of sight between the first tracking device and the tool is blocked and a line of sight between the first tracking device and the patient marker is not blocked; in response to the detecting at least one of (i) the line of sight between the first tracking device and the patient marker is blocked and the line of sight between the first tracking device and the tool is not blocked, or (ii) the line of sight between the first tracking device and the tool is blocked and the line of sight between the first tracking device and the patient marker is not blocked,
a second augmented reality image for display by the first head-mounted display, wherein the second augmented reality image comprises a virtual image of the tool aligned with a virtual image of the anatomy of the patient.
22. (New) The method of Claim 21, wherein tracking the at least the portion of the tool and the patient marker with the second tracking device comprises tracking the at least the portion of the tool and the patient marker with the second tracking device while the second tracking device is disposed in a stationary position.
4. The method of claim 1, wherein the second tracking device is disposed in a stationary position.
23. (New) The method of Claim 21, wherein the first tracking device comprises a first camera, and wherein the second tracking device comprises a second camera.
1. …wherein the first tracking device comprises a first camera;… a second tracking device that comprises a second camera…
24. (New) The method of Claim 21, further comprising, using the at least one computer processor, generating a further augmented reality image upon a second display.
2. The method of claim 1, wherein the second tracking device is part of a second head-mounted device, and wherein the second head-mounted device comprises a second head-mounted display.
25. (New) The method of Claim 21, wherein an anatomical portion of the patient is visible through a portion of the first display.
9. The method of claim 1, further comprising displaying the first augmented reality image by the first head-mounted display, with the virtual image of the anatomy of the patient of the first augmented reality image being aligned with actual patient anatomy based upon data from tracking of at least the patient marker by the first tracking device.
28. (New) The method of Claim 21, wherein the at least one computer processor is disposed externally to the first tracking device.
8. The method of claim 1, wherein the at least one computer processor is disposed externally to the first head-mounted device.
32. (New) The method of Claim 21, wherein the portion of the tool comprises a tool marker.
5. The method of claim 1, wherein the tracking of the tool comprises tracking a tool marker.
34. (New) The method of Claim 21, further comprising, in response to detecting that the first tracking device has lost its first line of sight of the tool, determining a location of the tool relative to the body of the patient using data received from the second tracking device.
1. …determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera…
36. (New) The method of Claim 21, wherein generating the augmented reality image at least partially based upon data received from the second tracking device comprises, in response to the at least the portion of the tool being within the first line of sight, and at least a portion of the patient marker not being within the first line of sight, using the at least one computer processor: determining a position of the tool with respect to an anatomy of the patient using data received from the second tracking device; and generating a virtual image of the tool and the anatomy of the patient upon the first display, based upon the position of the tool with respect to the anatomy of the patient.
1. …in response to the detecting at least one of (i) the line of sight between the first tracking device and the patient marker is blocked and the line of sight between the first tracking device and the tool is not blocked, or (ii) the line of sight between the first tracking device and the tool is blocked and the line of sight between the first tracking device and the patient marker is not blocked, determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera
37. (New) The method of Claim 21, wherein generating the augmented reality image at least partially based upon data received from the second tracking device comprises, in response to at least a portion of the patient marker being within the first line of sight, and the at least the portion of the tool not being within the first line of sight, using the at least one computer processor: determining a position of the tool with respect to an anatomy of the patient using data received from the second tracking device; generating a virtual image of the tool and the anatomy of the patient upon the first display, based upon the position of the tool with respect to the anatomy of the patient; determining a position of the body of the patient with respect to the first tracking device, based upon data received from the first tracking device; and overlaying the virtual image of the tool and the anatomy of the patient upon the body of the patient, based upon the position of the body of the patient with respect to the first tracking device.
1. …in response to the detecting at least one of (i) the line of sight between the first tracking device and the patient marker is blocked and the line of sight between the first tracking device and the tool is not blocked, or (ii) the line of sight between the first tracking device and the tool is blocked and the line of sight between the first tracking device and the patient marker is not blocked, determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device that comprises a second camera
21. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, with a first tracking device associated with a first line of sight, at least a portion of the tool and a patient marker that is placed upon the body of the patient;
tracking, with a second tracking device associated with a second line of sight, the at least the portion of the tool and the patient marker; and
using at least one computer processor:
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
14. A method of generating images for augmented reality surgery using multiple tracking devices, the method comprising:
determining a first position of a first head-mounted device with respect to anatomy of a patient, and a first position of a tool with respect to the anatomy of the patient, using data from tracking of a patient marker and the tool by a first tracking device of the first head-mounted device,
generating, by at least one computer processor, for display by the first head-mounted display, a first augmented reality image, the first augmented reality image comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, and the first augmented reality image being aligned with actual patient anatomy based upon the determined first position of the first head-mounted device;
detecting that a line of sight between the first tracking device and the patient marker is blocked and a line of sight between the first tracking device and the tool is not blocked; in response to the detecting that the line of sight between the first tracking device and the patient marker is blocked and the line of sight between the first tracking device and the tool is not blocked, determining a second position of the tool with respect to the anatomy of the patient using data from a second tracking device; and generating, by the at least one computer processor, for display by the first head-mounted display, a second augmented reality image, the second augmented reality image comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, and the second augmented reality image being aligned with actual patient anatomy.
22. (New) The method of Claim 21, wherein tracking the at least the portion of the tool and the patient marker with the second tracking device comprises tracking the at least the portion of the tool and the patient marker with the second tracking device while the second tracking device is disposed in a stationary position.
16. The method of claim 14, wherein the second tracking device is disposed in a stationary position.
32. (New) The method of Claim 21, wherein the portion of the tool comprises a tool marker.
17. The method of claim 14, wherein the tracking of the tool comprises tracking a tool marker.
Claim 40 is rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 13 of U.S. Patent No. 12201384. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-identified claim of the instant application are also recited in the above-identified claims of U.S. Patent No. 12201384.
Instant Application
U.S. Patent No. 12201384
40. (New) A system tracking a tool configured to be placed within a portion of a body of a patient, the system comprising:
a first tracking device associated with a first line of sight, the first tracking device configured to track at least a portion of the tool and a patient marker that is placed upon the body of the patient;
a second tracking device associated with a second line of sight, the second tracking device configured to track at least the portion of the tool and the patient marker; and
at least one computer processor configured to: when the at least the portion of the tool and the patient marker are both within the first line of sight, generate an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generate the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
1. A tracking system for image-guided surgery, the tracking system comprising:
a first head-mounted device to be worn by a surgeon performing a surgical procedure, the first head-mounted device comprising a first display and a first tracking device that comprises a first camera;
a second tracking device that is separate from the first head-mounted device, the second tracking device comprising a second camera, wherein the first camera and the second camera are each configured to capture images of at least a portion of a tool used in the surgical procedure;
and at least one computer processor configured to: track a location and orientation of the tool with respect to the patient marker using data from the first tracking device; generate for display by the first display, using the tracked location and orientation of the tool, augmented reality images comprising a virtual image of the tool aligned with a virtual image of anatomy of a patient;
detect that at least one of (i) a line of sight between the first tracking device and the patient marker is blocked and a line of sight between the first tracking device and the tool is not blocked, or (ii) a line of sight between the first tracking device and the tool is blocked and a line of sight between the first tracking device and the patient marker is not blocked; and in response to the detection: track the location and orientation of the tool with respect to the patient marker using data from the second tracking device; and generate for display by the first display, using the tracked location and orientation of the tool using the data from the second tracking device, augmented reality images comprising a virtual image of the tool aligned with a virtual image of anatomy of a patient.
40. (New) A system tracking a tool configured to be placed within a portion of a body of a patient, the system comprising:
a first tracking device associated with a first line of sight, the first tracking device configured to track at least a portion of the tool and a patient marker that is placed upon the body of the patient;
a second tracking device associated with a second line of sight, the second tracking device configured to track at least the portion of the tool and the patient marker; and
at least one computer processor configured to: when the at least the portion of the tool and the patient marker are both within the first line of sight, generate an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
when the at least the portion of the tool and the patient marker are not both within the first line of sight, generate the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
13. A tracking system for image-guided surgery, the tracking system comprising:
a first head-mounted device to be worn by a surgeon performing a surgical procedure, the first head-mounted device comprising a first display and a first tracking device that comprises a first camera;
a second tracking device that is separate from the first head-mounted device, the second tracking device comprising a second camera, wherein the first camera and the second camera are each configured to capture images of at least a portion of a tool used in the surgical procedure; and
at least one computer processor configured to: track a location and orientation of the first head-mounted device with respect to the patient marker using data from the first tracking device; track a location and orientation of the tool with respect to the patient marker using data from the first tracking device; generate, for display by the first display, augmented reality images comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, the augmented reality images being aligned with actual patient anatomy based upon the tracking of the location and orientation of the first head-mounted device with respect to the patient marker;
detect that the line of sight between the first tracking device and the patient marker is blocked and the line of sight between the first tracking device and the tool is not blocked; and in response to the detection: track the location and orientation of the tool with respect to the patient marker using data from the second tracking device; and generate, for display by the first display, augmented reality images comprising a virtual image of the tool aligned with a virtual image of the anatomy of the patient, the augmented reality images being aligned with actual patient anatomy.
Claims 21, 23-24, 30, 32, and 40 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 41-42, 44-45, and 51 of Copending Application No. 18978615 (reference application) in view of Lang, P. K., US 20170258526 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-noted claims of the instant application are also recited in the above-noted claims of the copending application, as shown in the comparison below.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Instant Application
Copending Application No. 18978615
21. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, with a first tracking device associated with a first line of sight, at least a portion of the tool and a patient marker that is placed upon the body of the patient;
tracking, with a second tracking device associated with a second line of sight, the at least the portion of the tool and the patient marker; and
using at least one computer processor:
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device; and
41. (New) A method for use with a tool configured to be placed within a portion of a body of a patient, the method comprising:
tracking, using a first tracking device from a first line of sight, at least a portion of the tool and a patient marker that is configured to be placed upon the body of the patient, wherein the first tracking device is disposed upon a first head-mounted device that is configured to be worn by a first person, the first head-mounted device including a first head-mounted display;
tracking, using a second tracking device from a second line of sight, at least the portion of the tool and the patient marker, wherein the second tracking device is disposed upon a second head-mounted device that is configured to be worn by a second person; and
using at least one computer processor, generating an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.
Copending Application No. 18978615 does not teach when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
However, within the same field of endeavor, Lang teaches devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display (see abstract), including, when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that the superimposition is maintained irrespective of the various individual views and perspectives).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure Copending Application No. 18978615 such that, when the at least the portion of the tool and the patient marker are not both within the first line of sight, the augmented reality image is generated upon the first display at least partially based upon data received from the second tracking device, as taught by Lang, in order to achieve accurate registration of data ([0228]) and improved tracking of changes in positions of objects within the surgical field ([0229]), with a reasonable expectation of success, as Copending Application No. 18978615 is also tasked with accurately representing positions of objects within the surgical field (page 1, lines 25-27 and page 2, lines 1-2 of the originally filed specification dated 12/12/2024).
23. (New) The method of Claim 21, wherein the first tracking device comprises a first camera, and wherein the second tracking device comprises a second camera.
45. (New) The method of Claim 41, wherein the first tracking device comprises a first camera configured to image at least the portion of the tool and the patient marker.
46. (New) The method of Claim 45, wherein the second tracking device comprises a second camera configured to image at least the portion of the tool and the patient marker.
24. (New) The method of Claim 21, further comprising, using the at least one computer processor, generating a further augmented reality image upon a second display.
44. (New) The method of Claim 41, wherein the second head-mounted device includes a second head-mounted display, and the method further comprises generating a further augmented reality image upon the second head-mounted display.
30. (New) The method of Claim 21, wherein the augmented reality image includes a virtual image of the tool and anatomy of the patient, overlaid upon the body of the patient.
41. … the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.
32. (New) The method of Claim 21, wherein the portion of the tool comprises a tool marker.
42. (New) The method of Claim 41, wherein the tracking, using the first tracking device from the first line of sight, at least the portion of the tool comprises tracking a tool marker.
40. (New) A system tracking a tool configured to be placed within a portion of a body of a patient, the system comprising:
a first tracking device associated with a first line of sight, the first tracking device configured to track at least a portion of the tool and a patient marker that is placed upon the body of the patient;
a second tracking device associated with a second line of sight, the second tracking device configured to track at least the portion of the tool and the patient marker; and
at least one computer processor configured to: when the at least the portion of the tool and the patient marker are both within the first line of sight, generate an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device;
51. (New) A system for use with a tool configured to be placed within a portion of a body of a patient, the system comprising:
a first head-mounted device configured to be worn by a first person, the first head- mounted device comprising a first head-mounted display, and a first tracking device that is configured to track at least a portion of the tool and the patient marker from a first line of sight;
a second head-mounted device configured to be worn by a second person, the second head-mounted device comprising a second tracking device that is configured to track at least the portion of the tool and the patient marker from a second line of sight; and
at least one computer processor configured to generate an augmented reality image upon the first head-mounted display, based upon data received from the first tracking device in combination with data received from the second tracking device, the augmented reality image including (a) a virtual image of the tool and anatomy of the patient, overlaid upon (b) the patient's body.
Copending Application No. 18978615 does not teach when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device.
However, within the same field of endeavor, Lang teaches devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display (see abstract), including, when the at least the portion of the tool and the patient marker are not both within the first line of sight, generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that the superimposition is maintained irrespective of the various individual views and perspectives).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure Copending Application No. 18978615 such that, when the at least the portion of the tool and the patient marker are not both within the first line of sight, the augmented reality image is generated upon the first display at least partially based upon data received from the second tracking device, as taught by Lang, in order to achieve accurate registration of data ([0228]) and improved tracking of changes in positions of objects within the surgical field ([0229]), with a reasonable expectation of success, as Copending Application No. 18978615 is also tasked with accurately representing positions of objects within the surgical field (page 1, lines 25-27 and page 2, lines 1-2 of the originally filed specification dated 12/12/2024).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-30, 32-35, and 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lang, P. K., US 20170258526 A1.
Regarding claim 21, Lang teaches a method for use with a tool configured to be placed within a portion of a body of a patient ([0004] states that “Aspects of the invention provides, among other things, for a simultaneous visualization of live data of the patient, e.g. a patient's spine or joint, and digital representations of virtual data such as virtual cuts and/or virtual surgical guides including cut blocks or drilling guides through an optical head mounted display (OHMD). In some embodiments, the surgical site including live data of the patient, the OHMD, and the virtual data are registered in a common coordinate system. In some embodiments, the virtual data are superimposed onto and aligned with the live data of the patient.”),
the method comprising:
tracking, with a first tracking device (optical head mounted display (OHMD) 1 of [0147]) associated with a first line of sight ([0119] describes generation of the projection information onto the OHMD 1 along a viewpoint and view direction), at least a portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”) and a patient marker that is placed upon the body of the patient ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”);
tracking, with a second tracking device (OHMD 2 of [0147]) associated with a second line of sight ([0119] describes generation of the projection information onto the OHMD 2 along a viewpoint and view direction), the at least the portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”) and the patient marker ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”); and
using at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]):
when the at least a portion of the tool and the patient marker are both within the first line of sight, generating an augmented reality image upon a first display based upon data received from the first tracking device and without using data from the second tracking device ([0146] discloses that “when multiple OHMD's are used, e.g. one for the primary surgeon and additional ones, e.g. two, three, four or more, for other surgeons, assistants, residents, fellows, nurses and/or visitors, the OHMD's worn by the other staff, not the primary surgeon, will also display the virtual representation(s) of the virtual data of the patient aligned with the corresponding live data of the patient seen through the OHMD, wherein the perspective of the virtual data that is with the patient and/or the surgical site for the location, position, and/or orientation of the viewer's eyes for each of the OHMD's used and each viewer”, meaning each individual viewer is provided their own augmented reality view and perspective of the surgical site without relying on the augmented reality views and perspectives of the other viewers); and
when the at least the portion of the tool and the patient marker are not both within the first line of sight ([0119] discloses “As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view”, where a viewpoint and view direction change is tantamount to the tool and patient marker not being within the line of sight), generating the augmented reality image upon the first display, at least partially based upon data received from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that for a viewer whose head or body moves, a view or perspective of the surgical plan, from the perspective of the other viewers (projection of digital holograms from each OHMD according to [0147]), is projected into the view of the viewer whose head or body has moved).
Regarding claim 22, Lang further teaches wherein tracking the at least the portion of the tool and the patient marker with the second tracking device comprises tracking the at least the portion of the tool and the patient marker with the second tracking device while the second tracking device is disposed in a stationary position ([0212] states that “For purposes of registration of virtual data and live data, the OHMD can be optionally placed in a fixed position, e.g. mounted on a stand or on a tripod. While the OHMD is placed in the fixed position, live data can be viewed by the surgeon and they can be, optionally recorded with a camera and/or displayed on a monitor. Virtual data can then be superimposed and the matching and registration of virtual data and live data can be performed. At this point, the surgeon or an operator can remove OHMD from the fixed position and the surgeon can wear the OHMD during the surgical procedure”).
Regarding claim 23, Lang further teaches wherein the first tracking device comprises a first camera, and wherein the second tracking device comprises a second camera ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)).
Regarding claim 24, Lang further teaches using the at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]), generating a further augmented reality image upon a second display ([0147] states that “The OHMD's 11, 12, 13, 14 can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30”).
Regarding claim 25, Lang further teaches wherein an anatomical portion of the patient is visible through a portion of the first display ([0004] discloses “Aspects of the invention provides, among other things, for a simultaneous visualization of live data of the patient, e.g. a patient's spine or joint, and digital representations of virtual data such as virtual cuts and/or virtual surgical guides including cut blocks or drilling guides through an optical head mounted display (OHMD)”).
Regarding claim 26, Lang further teaches wherein generating the augmented reality image upon the first display comprises: upon determining that the first line of sight between the first tracking device and the patient marker is at least partially blocked ([0119] discloses “As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view”, where a viewpoint and view direction change is tantamount to the tool and patient marker not being within the line of sight), filling substantially a whole of the first display with the augmented reality image based on data from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that for a viewer whose head or body moves, a view or perspective of the surgical plan, from the perspective of the other viewers (projection of digital holograms from each OHMD according to [0147]), is projected into the view of the viewer whose head or body has moved).
Regarding claim 27, Lang further teaches when the first tracking device has lost its first line of sight with a portion of the patient marker, using the at least one computer processor to continue to track a location of the first tracking device with respect to an anatomy of the patient ([0228] discloses tracking a patient marker using techniques known in the art) by, at least in part, tracking the patient marker using a tracking algorithm ([0110], [0206]-[0207], [0255] all describe software applications for tracking).
Regarding claim 28, Lang further teaches wherein the at least one computer processor is disposed externally to the first tracking device ([0080] discloses a computer or server or workstation that transmits data to the OHMD, at least suggesting that the computer or server or workstation is external to the OHMD).
Regarding claim 29, Lang further teaches wherein generating the augmented reality image comprises, in response to detecting that the first tracking device has lost its first line of sight with the patient marker, using the at least one computer processor, generating an image of a virtual tool within a virtual anatomy of the patient without regard to aligning the image with actual patient anatomy ([0223] describes “locking” the virtual data with respect to the surgeon’s head, where head movements do not move the virtual data, and [0224] discloses moving the virtual data with respect to the surgeon’s movements. In both scenarios, there is no alignment with the live data of the patient).
Regarding claim 30, Lang further teaches wherein the augmented reality image includes a virtual image of the tool and anatomy of the patient, overlaid upon the body of the patient ([0004] states that “the virtual data are superimposed onto and aligned with the live data of the patient. Unlike virtual reality head systems that blend out live data, the OHMD allows the surgeon to see the live data of the patient, e.g. the surgical field, while at the same time observing virtual data of the patient and/or virtual surgical instruments or implants with a predetermined position and/or orientation using the display of the OHMD unit”).
Regarding claim 32, Lang further teaches wherein the portion of the tool comprises a tool marker ([0071] states that “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”).
Regarding claim 33, Lang further teaches irradiating the patient marker or the tool marker with a light source ([0397] discloses that “The tracker or pointing device can also be a light source, which can, for example, create a red point or green point created by a laser on the patient's tissue highlighting the anatomic landmark intended to be used for registration. A light source can be chosen that has an intensity and/or a color that will readily distinguish it from the live tissue of the patient”).
Regarding claim 34, Lang further teaches in response to detecting that the first tracking device has lost its first line of sight of the tool ([0119] discloses “As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view”, where a viewpoint and view direction change is tantamount to detecting that a line of sight of the tool is lost), determining a location of the tool relative to the body of the patient ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”) using data received from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that for a viewer whose head or body moves, a view or perspective of the surgical plan, from the perspective of the other viewers (projection of digital holograms from each OHMD according to [0147]), is projected into the view of the viewer whose head or body has moved).
Regarding claim 35, Lang further teaches wherein generating the augmented reality image at least partially based upon data received from the second tracking device comprises, in response to the at least the portion of the tool and the patient marker both not being within the first line of sight ([0119] discloses “As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view”, where a viewpoint and view direction change is tantamount to the tool and patient marker not being within the line of sight), using the at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]):
determining a position of the tool with respect to an anatomy of the patient using data received from the second tracking device ([0147], [0253] disclose a spatial mapping registration process, stating that “Live data, e.g. live data of the patient, the position and/or orientation of a physical instrument, the position and/or orientation of an implant component, the position and/or orientation of one or more OHMD's, can be acquired or registered”); and
generating a virtual image of the tool and the anatomy of the patient upon the first display, based upon the position of the tool with respect to the anatomy of the patient ([0253] then states “This process creates a three-dimensional mesh describing the surfaces of one or more objects or environmental structures using, for example and without limitation, a depth sensor, laser scanner, structured light sensor, time of flight sensor, infrared sensor, or tracked probe. These devices can generate 3D surface data by collecting, for example, 3D coordinate information or information on the distance from the sensor of one or more surface points on the one or more objects or environmental structures. The 3D surface points can then be connected to 3D surface meshes, resulting in a three-dimensional surface representation of the live data. The surface mesh can then be merged with the virtual data using any of the registration techniques described in the specification”).
Regarding claim 40, Lang teaches a system tracking a tool configured to be placed within a portion of a body of a patient ([0147] states that “Referring to FIG. 1, a system 10 for using multiple OHMD's 11, 12, 13, 14 for multiple viewer's, e.g. a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) is shown”), the system comprising:
a first tracking device ([0103], [0104] disclose that the exemplary OHMD (Microsoft HoloLens) includes several cameras (two on each side)) associated with a first line of sight ([0119] describes generation of the projection information onto the OHMD 1 along a viewpoint and view direction), the first tracking device configured to track at least a portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”) and a patient marker that is placed upon the body of the patient ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”);
a second tracking device (OHMD 2 of [0147]) associated with a second line of sight ([0119] describes generation of the projection information onto the OHMD 2 along a viewpoint and view direction), the second tracking device configured to track at least the portion of the tool ([0071] states “With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers”) and the patient marker ([0032] states that “the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof”); and
at least one computer processor (processors of [0103] and computer graphics system of [0116],[0117]) configured to:
when the at least the portion of the tool and the patient marker are both within the first line of sight, generate an augmented reality image upon a first display ([0147] discloses generating shared digital holographic experience of the surgical scene on the OHMDs) based upon data received from the first tracking device and without using data from the second tracking device ([0146] discloses that “when multiple OHMD's are used, e.g. one for the primary surgeon and additional ones, e.g. two, three, four or more, for other surgeons, assistants, residents, fellows, nurses and/or visitors, the OHMD's worn by the other staff, not the primary surgeon, will also display the virtual representation(s) of the virtual data of the patient aligned with the corresponding live data of the patient seen through the OHMD, wherein the perspective of the virtual data that is with the patient and/or the surgical site for the location, position, and/or orientation of the viewer's eyes for each of the OHMD's used and each viewer”, meaning each individual viewer is provided their own augmented reality view and perspective of the surgical site, without the augmented reality view and perspective of the surgical site from other viewers); and
when the at least the portion of the tool and the patient marker are not both within the first line of sight ([0119] discloses “As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view”, where a change in viewpoint and view direction is tantamount to the tool and patient marker not being within the line of sight), generate the augmented reality image upon the first display, at least partially based upon data received from the second tracking device ([0148] states that “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual OHMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each OHMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”, meaning that for a viewer whose head or body moves, a view or perspective of the surgical plan, from the perspective of the other viewers (projection of digital holograms from each OHMD according to [0147]), is projected onto the view of the viewer whose head or body has moved).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Lang, P. K., US 20170258526 A1, in view of Crawford et al., US 20130345718 A1.
Regarding claim 31, Lang teaches all the limitations of claim 30 above.
Lang does not explicitly teach wherein generating the augmented reality image comprises, in the augmented reality image, positioning the virtual image of the anatomy of the patient based on data from tracking of the patient marker by the first tracking device at a time when the first line of sight between the first tracking device and the patient marker was not blocked.
However, within the same field of endeavor, Crawford teaches methods for reconstructing markers in a surgical scene using multiple synchronized lines of sight via multiple cameras 8200 tracking the same markers 720 ([0386]), with [0386] further stating that “when one line of sight is obscured, the lines of sight from other cameras 8200 (where markers 720 can still be viewed) could be used to track the robot 15 and targeting fixture 690. In some embodiments, to mitigate twitching movements of the robot 15 when one line of sight is lost, it is possible that the marker 720 positions from the obscured line of sight could be reconstructed using methods as previously described based on an assumed fixed relationship between the last stored positions of the markers 720 relative to the unobstructed lines of sight. Further, in some embodiments, at every frame, the position of a marker 720 from camera 1 relative to its position from camera 2 would be stored; then if camera 1 is obstructed, and until the line of sight is restored, this relative position is recalled from computer memory (for example in memory of a computer platform 3400) and a reconstruction of the marker 720 from camera 1 would be inserted based on the recorded position of the marker from camera 2. In some embodiments, the method could compensate for temporary obstructions of line of sight such as a person standing or walking in front of one camera unit”, hence teaching the limitation wherein generating the augmented reality image comprises, in the augmented reality image, positioning the virtual image of the anatomy of the patient based on data from tracking of the patient marker by the first tracking device at a time when the first line of sight between the first tracking device and the patient marker was not blocked.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Lang such that generating the augmented reality image comprises, in the augmented reality image, positioning the virtual image of the anatomy of the patient based on data from tracking of the patient marker by the first tracking device at a time when the first line of sight between the first tracking device and the patient marker was not blocked, as taught by Crawford, in order to compensate for temporary obstructions of line of sight, such as a person standing or walking in front of one camera unit ([0386]), and hence improve the accuracy of tracking objects within the surgical scene ([0012]), with a reasonable expectation of success, as Lang is also concerned with accurate registration of data ([0228]) and improved tracking of changes in positions of objects within the surgical field ([0229]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Farouk A Bruce whose telephone number is (408) 918-7603. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FAROUK A BRUCE/ Examiner, Art Unit 3797