DETAILED ACTION
This Office action for U.S. Patent Application No. 18/616,706 is responsive to communications filed 9 October 2025, in reply to the Non-Final Rejection of 16 June 2025.
Claims 1–21 are pending, of which claims 17–21 are new.
In the previous Office action, claims 1–3, 6–11, and 14–16 were objected to under 37 C.F.R. § 1.75(i) for formatting errors. Claims 8, 10, and 11 were objected to for various errors. Claims 1 and 3 were rejected under 35 U.S.C. § 112(b) for antecedent errors. Claims 1–16 were rejected under 35 U.S.C. § 103 as obvious over U.S. Patent Application Publication No. 2018/0296853 A1 (“Zeitouny”).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments to the claims have been considered. All claim objections are withdrawn. The rejections of claims 1 and 3 under 35 U.S.C. § 112(b) are withdrawn. However, a new rejection of claims 1, 6, 9, and 14 is made under 35 U.S.C. § 112(b).
Response to Arguments
Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. As set forth below, US 6,006,126 A (“Cosman”) is relied upon to cure the alleged deficiencies of Zeitouny.
Claim Rejections - 35 U.S.C. § 112
The following is a quotation of 35 U.S.C. § 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1, 6, 9, and 14 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claim 1 recites distinct first and second trackers, which would lead one of ordinary skill in the art to understand them as two different trackers. M.P.E.P. § 2111.01(I) (“plain meaning” canon of claim interpretation). Claim 6, however, recites that the first and second trackers are the same tracker, rendering the scope of the claims unclear. Applicant is requested to amend at least one of claims 1 and 6 so that claim 1 unambiguously claims one or two trackers and claim 6 unambiguously limits the claim 1 tracker set to the one tracker shared by both cameras, thereby preserving the apparent full intent of the claims. Claims 9 and 14 repeat this ambiguous phrasing.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–21 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2018/0296853 A1 (“Zeitouny”) in view of U.S. Patent No. 6,006,126 A (“Cosman”).
Zeitouny, directed to dermatology, teaches with respect to claim 1 a system for performing a navigated surgical procedure on a patient comprising:
a first camera with a first field of view for aligning (¶ 0105, further camera 160 that produces image data Id2 to determine orientation of skin imaging camera 110) relative to a first surgical location (¶ 0074, treatment location), and
configured to generate image data of at least one first tracker (id., tracking indications), and
configured to be positioned in a first positional relationship with the patient (¶ 0105, orientation with respect to skin S);
a second camera with a second field of view (id., skin imaging camera 110) for aligning relative (id., orientation of skin imaging camera 110 with respect to skin S) to a second surgical location (¶ 0074, next treatment location) and
configured to generate image data of at least one second tracker (id., treatment locations); . . .
a computer storage device storing instructions (¶ 0022, storage medium), which when executed by a processor of a computing device (id., general purpose device), configure the computing device to:
provide a user interface to guide the user to position the second camera in a target second positional relationship (¶ 0080, guiding user over skin surface during treatment); . . . and
perform the navigated surgical procedure using the real-time tracking information (¶¶ 0074–75, path for treatment areas).
The claimed invention differs from Zeitouny first in the nature of the trackers and second in the relationship between the two cameras. However, Cosman, directed to medical imaging, teaches with respect to claim 1:
wherein the at least one first tracker and the at least one second tracker each comprise a plurality of optically detectable targets comprising reflective markers or spheres (15:49–52, Figs. 8–9, use of “actual physical index markers”) . . .
an overall field of view comprising the first field of view and the second field of view (7:45–59, 14:41–15:15, space coordinates relative to the fields of both cameras; 9:7–31, computer-rendered visualization of the field produced from both cameras)
to generate real-time tracking information in the overall field of view (7:45–59, using index points to relate movement of the patient relative to the cameras in real time; 9:26–45, computer graphics assisted tracking of entire field in real time),
the real-time tracking information comprising tracker pose data in up to 6 degrees of freedom (14:64–15:6, X, Y, and Z coordinates; 10:13–15, rotation of image data for registering the cameras’ field of view relative to the anatomy; 11:40–46, 15:16–20, 16:57–17:5, rotating and translating computer graphics for registration with field).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to register the Zeitouny cameras, as taught by Cosman, in order to provide a free viewpoint image of the patient with overlaid graphics as needed. Cosman 1:63–2:3.
Regarding claim 2, Zeitouny in view of Cosman teaches the system of claim 1, wherein the computer is further configured to perform an image registration (Zeitouny ¶¶ 0066–67, matching an image of the patient’s body with a reference image, a process Zeitouny calls “registration” and characterizes as prior art)
to map coordinates of a patient medical image to an image coordinate system of the first camera (Cosman 13:60–15:15; mapping scanned images to camera coordinate system Xc, Yc, Zc)
the patient medical image is obtained from a medical imaging device that is distinct from the first camera and the second camera (id., scanned image) and
the guiding is based on the image registration (Zeitouny ¶ 0066, determine treatment region).
Regarding claim 3, Zeitouny in view of Cosman teaches the system of claim 2, wherein the patient medical image comprises an X-ray image, a CT scan, and/or a segmented image (Cosman 13:60–14:18, sliced CT, X-ray tomography, PET scan, or the like).
Regarding claim 4, Zeitouny teaches the system of claim 1, wherein the target second positional relationship is a range of acceptable positional relationships (¶ 0081, determine if user is straying from treatment area; ¶¶ 0086–87, automatically switch off treatment).
Regarding claim 5, Zeitouny teaches the system of claim 1, wherein the target second positional relationship is based on a surgical plan (¶¶ 0083–84, treatment indication).
Regarding claim 6, Zeitouny in view of Cosman teaches the system of claim 1, wherein:
the at least one first tracker and the at least one second tracker comprise a same tracker (Cosman at, e.g., Fig. 9B, any spot visible to both cameras);
the first camera and second camera generate first images of the same tracker (Zeitouny ¶¶ 0074–77, tracking indication), and
the guiding is based on the pose of the tracker as determined by the first images (Fig. 4B, guiding path for treatment device following tracking indications).
Regarding claim 7, Zeitouny teaches the system of claim 1, wherein the first and second cameras comprise respective first and second inertial sensors (¶ 0106, gyroscopes); and
the guiding is based on measurements from the respective inertial sensors (id., determine orientation of camera from gyroscopes; ¶ 0074, determine treatment indication from orientation).
Regarding claim 8: Zeitouny teaches the system of claim 1, wherein the first positional relationship is . . . a fixed relationship in which the respective camera is rigidly fixed to the patient (¶ 0062, mutually fixed position of treatment device and camera 20; ¶ 0066, treatment region has fixed spatial relationship with imaging region).
Regarding claim 9, Zeitouny in view of Cosman teaches a computer implemented method for performing a navigated surgical procedure on a patient,
wherein a first camera is in a first positional relationship with the patient (¶ 0105, camera 160 having an orientation to skin S),
the first camera having a first field of view aligned relative to a first surgical location (¶ 0074, treatment location), and
the first camera configured to generate image data of at least one first tracker, (id., tracking indications), the method comprising:
defining and presenting a user interface to guide a user (¶ 0080, guiding user over skin surface during treatment) to position a second camera in a target second positional relationship with the patient (¶ 0105, skin imaging camera 110),
the second camera having a second field of view for aligning relative to a second surgical location (id., orientation of camera 110 with respect to skin S), and
the second camera configured to generate image data of at least one second tracker (¶ 0074, treatment locations; Cosman 15:49–52, Figs. 8–9, use of “actual physical index markers”);
the first field of view and the second field of view providing an overall field of view (7:45–59, 14:41–15:15, space coordinates relative to the fields of both cameras; 9:7–31, computer-rendered visualization of the field produced from both cameras);
receiving respective image data from each of the first camera and the second camera to obtain real-time tracking information in the overall field of view (7:45–59, using index points to relate movement of the patient relative to the cameras in real time; 9:26–45, computer graphics assisted tracking of entire field in real time),
the real-time tracking information comprising tracker pose data in up to 6 degrees of freedom (14:64–15:6, X, Y, and Z coordinates; 10:13–15, rotation of image data for registering the cameras’ field of view relative to the anatomy; 11:40–46, 15:16–20, 16:57–17:5, rotating and translating computer graphics for registration with field); and
performing the navigated surgical procedure using the real-time tracking information from the overall field of view (Zeitouny ¶ 0105, use both cameras to orient the skin imaging camera 110; ¶ 0074, determine treatment indication from orientation);
wherein the at least one first tracker and the at least one second tracker each comprise a plurality of optically detectable targets comprising reflective markers or spheres (Cosman 15:49–52, Figs. 8–9, use of “actual physical index markers”).
Regarding claim 10, Zeitouny in view of Cosman teaches the method of claim 9, comprising performing an image registration (Zeitouny ¶¶ 0066–67, matching an image of the patient’s body with a reference image, a process Zeitouny calls “registration” and characterizes as prior art) to map coordinates of a patient medical image to an image coordinate system of the first camera (Cosman 13:60–15:15; mapping scanned images to camera coordinate system Xc, Yc, Zc); and
wherein:
the patient medical image comprises an X-ray image, a CT scan, and/or a segmented image (Cosman 13:60–14:18, sliced CT, X-ray tomography, PET scan, or the like).
Regarding claim 11, Zeitouny in view of Cosman teaches the method of claim 10, wherein:
the image registration comprises receiving a patient image (¶ 0066, image of a region of skin on the body) . . . , and
the method comprises determining the target second positional relationship partially based on a spatial profile of the anatomy of the patient (id., “From this location, the location of the treatment region TR can be determined, as it has a predefined, for example fixed spatial relationship with the imaging region IR”).
Regarding claim 12, Zeitouny teaches the method of claim 9, wherein the target second positional relationship is a range of acceptable positional relationships (¶ 0081, determine if user is straying from treatment area; ¶¶ 0086–87, automatically switch off treatment).
Regarding claim 13, Zeitouny teaches the method of claim 9, wherein the target second positional relationship is based on a surgical plan (¶¶ 0083–84, treatment indication).
Regarding claim 14, Zeitouny in view of Cosman teaches the method of claim 9, wherein:
the at least one first tracker and the at least one second tracker comprise a same tracker (Cosman at, e.g., Fig. 9B, any spot visible to both cameras);
the first camera and second camera generate first images of the same tracker (Zeitouny ¶¶ 0074–77, tracking indication), and
the defining of the user interface to guide is based on the pose of the same tracker as determined by the first images (Fig. 4B, guiding path for treatment device following tracking indications).
Regarding claim 15, Zeitouny teaches the method of claim 9, wherein the first and second cameras comprise respective first and second inertial sensors (¶ 0106, gyroscopes on camera);
the method comprises receiving from each of the first and second cameras measurements from the respective inertial sensors (id., determine orientation of camera from gyroscopes); and
the defining of the user interface to guide is based on the measurements (¶ 0074, determine treatment indication from orientation).
Regarding claim 16, Zeitouny teaches the method of claim 9, wherein the first positional relationship is . . . a fixed relationship in which the respective camera is rigidly fixed to the patient (¶ 0062, mutually fixed position of treatment device and camera 20; ¶ 0066, treatment region has fixed spatial relationship with imaging region).
Regarding claim 17, Zeitouny in view of Cosman teaches the system of claim 3, wherein the target second positional relationship is determined based partially on a spatial profile of the patient’s anatomy (Cosman Abstract, “The two camera positions are defined with respect to the patient’s anatomy”).
Regarding claim 18, Zeitouny in view of Cosman teaches the system of claim 1, wherein the instructions, when executed by the processor, configure the computing device to guide the user to position the second camera in the target second positional relationship to achieve a desired amount of an overlapping field of view in the first camera and the second camera (Cosman Abstract, “The two camera positions are defined . . . so that the fields-of-view of the cameras include both the patient’s anatomy and the equipment”; 8:1–19, cameras fixed at relative positions).
Regarding claim 19, Zeitouny in view of Cosman teaches the system of claim 18, wherein the desired amount of the overlapping field of view facilitates tracking of a reference tracker in both of the first field of view and the second field of view (Cosman at, e.g., Fig. 8C, spot 822 within field of view of cameras 798 and 800).
Regarding claim 20, Zeitouny in view of Cosman teaches the method of claim 11, wherein the target second positional relationship is determined based partially on a spatial profile of the patient’s anatomy (Cosman Abstract, “The two camera positions are defined with respect to the patient’s anatomy”).
Regarding claim 21, Zeitouny in view of Cosman teaches the method of claim 11, wherein the user interface is defined to guide the user to position the second camera in the target second positional relationship to optimize the overall field of view by minimizing an amount of an overlapping field of view of the first camera and the second camera (Cosman Fig. 2, cameras 204 and 205 placed to have small overlap compared with overlap between cameras 205 and 210, or cameras 204 and 210).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2017/0273595 A1
US 2018/0296283 A1
US 2020/0237445 A1
EP 4079247 A1
The following prior art was found using an Artificial Intelligence assisted search using an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search”, 1504 OG 359 (15 November 2022); “Automated Search Pilot Program”, 90 Fed. Reg. 48,161 (8 October 2025).
US 2006/0036148 A1
CA 2797116 A1
CA 3063693 C
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See M.P.E.P. § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 C.F.R. § 1.17(a)) pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner whose telephone number is (571) 272-9662. The examiner can normally be reached M-F 7:30-4:00 Central.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dave Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David N Werner/Primary Examiner, Art Unit 2487