Prosecution Insights
Last updated: April 19, 2026
Application No. 18/960,123

OPTICAL IMAGING SYSTEM AND METHODS THEREOF

Non-Final Office Action (§102, §103, Double Patenting)

Filed: Nov 26, 2024
Examiner: SCHNURR, JOHN R
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: The University of Akron
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 6m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 72% (678 granted / 943 resolved; +13.9% vs TC avg; above average)
Interview Lift: +10.8% (moderate; allowance rate of resolved cases with an interview vs. without)
Avg Prosecution (typical timeline): 2y 6m
Currently Pending: 27
Total Applications (career, across all art units): 970
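The headline figures above follow from simple arithmetic on the career counts. The sketch below (Python) reproduces them, under the assumption (not stated in the report) that the "vs TC avg" and interview-lift figures are plain percentage-point differences:

```python
# Reproduce the examiner dashboard arithmetic from the raw counts above.
granted, resolved = 678, 943

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")        # 71.9%, shown rounded as 72%

# Assumption: "+13.9% vs TC avg" is a percentage-point difference.
implied_tc_avg = allow_rate - 13.9
print(f"Implied Tech Center average: {implied_tc_avg:.1f}%")  # 58.0%

# An interview lift of +10.8 points on the ~72% baseline is consistent
# with the 83% "with interview" figure (71.9 + 10.8 = 82.7, rounds to 83).
print(f"With-interview estimate: {allow_rate + 10.8:.1f}%")   # 82.7%
```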

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 943 resolved cases.
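The per-statute deltas can be sanity-checked the same way. Backing out the implied Tech Center average from each row (again assuming percentage-point deltas, an assumption of this sketch rather than the report) gives the same baseline for every statute:

```python
# Back out the implied TC average from each statute-specific row above.
rows = {
    "§101": (4.7, -35.3),
    "§103": (51.9, +11.9),
    "§102": (19.0, -21.0),
    "§112": (10.5, -29.5),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: examiner {rate}% -> implied TC avg {rate - delta:.1f}%")
# Every row implies a TC average of 40.0%, i.e. the deltas appear to be
# measured against a single ~40% baseline estimate.
```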

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to Application No. 18/960,123 filed 11/26/2024. Claims 1-20 are pending and have been examined.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. § 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that § 112(f) (pre-AIA § 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph).
The presumption that § 112(f) (pre-AIA § 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material, or acts to perform that function. Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke § 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke § 112(f) except as otherwise indicated in an Office action.

The claim limitations “an image detection module configured to capture” and “a tracker configured to track” have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use the generic placeholder “configured to” coupled with the functional language “capture” and “track” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Since these claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claims 1-10 have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the § 112(f) (pre-AIA § 112, sixth paragraph) limitations: Fig. 4A, image detection module 320, paragraphs [0098], [0165], [0214]-[0217]. If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters, in response to this Office action.

If applicant does not intend to have the claim limitations treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claims so that they clearly do not invoke § 112(f) (pre-AIA § 112, sixth paragraph), or present a sufficient showing that the claims recite sufficient structure, material, or acts for performing the claimed function to preclude application of § 112(f). For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 4-11 and 14-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-10 and 13-19 of U.S. Patent No. 12,166,953. Although the claims at issue are not identical, they are not patentably distinct from each other because they are different definitions or descriptions of the same subject matter varying in breadth. For example, note the following relationship between claim 1 of the instant application and the patented claims, presented limitation by limitation:

Application No. 18/960,123: 1. An optical imaging system to image a target object, comprising:
U.S. Patent No. 12,166,953: 1. An optical imaging system to image a target object, comprising:

Application: an image detection module configured to: capture a preoperative image of the target object before the target object is positioned to be illuminated by at least one or more light rays projected onto the target object, wherein the preoperative image includes a three-dimensional (3D) topography of the target object;
Patent: an image detection module configured to: capture a preoperative image of the target object before the target object is positioned to be illuminated by at least one or more light rays projected onto the target object, wherein the preoperative image includes a preoperative three-dimensional (3D) topography of the target object,

Application: an image projector configured to: emit the one or more light rays to project a corrected projection image of the preoperative image of the target object to be visualized by a user;
Patent: a projector configured to: emit a first set of the one or more light rays to project a corrected projection image of the preoperative image of the target object to be visualized by a user; and

Application: a display configured to: display the corrected projection image to be visualized by the user that is projected onto the display and enable the user to view via a surrounding environment of the user via the display simultaneously with the corrected projection image, wherein the display is partially transparent to light to enable the user to view the surrounding environment with natural vision simultaneously with the corrected projection image via the display;
Patent: a display that includes a first display that encompasses a totality field of view associated with a first eye of the user and a second display that encompasses a totality field of view associated with a second eye of the user with at least one display that is head-mounted so that the first display is associated with the first eye of the user and the second display is associated with the second eye of the user and is configured to: display via the first display the corrected projection image to be visualized by the user that is projected onto the first display, and display via the second display that is partially transparent to light to enable the user to view via natural vision of the second eye of the user a surrounding environment of the user via the second display simultaneously with the corrected projection image projected onto the first display thereby enabling the user to view the surrounding environment with the natural vision simultaneously with the corrected projection image via the display; and

Application: a tracker configured to track a position of the target object; and
Patent: track a position of the target object, and

Application: a controller that includes at least one graphics processing unit and is configured to: map each relative distance determined from image information captured from the 3D topography of the target object included in the preoperative image to the corrected projection image based on a position of the image projector, wherein the corrected projection image incorporates each relative distance determined from the image information captured from the 3D topography of the target object included in the preoperative image to thereby display at least a portion of the 3D topography of the target object included in the preoperative image in the corrected projection image; and
Patent: a controller that includes at least one graphics processing unit and is configured to: map each relative distance determined from preoperative image data included in the preoperative 3D topography of the target object to the corrected projection image based on a position of the projector, wherein the corrected projection image incorporates each relative distance determined from the preoperative image data included in the preoperative 3D topography of the target object based on the intraoperative topography information of the target object detected from the projection of the one or more polarized light rays onto the target object to thereby display at least a portion of the preoperative 3D topography of the target object included in the preoperative image in the corrected projection image,

Application: instruct the image projector to project the corrected projection image to be visualized by the user via the display.
Patent: instruct the projector to project the preoperative image data included in the corrected projection image to be visualized by the user via the display for visualization based on the co-registered topography information and preoperative information.

It would have been obvious to one of ordinary skill in the art to readily recognize that the conflicting claims are different definitions or descriptions of the same subject matter varying in breadth. In this case, the application claims are broader than and inclusive of the patented claims.

Claim 4 of the application corresponds to claim 1 of the patent.
Claim 5 of the application corresponds to claim 4 of the patent.
Claim 6 of the application corresponds to claim 5 of the patent.
Claim 7 of the application corresponds to claim 6 of the patent.
Claim 8 of the application corresponds to claim 7 of the patent.
Claim 9 of the application corresponds to claim 8 of the patent.
Claim 10 of the application corresponds to claim 9 of the patent.
Claim 11 of the application corresponds to claim 10 of the patent.
Claim 14 of the application corresponds to claim 13 of the patent.
Claim 15 of the application corresponds to claim 14 of the patent.
Claim 16 of the application corresponds to claim 15 of the patent.
Claim 17 of the application corresponds to claim 16 of the patent.
Claim 18 of the application corresponds to claim 17 of the patent.
Claim 19 of the application corresponds to claim 18 of the patent.
Claim 20 of the application corresponds to claim 19 of the patent.

Claims 2, 3, 12 and 13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 10 of U.S. Patent No. 12,166,953 in view of Casas (US 2016/0191887).

Consider claim 2, the patented claims clearly teach the tracker. (Claim 1) However, the patented claims do not explicitly teach that the tracker is further configured to track the position of the target object based on optical tracking. In an analogous art, Casas, which discloses a system for surgical imaging, clearly teaches a tracker further configured to track the position of the target object based on optical tracking. (Optical tracking means 136 track objects using optical tracking, [0104], [0117].) Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the patented claims so that the tracker is further configured to track the position of the target object based on optical tracking, as taught by Casas, to achieve the predictable result of tracking the object.

Consider claim 3, the patented claims clearly teach the tracker. (Claim 1) However, the patented claims do not explicitly teach that the tracker is further configured to track the position of the target object based on electromagnetic tracking. In an analogous art, Casas, which discloses a system for surgical imaging, clearly teaches a tracker further configured to track the position of the target object based on electromagnetic tracking. (Tracking means 136 include electromagnetic tracking, [0109].) Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the patented claims so that the tracker is further configured to track the position of the target object based on electromagnetic tracking, as taught by Casas, to achieve the predictable result of tracking the object.

Claim 12 of the application corresponds to claim 10 of the patent in view of Casas. Claim 13 of the application corresponds to claim 10 of the patent in view of Casas.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4 and 11-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Casas (US 2016/0191887).

Consider claim 1, Casas clearly teaches an optical imaging system to image a target object, (Fig.
1) comprising: an image detection module (Fig. 1: Preoperative imaging device 102, [0030]) configured to: capture a preoperative image of the target object before the target object is positioned to be illuminated by at least one or more light rays projected onto the target object, wherein the preoperative image includes a three-dimensional (3D) topography of the target object; (Preoperative imaging device 102 captures 3D data of the patient, [0030], [0041], [0042], [0073].) an image projector (Fig. 1: Display system 126, [0118]) configured to: emit the one or more light rays to project a corrected projection image of the preoperative image of the target object to be visualized by a user; (Display system 126 can project the processed images 124 onto the patient, [0118].) a display (Fig. 1: Display system 126, [0033]) configured to: display the corrected projection image to be visualized by the user that is projected onto the display and enable the user to view via a surrounding environment of the user via the display simultaneously with the corrected projection image, wherein the display is partially transparent to light to enable the user to view the surrounding environment with natural vision simultaneously with the corrected projection image via the display; (Display system 126 can be a head-mounted 213 stereoscopic optical see-through display 214 to display the processed images 124 to the surgeon simultaneously with the surrounding environment, [0037], [0117], [0125], [0143], [0197].) a tracker configured to track a position of the target object; (Fig. 1: Tracking means 136 tracks the positioning of the patient, [0035], [0093]-[0095].) and a controller that includes at least one graphics processing unit (Fig. 6, [0180]-[0191]) and is configured to: map each relative distance determined from image information captured from the 3D topography of the target object included in the preoperative image to the corrected projection image based on a position of the image projector, wherein the corrected projection image incorporates each relative distance determined from the image information captured from the 3D topography of the target object included in the preoperative image to thereby display at least a portion of the 3D topography of the target object included in the preoperative image in the corrected projection image; (The preoperative 3D data is registered with the stereoscopic video and corrected for movement of the patient, [0042], [0059]-[0061], [0088], [0093]-[0099], [0109], [0125].) and instruct the image projector to project the corrected projection image to be visualized by the user via the display. (Processed images 124 are displayed to the surgeon, [0117], [0118].)

Consider claim 2, Casas clearly teaches the tracker is further configured to track the position of the target object based on optical tracking. (Optical tracking means 136 track objects using optical tracking, [0104], [0117].)

Consider claim 3, Casas clearly teaches the tracker is further configured to track the position of the target object based on electromagnetic tracking. (Tracking means 136 include electromagnetic tracking, [0109].)

Consider claim 4, Casas clearly teaches the display is further configured to display the corrected projection image and enable the user to view the surrounding environment of the user simultaneously via at least one wearable display. ([0117])

Consider claim 11, Casas clearly teaches a method for imaging a target object, (Fig.
3) comprising: capturing by an image detection module a preoperative image of the target object before the target object is positioned to be illuminated by at least one or more light rays projected onto the target object, wherein the preoperative image includes a three-dimensional (3D) topography of the target object; (Preoperative imaging device 102 captures 3D data of the patient, [0030], [0041], [0042], [0073].) emitting by an image projector the one or more light rays to project a corrected projection image of the preoperative image of the target object to be visualized by a user; (Display system 126 can project the processed images 124 onto the patient, [0118].) displaying by a display the corrected projection image to be visualized by the user that is projected onto the display and enable the user to view via a surrounding environment of the user via the display simultaneously with the corrected projection image, wherein the display is partially transparent to light to enable the user to view the surrounding environment with natural vision simultaneously with the corrected projection image via the display; (Display system 126 can be a head-mounted 213 stereoscopic optical see-through display 214 to display the processed images 124 to the surgeon simultaneously with the surrounding environment, [0037], [0117], [0125], [0143], [0197].) tracking by a tracker a position of the target object; (Fig. 1: Tracking means 136 tracks the positioning of the patient, [0035], [0093]-[0095].) and mapping by a controller each relative distance determined from image information captured from the 3D topography of the target object included in the preoperative image to the corrected projection image based on a position of the image projector, wherein the corrected projection image incorporates each relative distance determined from the image information captured by the 3D topography of the target object included in the preoperative image to thereby display at least a portion of the 3D topography of the target object included in the preoperative image in the corrected projection image; (The preoperative 3D data is registered with the stereoscopic video and corrected for movement of the patient, [0042], [0059]-[0061], [0088], [0093]-[0099], [0109], [0125].) and instructing the image projector to project the corrected projection image to be visualized by the user via the display. (Processed images 124 are displayed to the surgeon, [0117], [0118].)

Consider claim 12, Casas clearly teaches the tracking comprises tracking the position of the target object based on optical tracking. (Optical tracking means 136 track objects using optical tracking, [0104], [0117].)

Consider claim 13, Casas clearly teaches the tracking further comprises tracking the position of the target object based on electromagnetic tracking. (Tracking means 136 include electromagnetic tracking, [0109].)

Consider claim 14, Casas clearly teaches the displaying comprises displaying the corrected projection image and enabling the user to view the surrounding environment of the user simultaneously via at least one wearable display. ([0117])

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-10 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 2016/0191887) in view of West et al. (US 2014/0276002), herein West.

Consider claim 5, Casas clearly teaches the controller is further configured to: register preoperative image data included in the corrected projection image to the intraoperative image data included in the intraoperative image space that is captured by the imaging system as the position of the target object is tracked by the tracker; and instruct the display to display the preoperative image data included in the corrected projection image that is co-registered to the intraoperative image data that is captured by the imaging system as the position of the target object is tracked by the tracker. (Registration of the 3D volume and stereoscopic video is performed to provide the augmented reality display, [0059]-[0063], [0078], [0080], [0087]-[0090].)

However, Casas does not explicitly teach the controller is further configured to: calculate a transformation matrix between a preoperative image space that the corrected projection image is projected via the display and an intraoperative image space that an imaging system captures as the tracker tracks the position of the target object. In an analogous art, West, which discloses a system for surgical imaging, clearly teaches a controller configured to calculate a transformation matrix between a preoperative image space and an intraoperative image space. (Fig. 2: The image space transformation calculator 124 generates a corresponding image space transformation matrix 126 that provides a transformation of the intraoperative image data 122 into the preoperative image data 56, [0051].) Therefore, before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify the system of Casas such that the controller is further configured to calculate a transformation matrix between a preoperative image space that the corrected projection image is projected via the display and an intraoperative image space that an imaging system captures as the tracker tracks the position of the target object, as taught by West, to achieve the predictable result of registering the preoperative and intraoperative data.
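For readers unfamiliar with the registration language in these rejections, the kind of "transformation matrix" between an intraoperative and a preoperative image space that West's paragraph [0051] describes is conventionally a rigid (rotation plus translation) transform. As an illustrative sketch only (nothing below is taken from Casas or West; the fiducial points and function name are hypothetical), such a transform can be estimated from corresponding 3D points with the Kabsch/Procrustes method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Return a 4x4 homogeneous matrix mapping src points onto dst
    (least-squares rigid fit via the Kabsch algorithm)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical fiducial points tracked in the intraoperative space...
intra = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
# ...and the same landmarks in the preoperative (e.g., CT) space:
# here a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
pre = intra @ R_true.T + np.array([5., -2., 1.])

T = rigid_transform(intra, pre)
mapped = (T[:3, :3] @ intra.T).T + T[:3, 3]
print(np.allclose(mapped, pre))   # True: intraoperative points land on preoperative points
```

In a real system the "transformation of the intraoperative image data into the preoperative image data" would be driven by tracked fiducials or surface matching rather than hand-picked points, but the matrix produced plays the role the claim language describes.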
Consider claim 6, Casas combined with West clearly teaches the controller is further configured to instruct the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of the preoperative image data included in the corrected projection image that is co-registered to the intraoperative image data that is captured by the imaging system as the position of the target object is tracked by the tracker. ([0117] Casas)

Consider claim 7, Casas combined with West clearly teaches the controller is further configured to: calculate a transformation matrix between the intraoperative image space and an intraoperative object space, wherein the intraoperative object space is a space occupied by the target object on which a surgical procedure is being performed; ([0051] West) register the intraoperative image data as captured by the imaging system as the position of the target of interest is tracked by the tracker to the intraoperative object space that is occupied by the target object during the surgical procedure; and instruct the display to display the intraoperative image data in the corrected projection image that is co-registered to the intraoperative object space that is occupied by the target object. (Registration of the 3D volume and stereoscopic video is performed to provide the augmented reality display, [0059]-[0063], [0078], [0080], [0087]-[0090] Casas.)

Consider claim 8, Casas combined with West clearly teaches the controller is further configured to instruct the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of the intraoperative image data included in the corrected projection image that is co-registered with the intraoperative object space as the position of the target object is tracked by the tracker. ([0117] Casas)

Consider claim 9, Casas combined with West clearly teaches the controller is further configured to instruct the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of fluorescence image data included in the corrected projection image that is co-registered with the intraoperative object space. ([0073], [0117] Casas)

Consider claim 10, Casas combined with West clearly teaches the controller is further configured to instruct the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of color image data included in the corrected projection image that is co-registered with the intraoperative object space. ([0078], [0130], [0117] Casas)

Consider claim 15, Casas combined with West clearly teaches calculating a transformation matrix between a preoperative image space that the corrected projection image is projected via the display and an intraoperative image space that an imaging system captures as the position of the target object is tracked; ([0051] West) registering preoperative image data included in the corrected projection image to the intraoperative image data included in the intraoperative space that is captured by the imaging system as the position of the target object is tracked by the tracker; and instructing the display to display the preoperative image data included in the corrected projection image that is co-registered to the intraoperative image data that is captured by the imaging system as the position of the target object is tracked by the tracker. (Registration of the 3D volume and stereoscopic video is performed to provide the augmented reality display, [0059]-[0063], [0078], [0080], [0087]-[0090] Casas.)
Consider claim 16, Casas combined with West clearly teaches the instructing the display comprises: instructing the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of the preoperative image data included in the corrected projection image that is co-registered to the intraoperative image data that is captured by the imaging system as the position of the target object is tracked by the tracker. ([0117] Casas) Consider claim 17, Casas combined with West clearly teaches calculating a transformation matrix between the intraoperative image space and an intraoperative object space, wherein the intraoperative image space is a space occupied by the target object that a surgical procedure is being performed; ([0051] West) registering the intraoperative image data as captured by the imaging system as the position of the target of interest is tracked by the tracker to the intraoperative object space that is occupied by the target object during the surgical procedure; and instructing the display to display the intraoperative image data in the corrected projection image that is co-registered to the intraoperative object space that is occupied by the target object. (Registration of the 3D volume and stereoscopic video is performed to provide the augmented reality display, [0059]-[0063], [0078], [0080], [0087]-[0090] Casas.) Consider claim 18, Casas combined with West clearly teaches the instructing the display comprises: instructing the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of the intraoperative image data included in the corrected projection image that is co-registered with the intraoperative object space as the position of the target object is tracked by the tracker. 
([0117] Casas) Consider claim 19, Casas combined with West clearly teaches the instructing the display further comprises: instructing the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of fluorescence image data included in the corrected projection image that is co-registered with the intraoperative object space. ([0073], [0117] Casas) Consider claim 20, Casas combined with West clearly teaches the instructing the display further comprises: instructing the display to enable the user to view the surrounding environment of the user via the display simultaneously with the display of color image data included in the corrected projection image that is co-registered with the intraoperative object space. ([0078], [0130], [0117] Casas) Conclusion In the case of amending the claimed invention, applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN R SCHNURR whose telephone number is (571)270-1458. The examiner can normally be reached M-F 6a-4p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN R SCHNURR/
Primary Examiner, Art Unit 2425
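A note on the recurring claim language: claims 7, 15, and 17 each recite calculating a transformation matrix between two 3D coordinate spaces (e.g., an intraoperative image space and an intraoperative object space) and co-registering image data between them. For readers unfamiliar with that idiom, such a rigid transform is commonly estimated from corresponding fiducial points with the Kabsch/Procrustes method. The sketch below is purely illustrative; it is not drawn from Casas, West, or the application itself, and every name in it is hypothetical.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate a 4x4 homogeneous matrix mapping points in a source
    coordinate space onto corresponding points in a destination space
    (Kabsch/Procrustes). src, dst: (N, 3) arrays of paired points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
    t = dst_c - R @ src_c                    # optimal translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Given fiducials measured in both spaces, the returned matrix maps "image space" coordinates into "object space" coordinates, which is the sense in which the claims speak of co-registering tracked intraoperative image data to the space occupied by the target object.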

Prosecution Timeline

Nov 26, 2024: Application Filed
Nov 19, 2025: Non-Final Rejection (§102, §103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593962: ENDOSCOPE SYSTEM AND COORDINATE SYSTEM CORRECTION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598359: DISPLAY DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587703: VIDEO DISPLAY SYSTEM, OBSERVATION DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587729: Method And System For A Trail Camera With Modular Fresnel Lenses (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579603: IMAGE PROJECTION DEVICE AND METHOD FOR OPERATING THE SAME (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 83% (+10.8% lift)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 943 resolved cases by this examiner. Grant probability derived from career allow rate.
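The "with interview" figure appears to follow from simple arithmetic on the two numbers the dashboard shows: the career allow rate plus the examiner's interview lift, expressed in percentage points. How the tool actually combines them is an assumption on our part; a minimal sketch:

```python
# Assumption: the dashboard simply adds the interview lift (in
# percentage points) to the career allow rate; its real model may
# differ (e.g., a conditional rate computed only over interviewed cases).
career_allow_rate = 72.0   # % (678 granted / 943 resolved, per the page)
interview_lift = 10.8      # percentage points, from interviewed cases

with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.1f}% -> displayed as {round(with_interview)}%")
# 82.8% -> displayed as 83%
```

The rounded result matches the 83% shown above, which is consistent with, though not proof of, this additive reading.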
