Prosecution Insights
Last updated: April 19, 2026
Application No. 17/536,011

Transfer of Alignment Accuracy Between Visible Markers Used with Augmented Reality Displays

Status: Non-Final OA (§103)
Filed: Nov 27, 2021
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: Novarad Corporation
OA Round: 5 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 5-6
Time to Grant: 3y 5m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (above average; +11.2% vs TC avg), 608 granted / 831 resolved
Interview Lift: +14.1% (moderate), across resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 35 applications currently pending
Career History: 866 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 831 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07/28/2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-11, 14-29 and 32-33 are rejected under 35 U.S.C. 103 as being unpatentable over Holladay et al., U.S. Patent Publication Number 2020/0321099 A1, in view of Chopra et al., U.S. Patent Publication Number 2022/0008141 A1, further in view of Gullotti et al., U.S. Patent Publication Number 2019/0209080 A1, and Speck et al., U.S. Patent Publication Number 2015/0331078 A1.

Regarding claim 1, Holladay discloses a method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: registering one or more initial visible markers attached to the body of the person, wherein the initial visible marker is located in a fixed position relative to an image visible marker (paragraph 0015, registering a tracking system and one or more models with an augmented reality (AR) visual field; the marker device includes a first fiducial marker to provide a pattern that is visible in an image generated by a set of cameras having a fixed position with respect to a visualization space, e.g., the AR visual field; paragraph 0028, marker is placed adjacent or attached to a patient's body); identifying a location of one or more additional visible markers physically attached to the body of the person and in a 3D coordinate system as viewed by the AR headset of one or more initial visible markers (paragraph 0016, the marker device also includes one or more second fiducial markers, detectable by a three-dimensional spatial tracking system; paragraph 0080, the position of the object sensor 438 within the patient's body, as represented by tracking data 426, may be registered into the AR space); transferring the 3D coordinate system as viewed by the AR headset of the one or more initial visible markers to the
one or more additional visible markers (paragraph 0017, using the known, fixed position of each camera with respect to the AR device, the identified portions of the marker pattern are converted to corresponding three-dimensional locations in a three-dimensional spatial coordinate system of the AR system; paragraph 0080, generate additional transform matrices 412 and/or 414 to enable co-registration of additional data and visualization in the coordinate system; the system provides the image data 472 (e.g., including a known location of the object tracking sensor 438) that may be used to facilitate generating the transform matrix); and aligning the image data set (paragraph 0020, aligning one or more objects which have a spatial position and orientation known in another coordinate system, with the coordinate system of the AR display).

However, it is noted that Holladay discloses providing the image data and aligning one or more objects with a known spatial position and coordinate system, but fails to specifically disclose registering one or more additional markers in preparation for the one or more initial markers losing their alignment accuracy to align the image data set with the body of the person, and further fails to disclose utilizing the one or more additional visible markers in aligning the image data set with the body of the person.
Gullotti discloses registering one or more initial visible markers attached to the body of the person (paragraph 0027, registering the location of one or more fiducial markers inside or outside a surgical site of the patient; figure 4A), wherein the initial visible marker is located in a fixed position relative to an image visible marker (paragraph 0704, register the fiducial's 3D location and orientation with respect to the coordinates of the 3D-tracking acquisition unit; paragraph 0006, radiopaque markers configured to be visually observable using an X-ray source or imager; paragraph 0032, one fixed or mobile marker); identifying a location of one or more additional visible markers physically attached to the body of the person and in a 3D coordinate system (paragraph 0497, the placement of one or more additional surface marker fiducials) as viewed by the AR headset of one or more initial visible markers (paragraph 0041, 3D tracking camera or imaging system configured to track the at least one trackable marker; figure 31, paragraph 0604, body-mounted 3D-tracking camera); and transferring the 3D coordinate system of the one or more initial visible markers to the one or more additional visible markers, in preparation for the one or more initial visible markers losing their alignment accuracy to align the image data set with the body of the person (paragraph 0019, register a unique orientation of the coordinate axes of the depth-stop fiducial, and/or detect how the depth-stop fiducial rotates and translates in 3D space between one or more registrations; paragraph 0515, after the surgical drape is applied over the skin-mounted fiducial, the over-the-drape-mating fiducial can then be used to interpret the position of the underlying skin-mounted fiducial; paragraph 0697, mating the second-half fiducial to the original fiducial marker placed on or inside soft tissue to maintain access to the fiducial after the introduction of surgical drapes and other obstructing materials;
figure 6A). However, it is noted that Holladay in view of Gullotti fail to disclose registering the additional marker and utilizing the one or more additional visible markers in aligning the image data set with the body of the person.

Chopra discloses registering one or more initial visible markers attached to the body of the person (paragraph 0012, one or more fiducial markers placed on a patient body; paragraph 0090, fiducials may be registered with the control unit), wherein the initial visible marker is located in a fixed position relative to an image visible marker (paragraph 0164, may use image scan data combined with one or more fiducial marker positions; the two image types can be correlated, and combined with an image correlation with a visual image and the electromagnetic image set 2010); identifying a location of one or more additional visible markers physically attached to the body of the person and in a 3D coordinate system as viewed by the AR headset of one or more initial visible markers (paragraph 0165, identify markers in the image; determine if a marker is found; if the markers are found, they are then registered with Mct); transferring the 3D coordinate system, as viewed by the AR headset, of the one or more initial visible markers to the one or more additional visible markers (paragraph 0165, if the markers are found, they are then registered with Mct); and utilizing the one or more additional visible markers in aligning the image data set with the body of the person (paragraph 0178, the system may also use another exterior image set using fiducial markers having the same location as the first set). It is further noted that Chopra, Holladay, and Gullotti fail to specifically recite transferring the 3D coordinate system.
Speck discloses transferring the 3D coordinate system (paragraph 0033, at least one second marker is provided to be arranged on a movable object such that the position and the orientation of the marker in the coordinate system of the tracking system can be detected during imaging and can be transferred to the coordinate system of the imaging system; this enables tracking of the orientation and position of image recording in a continuous fashion).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the co-registration of additional data as disclosed by Holladay, preparing for markers covered by drapes or losing alignment as disclosed by Gullotti, to maintain access to the fiducial after the introduction of surgical drapes and other obstructing materials. It further would have been obvious to include the additional fiducial markers in aligning an image as disclosed by Chopra, to produce an enhanced reality vision with matched fiducials with a high level of confidence, in that one of ordinary skill in the art would recognize that the more fiducials/landmarks/points used for alignment, the more accurate the final alignment of the correlated visual picture. It further would have been obvious to one having ordinary skill in the art to transfer the coordinate system as disclosed by Speck to enable tracking in a continuous fashion.

Regarding claim 2, Holladay discloses further comprising identifying that the one or more initial visible markers have diminished accuracy for alignment of the image data set (paragraph 0020, can include objects that are not visible within a visual field of the AR device; the sensors may be hidden from sight, including positioned within a patient's body). Chopra further discloses, at paragraph 0165, that the system attempts to identify markers in the image.
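The transfer step at the heart of claim 1, re-expressing the image data set's registration from an initial marker to an additional marker before the initial marker loses alignment accuracy, can be sketched with rigid transforms. This is an illustrative assumption only, not code from the application or any cited reference; the function and variable names and the 4x4 homogeneous-transform representation are hypothetical.

```python
# Hypothetical sketch: transferring an image-to-headset alignment from an
# initial visible marker to an additional visible marker, so the alignment
# survives if the initial marker is later covered, moved, or unrecognizable.
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transfer_alignment(T_headset_initial, T_headset_additional, T_initial_image):
    """Re-express the image data set's pose relative to the additional marker.

    T_headset_initial    : pose of the initial marker in the headset frame
    T_headset_additional : pose of the additional marker in the headset frame
    T_initial_image      : image data set registered to the initial marker
    Returns T_additional_image, valid while both markers stay fixed to the body.
    """
    # Image pose in the headset frame, reached via the initial marker.
    T_headset_image = T_headset_initial @ T_initial_image
    # Re-anchor that pose to the additional marker's frame.
    return np.linalg.inv(T_headset_additional) @ T_headset_image
```

Once `T_additional_image` is stored, alignment can continue from the additional marker alone, which mirrors the claimed "in preparation for the one or more initial markers losing their alignment accuracy" language.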
Regarding claim 3, Holladay discloses wherein the one or more initial visible markers with a diminished accuracy for alignment are at least one of: one or more initial visible markers which have moved, one or more initial visible markers which potentially will move, a portion of one or more initial visible markers is covered, or a portion of the one or more initial visible markers is not recognizable (paragraph 0055, provides each of the locations as 3D spatial coordinates in the tracking system coordinate space, which may remain fixed if the marker device does not move in the tracking space or may vary over time if the marker device moves in tracking space). Chopra further discloses, in paragraph 0092, that the fiducial marker may move in three dimensions during the course of a medical procedure; it should be appreciated that as a patient breathes, or moves for any reason, the fiducial marker will also move by an amount corresponding to its placement on the patient body; paragraph 0165, determines if a marker is found; if the markers are not found, the image is rejected and a new image is captured.

Regarding claim 4, Holladay discloses wherein utilizing the one or more additional visible markers further comprises utilizing the additional visible markers by at least one of: emphasizing, de-emphasizing, terminating, or removing the use of the additional visible markers (Holladay, paragraph 0037, the multimodal marker device can be placed near a patient, e.g., next to or on the patient, during the acquisition of the first and second images; it can be placed on a visibly unobstructed surface or attached to the patient's body; one side surface of the marker includes a fiducial marker located within a white colored border to provide contrast between the white border and the thick black border of the fiducial marker).
Regarding claim 5, Holladay discloses further comprising at least one of: de-emphasizing, terminating, removing, or emphasizing the use of the initial visible markers in aligning the image data set (paragraph 0029, marker identification may be fully automated and/or user-interactive in response to a user input identifying the markers).

Regarding claim 6, Holladay discloses further comprising aligning the image data set with the body of the person using one or more visible markers on the body of the person as viewed through the AR headset and the fixed position of the image visible marker with respect to the visible marker as referenced to a representation of the image visible marker in the image data set (paragraph 0033, align internal anatomical structures (that are not visible in the real world) with the patient's body in the spatial coordinate system of the AR display, which may be moving with respect to the patient's body; advantageously, by implementing the method 100, the transform computed at 110 changes in response to changing information in the acquired images).

Regarding claim 7, Holladay discloses further comprising receiving a notification that the one or more initial visible markers have moved upon detecting that a relative distance between two visible markers on the body of the person or on the skin has changed, wherein the two visible markers become displaced visible markers (paragraph 0118, provides a visualization of the 2D images registered in the 3D image based on the transform T2; if the landmarks are properly aligned, as shown on the display 510, no correction may be needed; however, if the locations of landmarks in the 2D image do not align with their respective locations in the 3D image, correction may be needed to T2).
Chopra, paragraphs 0191-0194, further discloses that patient marker fiducials are correctly aligned and verified, so P.sub.o' overlays on P.sub.o in the enhanced reality image, but the reference and sensed co-ordinates of a known model point do not line up and the system detects this; there may be non-rigid motions, so they won't align. The non-rigid motion may be described in three categories: [0192] (1) due to the marker patch (Po) motion on skin; [0193] (2) due to model (Mi) movement; and [0194] (3) due to the tool perforating out of the model constraints.

Regarding claim 8, Holladay discloses further comprising de-emphasizing or terminating use of the displaced visible markers for aligning the image data set (paragraph 0117, a user initiates corrections using mouse-down/drag/mouse-up action or other actions through the user interface; such transformations thus allow a user to change the view of a single image or the alignment of multiple images; paragraph 0118, if landmarks are properly aligned, as shown on the display, no corrections may be needed).

Regarding claim 9, Holladay discloses further comprising de-emphasizing the one or more initial visible markers and emphasizing only the additional visible markers (paragraph 0118, implementing corrections for transform T2, the domain registration manager 494 applies the transform T2 to the image data 472 and the output generator 512 provides a visualization of the 2D images registered in the 3D image based on the transform T2; if the landmarks are properly aligned, as shown on the display 510, no correction may be needed; however, if the locations of landmarks in the 2D image do not align with their respective locations in the 3D image, correction may be needed to T2).
Regarding claim 10, Holladay discloses further comprising re-aligning the image data set with the body of the person using the one or more additional visible markers that have been emphasized for alignment (paragraph 0118, a user can thus adjust the alignment of the 2D image with respect to the 3D image through the user interface; adjustment may include translation in two dimensions, rotation and/or scaling in response to instructions; may update the visualization shown in the display to show the image registration in response to each adjustment; paragraph 0122, corners of the marker, or other portions thereof, may be illuminated or otherwise differentiated in the output visualization to confirm that such portions of the marker are properly registered). Chopra, paragraph 0188, further discloses realigning markers from the scan image position to the actual position.

Regarding claim 11, Holladay discloses further comprising identifying one or more additional visible markers which have been added to the body of the person and anchored to an inner physical layer of the body of the person (paragraph 0020, aligning one or more objects (physical and/or virtual objects), which have a spatial position and orientation known in another coordinate system, with the coordinate system of the AR display; the objects can include objects (e.g., sensors and/or models representing internal anatomical structures) that are not visible within a visual field of the AR device; for example, one or more sensors have position and orientation detectable by a three-dimensional tracking system; the sensors may be hidden from sight, including positioned within a patient's body, as well as be part of a marker device).
Regarding claim 14, Holladay discloses further comprising identifying displaced visible markers due to a change in relative distance of the displaced visible marker with respect to at least one of: another visible marker, a visible landmark on the body of the person, a visible anatomical feature of the body of the person, a visible facial feature, a visible bone protrusion, a visible tissue protrusion, or a visible body contour (paragraph 0122, annotations are shown in the output visualization to provide the user with additional information, such as distance from an object, e.g., to which an object tracking sensor is attached). Chopra, paragraph 0186, further discloses displacement of a known fiducial marker between its real image coordinates and sensed coordinates.

Regarding claim 15, Holladay discloses wherein registering the one or more additional visible markers includes transferring at least a portion of alignment influence for the image data set to the one or more additional visible markers (paragraph 0118, a user can thus adjust the alignment of the 2D image with respect to the 3D image through the user interface; adjustment may include translation in two dimensions, rotation and/or scaling in response to instructions; may update the visualization shown in the display to show the image registration in response to each adjustment).
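The displacement test recited for claims 7 and 14, flagging a marker when its relative distance to other markers has changed, can be sketched as follows. This is an illustrative assumption of one possible implementation, not code from the application or the cited references; the function name, data layout, and 2 mm tolerance are all hypothetical.

```python
# Hypothetical sketch: flag markers as "displaced" when any pairwise distance
# drifts beyond a tolerance from the distances recorded at registration time.
import numpy as np

def find_displaced(registered: dict, current: dict, tol_mm: float = 2.0) -> set:
    """Return ids of markers whose pairwise distances drifted beyond tol_mm.

    registered / current map marker id -> (x, y, z) position in millimeters.
    """
    displaced = set()
    ids = sorted(registered)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            d0 = np.linalg.norm(np.asarray(registered[a]) - np.asarray(registered[b]))
            d1 = np.linalg.norm(np.asarray(current[a]) - np.asarray(current[b]))
            if abs(d1 - d0) > tol_mm:
                # Either endpoint of the pair may be the one that moved,
                # so both are flagged for de-emphasis or review.
                displaced.update((a, b))
    return displaced
```

A pairwise check alone cannot say which endpoint moved; claim 14's comparison against fixed anatomical landmarks is one way such an ambiguity could be resolved.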
Regarding claim 16, Holladay discloses a method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person as viewed through the AR headset, comprising: obtaining a generated image of at least a portion of the body of the person represented in the image data set (paragraph 0080, generate additional transform matrices 412 and/or 414 to enable co-registration of additional data and visualization in the coordinate system; figure 9); aligning the image data set to the x-ray generated image by using data fitting to align identified anatomical structures in the image data set and the x-ray generated image (paragraph 0020, aligning one or more objects which have a spatial position and orientation known in another coordinate system, with the coordinate system of the AR display); registering one or more additional optical codes physically attached to the body of the person and in a 3D coordinate system of one or more initial optical codes (paragraph 0015, registering a tracking system and one or more models with an augmented reality (AR) visual field; the marker device includes a first fiducial marker to provide a pattern that is visible in an image generated by a set of cameras having a fixed position with respect to a visualization space, e.g., the AR visual field; paragraph 0028, marker is placed adjacent or attached to a patient's body); and receiving a notification that one or more initial optical codes have moved (figure 9).

However, it is noted that Holladay fails to disclose utilizing the additional optical codes to maintain alignment of the image data set with the body of the person by applying the 3D coordinate system as viewed by the AR headset to the additional optical codes.
Chopra discloses the x-ray (paragraph 0084, data from a pre-operative computed tomography angiography scan may be combined with visual image scans of a patient using one or more fiducial markers on or in the patient); registering one or more initial visible markers attached to the body of the person (paragraph 0012, one or more fiducial markers placed on a patient body; paragraph 0090, fiducials may be registered with the control unit; paragraph 0184, sensors of known location and position relative to the markers; paragraph 0178, the control unit can collect the exterior image of the patient having fiducial markers on the skin); and utilizing the additional code to maintain alignment of the image data set with the body of the person by applying the 3D coordinate system as viewed by the AR headset to the additional optical codes (paragraph 0164, using the fiducial markers as reference points to help correlate the visual picture; paragraph 0178, the two maps are then combined and correlated to produce an enhanced reality vision of the internal anatomy of a patient matched to the exterior fiducials).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the co-registration of additional data as disclosed by Holladay, the additional data as x-ray data and the utilization of additional codes as disclosed by Chopra, to produce an enhanced reality vision with matched fiducials with a high level of confidence, in that one of ordinary skill in the art would recognize that the more fiducials/landmarks/points used for alignment, the more accurate the final alignment of the correlated visual picture.
Regarding claim 17, Holladay discloses wherein receiving a notification that the one or more initial optical codes have moved is performed by either: detecting that a relative distance between two optical codes on an outer layer of the body of the person or skin has changed, wherein the two optical codes become displaced optical codes; or receiving a user interface control message from a user that a position of the optical codes has changed due to movement of the skin (Holladay, figure 9).

Regarding claim 18, Holladay discloses wherein utilizing the additional optical codes further comprises emphasizing use of the additional optical codes (Holladay, paragraph 0122, corners of the marker, or other portions thereof, may be illuminated or otherwise differentiated in the output visualization to confirm that such portions of the marker are properly registered).

Regarding claim 19, Holladay discloses further comprising aligning the image data set and x-ray generated image with patient anatomy viewable through the AR headset using an initial optical code formed in a radiopaque marker represented in the x-ray generated image as referenced to the initial optical code formed in the radiopaque marker visible on the body (paragraph 0024, the marker device includes one or more radio-opaque objects in the tracking pad having a known position and orientation with respect to one or more tracking sensors; the marker device enables determining a transform to spatially align the space of the tracking system with the intra-operative images; paragraph 0026, another transform is determined to spatially align the coordinate systems of the intra-operative images with the pre-operative CT scan). Chopra further discloses, in paragraph 0079, radiopaque markers or other elements that can be detected by a scanning operation and export detected signals to a control unit.
Regarding claim 20, Holladay discloses further comprising de-emphasizing or terminating use of the displaced optical codes for aligning the image data set (paragraph 0117, a user initiates corrections using mouse-down/drag/mouse-up action or other actions through the user interface; such transformations thus allow a user to change the view of a single image or the alignment of multiple images; paragraph 0118, if landmarks are properly aligned, as shown on the display, no corrections may be needed).

Regarding claim 21, Holladay discloses further comprising de-emphasizing initial optical codes for use in alignment which are not additional optical codes (paragraph 0117, a user initiates corrections using mouse-down/drag/mouse-up action or other actions through the user interface; such transformations thus allow a user to change the view of a single image or the alignment of multiple images; paragraph 0118, if landmarks are properly aligned, as shown on the display, no corrections may be needed).

Regarding claim 22, Holladay discloses further comprising re-aligning the image data set with the body of the person using the one or more additional optical codes, which have been emphasized, as viewed through the AR headset (paragraph 0118, a user can thus adjust the alignment of the 2D image with respect to the 3D image through the user interface; adjustment may include translation in two dimensions, rotation and/or scaling in response to instructions; may update the visualization shown in the display to show the image registration in response to each adjustment; paragraph 0122, the corners of the marker may be illuminated or otherwise differentiated in the output visualization to confirm that such portions of the marker are properly registered).
Regarding claim 23, Holladay discloses further comprising identifying one or more additional optical codes which have been added to the body of the person and anchored to an inner layer of the body of the person (paragraph 0020, can include objects that are not visible within a visual field of the AR device; the sensors may be hidden from sight, including positioned within a patient's body).

Regarding claim 24, Holladay discloses further comprising identifying an additional optical code attached to at least one of: a bone of the body of the person, a bone pin placed in a bone of the body of the person, an organ, a blood vessel or an inner tissue of the body of the person (figure 9).

Regarding claim 25, it is rejected based upon a similar rationale as claim 16 above. Holladay discloses a system (400, system) for using an augmented reality (AR) headset (AR display, 510) to co-localize an image data set, containing a radiopaque marker, with a body of a person as viewed through the AR headset (figure 9), comprising: at least one processor of the AR headset (paragraph 0051, processor); a memory device (101) of the AR headset including instructions that, when executed by the at least one processor, cause the system to: obtain a generated image of at least a portion of the body of the person represented in the image data set (figure 9); align the image data set to the generated image by using data fitting to align identified anatomical structures in the image data set and the x-ray generated image (paragraph 0018, align the tracking sensor with a coordinate system; register a tracking coordinate system with a prior three-dimensional (3D) image scan; may be a CT scan, MRI; paragraph 0025); and register one or more additional optical codes physically attached to the body of the person, applying a 3D coordinate system as viewed by the AR headset for one or more initial optical codes to the additional optical codes (paragraph 0015, utilizes a marker device
that includes fiducial markers detectable by more than one modality; includes a first fiducial marker that is visible in an image generated by a set of cameras having a fixed position with respect to a visualization space and another set of one or more markers detectable by a three-dimensional spatial tracking system; paragraph 0018, register a tracking coordinate system with a prior three-dimensional (3D) image scan).

Regarding claim 26, Holladay discloses wherein utilizing the additional optical codes further comprises emphasizing use of the additional optical codes to maintain alignment of the image data set with the body of the person and terminating use of displaced optical codes for purposes of aligning the image data set with the body of the person (paragraph 0117, a user initiates corrections using mouse-down/drag/mouse-up action or other actions through the user interface; such transformations thus allow a user to change the view of a single image or the alignment of multiple images; paragraph 0118, if landmarks are properly aligned, as shown on the display, no corrections may be needed).

Regarding claim 27, Holladay discloses further comprising re-aligning the image data set with the body of the person using the one or more additional optical codes that have been emphasized (paragraph 0118, a user can thus adjust the alignment of the 2D image with respect to the 3D image through the user interface; adjustment may include translation in two dimensions, rotation and/or scaling in response to instructions; may update the visualization shown in the display to show the image registration in response to each adjustment; paragraph 0122, the corners of the marker may be illuminated or otherwise differentiated in the output visualization to confirm that such portions of the marker are properly registered).
Regarding claim 28, Holladay discloses further comprising identifying an additional optical code added to the body of the person that is attached to at least one of: a bone of the body of the person, a bone pin in a bone of the body of the person, an organ, a blood vessel or an inner tissue of the body of the person (Holladay, paragraph 0053, one or more multi-modal marker devices can be attached to the patient's body or placed near the patient body; the combination marker system can include one or more marker tracking sensors; one or more sensors can be affixed relative to an object that is movable within the patient's body).

Regarding claim 29, Holladay discloses further comprising identifying one or more additional optical codes that have become displaced optical codes due to a change in relative distance from at least one of: a visible landmark of the body of the person, a visible anatomical feature of the body of the person, a visible facial feature, a visible bone protrusion, a visible tissue protrusion, or a visible body contour (figure 9).

Regarding claim 32, Holladay discloses further comprising transferring of alignment accuracy in the 3D coordinate system as viewed by the AR headset of the one or more initial visible markers to the one or more additional visible markers (paragraph 0080, generate additional transform matrices 412 and/or 414 to enable co-registration of additional data and visualization in the coordinate system; figure 9).

Regarding claim 33, Holladay discloses further comprising transferring of alignment accuracy to the additional visible markers after aligning the image data set with the body of the person using one or more visible markers on the body of the person (figure 9).

Response to Arguments

Applicant's arguments, see pages 8-11, filed 07/28/2025, with respect to the rejection(s) of claim(s) 1-11, 14-29 and 32-33 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, a new ground(s) of rejection is made under 35 U.S.C. 103 over Holladay and Chopra in view of Gullotti.

Applicant argues the prior art cited fails to disclose "transferring the 3D coordinate system, as viewed by the AR headset, of the one or more initial visible markers to the one or more additional visible markers, in preparation for the initial visible markers losing their alignment accuracy to align the image data set with the body of the person".

Examiner responds Gullotti discloses paragraph 0019, register a unique orientation of the coordinate axes of the depth-stop fiducial, and/or detect how the depth-stop fiducial rotates and translates in 3D space between one or more registrations; paragraph 0515, after the surgical drape is applied over the skin-mounted fiducial, the over-the-drape-mating fiducial can then be used to interpret the position of the underlying skin-mounted fiducial; figure 6A, and paragraph 0697, mating the second-half fiducial to the original fiducial marker placed on or inside soft tissue to maintain access to the fiducial after the introduction of surgical drapes and other obstructing materials.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Crawford et al., U.S. Patent Publication Number 2019/0029765 A1. Crawford discloses paragraph 0146, the initial reference array, however, may be in a position that obstructs the surgical procedure. The surgeon may therefore wish to attach a new reference array at another location to complete the medical procedure (e.g., a surgery). Rather than starting over with a new registration which may require capturing locations of the markers of the new reference array with respect to the 3D tracking cameras 200 and with respect to the medical image volume, processor 2007 may computationally transfer registration of the initial reference array to provide registration for the new reference array.
Stated in other words, because the new reference array and the initial reference array are fixed to the same rigid body (e.g., bone or bones that are currently stationary), if the position of the new reference array relative to the initial reference array is detected in one coordinate system (e.g., in the camera coordinate system), the position of the new reference array relative to the initial reference array may be assumed to be the same in the other coordinate system (e.g., the image coordinate system).

Lang, U.S. Patent Publication Number 2017/0258526 A1. Lang discloses paragraph 0146, markers can be attached to an OHMD and, optionally, portions or segments of the patient or patient's anatomy; paragraph 0146, the OHMD and the patient or patient's anatomy can be cross-referenced in this manner or registered in one or more coordinate systems used by the navigation system, and movements of the OHMD or the operator wearing the OHMD can be registered in relationship to the patient within these one or more coordinate systems; paragraph 0146, once the virtual data and live data of the patient and the OHMD are registered in the same coordinate system, e.g., using IMUs, optical markers, navigation markers including infrared markers, retroreflective markers, RF markers, and any other registration method described in the specification or known in the art, any change in position of any of the OHMDs in relation to the patient measured in this fashion can be used to move virtual data of the patient in relationship to live data of the patient; paragraph 0146, the visual image of the virtual data of the patient and the live data of the patient seen through the OHMD are always aligned, irrespective of movement of the OHMDs and/or the operator; paragraph 0146, the foregoing embodiments can be achieved since the IMUs, optical markers, RF markers, infrared markers and/or navigation markers placed on the operator and/or the patient as well as any spatial anchors can be registered in the same coordinate system as the primary OHMD and any additional OHMDs. The position, orientation, alignment, and change in position, orientation and alignment in relationship to the patient and/or the surgical site of each additional OHMD can be individually monitored, thereby maintaining alignment and/or superimposition of corresponding structures in the live data of the patient and the virtual data of the patient for each additional OHMD irrespective of their position, orientation, and/or alignment in relationship to the patient and/or the surgical site.

Elimelech et al., U.S. Patent Number 11,980,507 B2. Elimelech discloses col. 4, lines 35-7, registering a patient marker, visible in a first, optical, modality, that is attached to a spinous process of a patient; a registration marker which is configured to be visible to both modalities, i.e., both optically and under fluoroscopic imaging; col. 5, lines 62-63, patient marker 38 is used as a fiducial for patient 30; col. 5, line 67 – col. 6, line 1, marker 38 is registered with the anatomy of a patient; figure 1; col. 6, lines 3-6, registration marker 40 is placed on the patient's back, and is used to implement the registration of patient marker 38 with the anatomy of patient 30; col. 6, lines 42-45, registration marker 40, which is assumed to define a registration marker frame of reference 50, herein assumed to comprise an orthogonal set of xyz axes; col. 10, lines 89-49, analyzes the camera image of patient marker 38 and registration marker 40; calculates the position and orientation of registration marker frame of reference 50, and the position and orientation of patient marker frame of reference 36; formulates a registration of the two frames of reference as a set of vectors Q describing the transformation of the registration marker frame of reference to the patient marker frame of reference.

Cvetko et al., U.S. Patent Publication Number 2019/0348169 A1. Cvetko discloses paragraph 0075, sense the optical code affixed to a patient and a position of the optical code in the 3D space; paragraph 0078, registering the position of the inner layer of the patient in the 3D space by aligning the calculated position of the pattern of markers in the image data; paragraph 0050, one or more additional internal markers may be inserted within the patient; one or more additional internal markers may then be located and triangulated with the pattern of markers; paragraph 0080, using additional markers.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571) 272-7658. The examiner can normally be reached Monday - Friday 6am-2:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619
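The registration transfer that Crawford describes above reduces to composing rigid-body transforms: since both reference arrays are fixed to the same rigid body, the relative pose of the new array (measured once by the tracking cameras) can simply be composed with the initial array's existing registration to yield the new array's registration in the medical-image coordinate system, with no re-registration required. A minimal NumPy sketch of that composition, using illustrative poses and 4×4 homogeneous-transform conventions that are assumptions of this example rather than anything specified in the cited publication:

```python
import numpy as np

def make_transform(rotation_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

def transfer_registration(T_image_from_initial, T_initial_from_new):
    """Transfer registration from the initial reference array to a new one.

    Both arrays sit on the same rigid body, so the relative pose detected in
    the camera coordinate system is assumed to hold in the image coordinate
    system as well; the new registration is then a single composition.
    """
    return T_image_from_initial @ T_initial_from_new

# Existing registration of the initial array in the image coordinate system.
T_image_from_initial = make_transform(30.0, [10.0, -5.0, 2.0])
# Relative pose of the new array, measured once by the tracking cameras.
T_initial_from_new = make_transform(-12.0, [0.0, 4.0, 1.5])

T_image_from_new = transfer_registration(T_image_from_initial, T_initial_from_new)
```

Any point expressed in the new array's frame can now be mapped straight into the image volume with `T_image_from_new`, which is exactly the "computational transfer" shortcut: one matrix product instead of a fresh registration procedure.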

Prosecution Timeline

Nov 27, 2021
Application Filed
Oct 08, 2022
Non-Final Rejection — §103
Mar 31, 2023
Response Filed
Jun 08, 2023
Final Rejection — §103
Dec 13, 2023
Request for Continued Examination
Dec 18, 2023
Response after Non-Final Action
Apr 12, 2024
Non-Final Rejection — §103
Oct 18, 2024
Response Filed
Jan 22, 2025
Final Rejection — §103
Jul 28, 2025
Request for Continued Examination
Jul 30, 2025
Response after Non-Final Action
Dec 05, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107
SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12602884
DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY
2y 5m to grant Granted Apr 14, 2026
Patent 12597218
EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES
2y 5m to grant Granted Apr 07, 2026
Patent 12592047
Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus
2y 5m to grant Granted Mar 31, 2026
Patent 12573100
USER-DEFINED CONTEXTUAL SPACES
2y 5m to grant Granted Mar 10, 2026


Prosecution Projections

5-6
Expected OA Rounds
73%
Grant Probability
87%
With Interview (+14.1%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 831 resolved cases by this examiner. Grant probability derived from career allow rate.
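The headline projections above follow directly from the examiner's career counts shown on this page: 608 grants out of 831 resolved cases gives the 73% baseline grant probability, and adding the observed +14.1-point interview lift yields the 87% with-interview figure. A quick sketch of that arithmetic (the page's actual model may differ):

```python
granted, resolved = 608, 831          # examiner career counts from this page
allow_rate = granted / resolved       # baseline grant probability
interview_lift = 0.141                # observed lift in interviewed cases

with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")       # 73%
print(f"{with_interview:.0%}")   # 87%
```

Note the simple additive lift is itself an assumption; the underlying data only says interviewed cases resolved favorably 14.1 points more often, not that an interview causes the lift for any given application.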
