Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,301

ALIGNING MULTIPLE COORDINATE SYSTEMS FOR INFORMATION MODEL RENDERING

Final Rejection §103
Filed: Aug 01, 2023
Examiner: KOETH, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: XYZ Reality Limited
OA Round: 2 (Final)

Grant Probability: 77% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% (above average; 331 granted / 429 resolved; +15.2% vs TC avg)
Interview Lift: +16.7% in resolved cases with an interview (strong)
Typical Timeline: 2y 4m avg prosecution (34 currently pending)
Career History: 463 total applications across all art units

Statute-Specific Performance

§101:  7.4% (-32.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102:  8.5% (-31.5% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 429 resolved cases
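The figures above are simple functions of the underlying counts. The sketch below reproduces them; the rounding conventions and the idea of backing out the unstated Tech Center averages from the per-statute deltas are assumptions, not part of the report:

```python
# Reproduce the report's headline figures from the raw counts.
granted, resolved = 331, 429

# Career allowance rate, shown as "77%" above.
allow_rate = granted / resolved  # ~0.7716

# TC averages are not listed directly, but each statute row gives the
# examiner's rate and its delta vs the TC average, so the average can be
# backed out as tc_avg = examiner_rate - delta.
statute_rows = {
    "§101": (7.4, -32.6),
    "§103": (62.2, +22.2),
    "§102": (8.5, -31.5),
    "§112": (14.7, -25.3),
}
tc_avgs = {s: round(rate - delta, 1) for s, (rate, delta) in statute_rows.items()}

print(round(allow_rate * 100))  # 77
print(tc_avgs["§103"])          # 40.0
```

Notably, all four deltas imply the same Tech Center average of about 40%, which is consistent with the rates being measured against one shared TC baseline.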

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments and amendments in the Amendment filed January 9, 2026 (herein “Amendment”), with respect to the objections to claims 3 and 23 have been fully considered and are persuasive. The objection to claims 3 and 23 has been withdrawn.

Applicant’s arguments and amendments in the Amendment with respect to the rejection(s) of claim(s) 1, 18 and 23, and claims depending therefrom under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Salgian et al., US 2019/0347783.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1–5, 10, 12–14, 17–19, and 22–23 are rejected under 35 U.S.C. 103 as being unpatentable over Mitchell, WO 2019/048866 A1 (herein “Mitchell”), in view of Chang, U.S. Patent Application Publication No. US 2021/0318149 A1 (herein “Chang”), further in view of Salgian et al., U.S. Patent Application Publication No. US 2019/0347783 (herein “Salgian”).

Regarding claims 1 and 23, with substantive differences between claims 1 and 23 indicated in curly brackets {}, with deficiencies of Mitchell noted in square brackets [], and with claim 1 as illustrative, Mitchell teaches {Claim 1: a computer-implemented method of displaying an augmented reality building information model within a head-mounted display of a headset, the method comprising (Mitchell ¶¶2 and 60, method of viewing a BIM (building information model) using an augmented reality HMD (head mounted display)) / Claim 23: a non-transitory computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to (Mitchell ¶266, storage device containing computer-executable machine code that can be processed by the processor 608 for controlling the operation of a headset)}: tracking the headset using a plurality of different positioning systems (Mitchell ¶¶61–63, headset tracking using a positional tracking system comprised of multiple sensors with respective sensor data used to determine the location and orientation of the headset, thus each sensor having their own respective position data gathering system (plurality of positioning systems)), 
each positioning system having a corresponding [different] coordinate system that further differs from an extrinsic coordinate system used by the building information model (Mitchell ¶63, intrinsic coordinate system used by the tracking system comprised of sensors, thus each sensor corresponding to the intrinsic coordinate system, where ¶¶ 218-221, and 331, teaches the intrinsic coordinate system must be converted into coordinates in the extrinsic real-world coordinates system or vice versa, and that the building information model uses the real-world/extrinsic coordinate system (thus, the intrinsic coordinate system is different from the extrinsic system used by the building information model)) and comprising one or more sensor devices coupled to the headset (Mitchell ¶62, headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. 
a microphone)), each positioning system determining a location and orientation of the headset over time within the corresponding coordinate system (Mitchell ¶¶62 and 92, location and orientation of the headset represented in headset tracking data based on sensor data received from sensors internal to the headset in the intrinsic coordinate system, where ¶¶246–248 teaches tracking the location and orientation continually (over time) to display in real-time the position); [obtaining a set of transformations that map between the co-ordinate systems of the plurality of positioning systems;] obtaining at least one calibrated transformation that maps between a first one of the co-ordinate systems associated with a corresponding first one of the plurality of positioning systems and the extrinsic coordinate system used by the building information model (Mitchell ¶¶218–221, a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system); obtaining a pose of the headset using [a second one of the plurality of positioning systems], the pose of the headset being defined within [a corresponding second one of the co-ordinate systems], the pose of the headset comprising a location and orientation of the headset (Mitchell ¶¶62, 92, 208 and 217 location and orientation of the headset represented in headset tracking data is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system, where ¶¶215–216 teaches the position being determined in three-dimensional coordinates in an intrinsic coordinate system); and using the set of transformations and the at least one calibrated transformation [to convert between the second one of the co-ordinate systems] used to define the obtained pose and the 
extrinsic co-ordinate system used by the building information model [by mapping the obtained pose between the second one of the co-ordinate systems, associated with the second one of the plurality of positioning systems, and the first one of the co-ordinate systems, associated with the first one of the plurality of positioning systems], and applying the at least one calibrated transformation (Mitchell ¶¶ 245–247, position of the setting-out tool (headset) in the intrinsic coordinate system is translated into the extrinsic, real-world coordinate system using the mathematical transformation) and rendering an augmented reality image of the building information model within the head-mounted display (Mitchell ¶¶276–277, 280, BIM model data is input for display by augmented reality glasses of the headset). Mitchell does not explicitly teach, but Chang teaches obtaining a set of transformations that map between the co-ordinate systems of the plurality of positioning systems (Chang ¶82, pairwise transformation matrices between different pairs of sensors are computed (obtaining a set) based on 3D points estimated by different sensors in their respective coordinate systems). Mitchell further does not explicitly teach where Salgian teaches different coordinate systems (plural) different from an extrinsic coordinate system used by the BIM (Salgian ¶¶31–32, a global localization system with its own coordinate system, and a local measurement system used by a set of second sensors, where both the global and local measurement systems are different from the 3D model/CAD Model/BIM coordinate system of a structure (building)), and a second one of the plurality of positioning systems with a corresponding second one of the co-ordinate systems (Salgian ¶¶31–32, local relative measurement in a coordinate system for a second sensor package). 
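The two-step coordinate conversion that the rejection reads onto the combination (a pairwise transformation between positioning systems, then a calibrated transformation into the BIM's extrinsic frame) can be sketched with 4x4 homogeneous matrices. The specific rotations, translations, and variable names below are illustrative assumptions, not taken from the references:

```python
import numpy as np

def make_transform(yaw_deg, translation):
    """4x4 homogeneous rigid transform: yaw about z plus a translation."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]
    T[:3, 3] = translation
    return T

# One pairwise transform of the "set of transformations" (second positioning
# system's frame -> first positioning system's frame) ...
T_2_to_1 = make_transform(90, [1.0, 0.0, 0.0])
# ... and the calibrated transformation (first system's frame -> extrinsic BIM frame).
T_1_to_ext = make_transform(0, [10.0, 5.0, 0.0])

# Headset pose obtained in the second system's coordinate system.
pose_in_2 = make_transform(0, [2.0, 0.0, 1.5])

# Map the pose into the first system, then into the extrinsic frame used by
# the building information model -- the two-step conversion recited in claim 1.
pose_in_ext = T_1_to_ext @ T_2_to_1 @ pose_in_2

print(np.round(pose_in_ext[:3, 3], 3))  # headset position in BIM coordinates
```

Because rigid transforms compose by matrix multiplication, "mapping the obtained pose between the second and first coordinate systems, then applying the calibrated transformation" reduces to one chained product.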
Mitchell still further does not explicitly teach where Salgian teaches to convert between the second one of the co-ordinate systems (Salgian ¶¶31–32, local measurement system measurements are placed in the global coordinate system, and then the corresponding location within 3D computer model of the building (BIM) is determined), and by mapping the obtained pose between the second one of the co-ordinate systems, associated with the second one of the plurality of positioning systems, and the first one of the co-ordinate systems, associated with the first one of the plurality of positioning systems (Salgian ¶31, first sensor package with global coordinate system communicating with second sensor package using local measurement system to align (convert between) a pose captured by the second sensor package (mapping the obtained pose) with the pose captured by the first sensor package).

Therefore, taking the teachings of Mitchell and Chang together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the pairwise transformation operations disclosed in Chang at least because doing so would allow for simultaneous, and thus more efficient, calibration of multiple sensors of different types. See Chang ¶¶5 and 9–10.

Further, taking the teachings of Mitchell and Salgian together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the multiple sensor systems with their own coordinate systems, and the conversion between these systems and a 3D building coordinate system disclosed in Salgian at least because doing so would allow for the ability to make high-precision and high-accuracy measurements of a physical location. 
See Salgian ¶47.

Regarding claims 2 and 19, with claim 2 as illustrative, Mitchell teaches wherein the method further comprises: wherein the at least one calibrated transformation is used to align the building information model with at least one of the poses to render the augmented reality image (Mitchell ¶¶218–221, coordinates in the intrinsic coordinate system of the tracking system 100 are converted into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system, and where ¶290 teaches an alignment using the tracking data so that the rendered virtual image of the BIM model is in the correct location). Mitchell further teaches a pose of the headset (Mitchell ¶¶62, 92, 208 and 217, location and orientation of the headset represented in headset tracking data is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system). Mitchell does not explicitly teach, but Chang teaches transitioning the tracking of the headset between the plurality of positioning systems (Chang ¶79, multi-sensor calibrator receives a calibration signal based on a detected distance and orientation consistency, and then controls certain sensors to carry out the calibration (transitioning tracking)), wherein a first of the plurality of positioning systems tracks a first pose and a second of the plurality of positioning systems tracks a second pose (Chang ¶50, fig. 
4C, each sensor detects four 3D points in their own respective coordinate systems which correspond to a pose of each sensor), and wherein one of the set of transformations is used to align the co-ordinate systems of the plurality of positioning systems (Chang ¶¶81–82, once individual sensors are calibrated, transformation matrices between the different pairs of sensors are computed based on the 3D points estimated by the different sensors in their respective coordinate systems).

Therefore, taking the teachings of Mitchell and Chang together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the position tracking and transformation operations disclosed in Chang at least because doing so would allow for simultaneous, and thus more efficient, calibration of multiple sensors of different types. See Chang ¶¶5 and 9–10.

Regarding claim 3, with deficiencies of the claim noted in square brackets [], Mitchell teaches wherein the plurality of positioning systems comprise at least a first positioning system with a [first] co-ordinate system and a second positioning system with a [second] co-ordinate system (Mitchell ¶63, intrinsic coordinate system used by the tracking system comprised of sensors, thus each sensor corresponding to the intrinsic coordinate system), [wherein the method further comprises transitioning the tracking of the headset between different ones of the plurality of positioning systems, said transitioning comprising: tracking] the headset (Mitchell ¶62, headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. 
a microphone)) [over time with the first positioning system], including performing a first mapping between a first pose in the first co-ordinate system and the extrinsic co-ordinate system used by the building information model using the at least one calibrated transformation (Mitchell ¶¶218–221, a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system); rendering an augmented reality image of the building information model within the head-mounted display using the first mapping (Mitchell ¶¶276–277, 280, BIM model data is input for display by augmented reality glasses of the headset, where ¶¶40–43 teach using the intrinsic-to-extrinsic mapping determined from the sensor data to generate the display); [transitioning to tracking] the headset (Mitchell ¶62, headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. 
a microphone)) [over time with the second positioning system], including performing a second mapping between a second pose in the second co-ordinate system and the extrinsic co-ordinate system used by the building information model (Mitchell ¶¶218–221, a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system); and rendering an augmented reality image of the building information model within the head-mounted display using the second mapping, wherein the second mapping uses one of the set of transformations to map between the first and second co-ordinate systems and the at least one calibrated transformation to align the location and orientation of the headset with the extrinsic coordinate system (Mitchell ¶¶276–277, 280, BIM model data is input for display by augmented reality glasses of the headset, where ¶¶40–43 teach using the intrinsic-to-extrinsic mapping determined from the sensor data to generate the display on the basis of the derived transformation).

Mitchell teaches that the sensors use the intrinsic coordinate system, and thus that one sensor would use a coordinate system, and a second sensor would use a coordinate system. Mitchell does not, however, teach two separate “first” and “second” coordinate systems. However, Chang teaches a first coordinate system and a second coordinate system (Chang ¶82, pairwise transformation matrices between different pairs of sensors are computed (obtaining a set) based on 3D points estimated by different sensors in their respective coordinate systems). 
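Claim 3's transition between tracking systems amounts to switching which mapping is applied: the first mapping uses only the calibrated transform, while the second mapping first applies the pairwise system-2-to-system-1 transform. A minimal sketch, with made-up transforms and function names:

```python
import numpy as np

# Illustrative transforms (assumptions): the calibrated transformation into
# the extrinsic BIM frame, and one pairwise transform between systems.
T_1_to_ext = np.eye(4)
T_1_to_ext[:3, 3] = [10.0, 5.0, 0.0]
T_2_to_1 = np.eye(4)
T_2_to_1[:3, 3] = [1.0, 0.0, 0.0]

def pose_to_extrinsic(pose, active_system):
    """Map a headset pose to the extrinsic frame for whichever system is tracking."""
    if active_system == 1:
        return T_1_to_ext @ pose          # first mapping (claim 3)
    return T_1_to_ext @ T_2_to_1 @ pose   # second mapping (claim 3)

pose = np.eye(4)
pose[:3, 3] = [2.0, 0.0, 1.5]

before = pose_to_extrinsic(pose, active_system=1)[:3, 3]  # tracked by system 1
after = pose_to_extrinsic(pose, active_system=2)[:3, 3]   # after transitioning to system 2
print(before, after)
```

The same rendering path consumes either result, since both mappings land in the one extrinsic coordinate system used by the building information model.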
Further, as noted above, Mitchell does not explicitly teach, but Chang teaches: wherein the method further comprises transitioning the tracking of the headset between different ones of the plurality of positioning systems, said transitioning comprising: tracking over time with the first positioning system; transitioning to tracking over time with the second positioning system (Chang ¶79, multi-sensor calibrator receives a calibration signal based on a detected distance and orientation consistency (which would change “over time” thus tracking over time), for a particular sensor (thus tracking between different sensors/positioning systems including a first and second) and then controls certain sensors to carry out the calibration (transitioning tracking), ¶50, fig. 4C, each sensor detects four 3D points in their own respective coordinate systems which correspond to a pose of each sensor)).

Therefore, taking the teachings of Mitchell and Chang together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the position tracking and transformation operations disclosed in Chang at least because doing so would allow for simultaneous, and thus more efficient, calibration of multiple sensors of different types. See Chang ¶¶5 and 9–10.

Regarding claim 4, Mitchell teaches the plurality of positioning systems differ by one or more of: sensor devices used to track the headset; method of positioning; or location of use (in view of the claim language only requiring one of the listed items, Mitchell teaches sensor devices used to track the headset in ¶62 teaching headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. 
camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. a microphone)).

Regarding claim 5, Mitchell teaches wherein a first positioning system within the plurality of positioning systems is configured to track the headset within a tracked volume using one or more position-tracking sensors at least coupled to the headset and one or more tracking devices for the tracked volume that are external to the headset within the construction site (Mitchell ¶¶61–62, headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. a microphone), and headset tracking using sensors external to the headset such as tracking sensors placed in corners of the venue/room (tracked volume)), wherein the at least one calibrated transformation is determined using sensor data obtained at control points for the first positioning system (Mitchell ¶¶239–243, fig. 4, control points in the construction site and base stations are used to emit/receive signals for determining locations of sensors in the intrinsic coordinate system, which are further converted to the extrinsic coordinate system). 
Regarding claim 10, with deficiencies of Mitchell noted in square brackets [], Mitchell teaches wherein the positioning systems in the plurality of positioning systems have different ranges and accuracies and include at least a first positioning system with a first range and a first accuracy, and a second positioning system with a second range and a second accuracy, the first range being [less than] the second range and the first accuracy being [greater than] the second accuracy (Mitchell ¶¶61–63, multiple tracking sensors are used for headset tracking using an intrinsic coordinate system, including optical sensors (cameras), proximity or location sensors (Wi-Fi, GPS), a gyroscope, and an audio sensor, each having their own respective accuracies and ranges).

While Mitchell teaches different sensors with different modalities that would almost certainly have different accuracies and ranges, if only due to the different modalities, Mitchell does not explicitly teach that the range of one sensor is less than the range of another sensor, or that the accuracy of that same sensor is greater than the accuracy of the second sensor. However, a PHOSITA, in view of Mitchell’s teachings of a wide variety of different sensors that would inherently have different accuracies and ranges (if just by way of the different modalities), would have been motivated to select one sensor with a range less than, and an accuracy greater than, those of another, as doing so would merely be a matter of design choice. See MPEP §2144.04(VI)(C). 
Regarding claim 12, Mitchell teaches wherein the one or more tracking devices of the first positioning system emit one or more electromagnetic signals, and at least one of the one or more position-tracking sensors is configured to determine a property of the electromagnetic signals that is indicative of an angular distance from the one or more tracking devices (Mitchell ¶¶33–35, 64–65, sensors sensing directional electromagnetic radiation emitted from corresponding beacons that is modulated in such a manner as to indicate the bearing or angular distance of the source, where the sensor detects or measures the properties of the incident signals, including the angular distance).

Regarding claim 13, Mitchell teaches the method further comprising, prior to tracking the headset, calibrating a tracked volume of a first positioning system in the plurality of positioning systems, wherein the calibrating includes (Mitchell ¶¶31–34, 42, signals emitted by beacons on the construction site are sensed by a headset for position tracking, and a transformation is derived between different coordinate systems (calibrating)): receiving control point location data representing the positions of a plurality of control points at the construction site in the extrinsic coordinate system (Mitchell ¶42, coordinates of the control points derived from sensor data received from at least one sensor using the position-tracking system); receiving control point tracking data representing the positions of the control points in an intrinsic coordinate system used by the first positioning system (Mitchell ¶42, coordinates (positions) of the control points in the intrinsic coordinate system are derived from the sensor data received from at least one sensor using the position-tracking system); and relating the positions of the control points in the intrinsic and extrinsic coordinate systems to derive the at least one calibrated transformation (Mitchell ¶49, converting location data between the intrinsic 
coordinate system and the extrinsic coordinate system on the basis of a transformation derived by relating the coordinates of one or more control points of known location in the extrinsic coordinate system to their corresponding coordinates in the intrinsic coordinate system using the position-tracking system).

Mitchell does not explicitly teach but Chang teaches wherein the set of transformations map between the intrinsic co-ordinate system used by the first positioning system and one or more intrinsic coordinate systems used by other positioning systems within the plurality of positioning systems (Chang ¶82, pairwise transformation matrices between different pairs of sensors are computed (obtaining a set) based on 3D points estimated by different sensors in their respective coordinate systems).

Therefore, taking the teachings of Mitchell and Chang together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the pairwise transformation operations disclosed in Chang at least because doing so would allow for simultaneous, and thus more efficient, calibration of multiple sensors of different types. See Chang ¶¶5 and 9–10. 
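Claim 13's step of relating control-point positions in the intrinsic and extrinsic coordinate systems to derive the calibrated transformation is, in effect, a rigid point-set fit. Neither the claim nor Mitchell names an algorithm; the Kabsch/SVD method below is one standard choice, shown with made-up control-point coordinates:

```python
import numpy as np

def fit_rigid_transform(intrinsic_pts, extrinsic_pts):
    """Least-squares R, t with extrinsic ~= R @ intrinsic + t (Kabsch/SVD)."""
    ci = intrinsic_pts.mean(axis=0)
    ce = extrinsic_pts.mean(axis=0)
    H = (intrinsic_pts - ci).T @ (extrinsic_pts - ce)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ce - R @ ci
    return R, t

# Control points of known extrinsic (real-world) location, and the same points
# as observed in the positioning system's intrinsic frame. The coordinates are
# illustrative: a 90-degree yaw plus an offset relates the two frames.
intrinsic = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
extrinsic = np.array([[5.0, 5.0, 0.0], [5.0, 6.0, 0.0], [4.0, 5.0, 0.0]])

R, t = fit_rigid_transform(intrinsic, extrinsic)
# The derived calibrated transformation maps intrinsic coordinates onto the
# known extrinsic control-point coordinates.
print(np.round(R @ intrinsic[1] + t, 6))
```

With exact correspondences, as here, the fit recovers the underlying rotation and translation exactly; with noisy sensor data it returns the least-squares best rigid transform.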
Regarding claim 14, Mitchell teaches the method further comprising: determining a first set of points in the extrinsic coordinate system by applying the at least one calibrated transformation to a set of points in a coordinate system for a first positioning system within the plurality of positioning systems (Mitchell ¶95, the positional tracking system may be calibrated to the extrinsic, real-world coordinate system by positioning the tool sequentially at two, three or more control points within the tracked volume that have known locations in the real-world coordinate system and determining the position of the tool in the intrinsic coordinate system at each control point, and a coordinate conversion engine for converting the coordinates of the setting-out tool in the intrinsic coordinate system, as determined by the tracking system, to corresponding coordinates in an extrinsic, real-world coordinate system based on a transformation for converting between the two coordinate systems); determining a second set of points in the extrinsic coordinate system determined by applying the at least one calibrated transformation and one of the set of transformations to a set of points in a coordinate system for a second positioning system within the plurality of positioning systems (Mitchell ¶95 and ¶291 teaching that eye-tracking devices generate display position data indicative of the position of the hard hat 600 relative to the user's head, and where a coordinate conversion engine for converting the coordinates of the setting-out tool in the intrinsic coordinate system, to corresponding coordinates in an extrinsic, real-world coordinate system based on a transformation for converting between the two coordinate systems); and fusing the two sets of points in the extrinsic co-ordinate system to determine a single set of points in the extrinsic co-ordinate system for the rendering of the building information model (Mitchell ¶¶292–293, headset tracking data is fused with display 
position data generated by the eye-tracking devices and display data representing the physical/optical properties of the augmented reality glasses to produce a virtual image of the BIM model, and by virtue of the transformation of coordinates between an intrinsic, tracked coordinate system of the positional tracking system and an extrinsic real-world coordinate system, the BIM model can be displayed to the worker in its proper context).

Regarding claim 17, where the claim only requires “at least two” of the listed items, Mitchell teaches wherein the plurality of positioning systems includes at least two selected from: a radio-frequency identifier (RFID) tracking system comprising at least one RFID sensor coupled to the headset; an inside-out positioning system comprising one or more signal-emitting beacon devices external to the headset and one or more receiving sensors coupled to the headset (Mitchell ¶¶64–65, inside-out positional tracking system with sensors provided on objects to be tracked within a tracked volume measuring incident signals emitted from electromagnetic radiation sources (beacon)); a global positioning system; a positioning system implemented using a wireless network and one or more network receivers coupled to the headset (Mitchell ¶62, tracking sensors as wireless sensors such as Wi-Fi on a headset); or a camera-based simultaneous location and mapping (SLAM) system.

Regarding claim 18, with deficiencies of Mitchell noted in square brackets [], Mitchell teaches a headset for use in construction, the headset comprising (Mitchell fig. 
12 reproduced below for convenience, ¶¶258–263):

[Image: Mitchell fig. 12]

an article of headwear (Mitchell ¶257, hard hat 600); sensor devices for a plurality of different positioning systems (Mitchell ¶260, sensors 602a–n, where ¶¶61–63 teach the headset tracking using a positional tracking system comprised of multiple sensors with respective sensor data used to determine the location and orientation of the headset, thus each sensor having their own respective position data gathering system (plurality of positioning systems)), each positioning system having a corresponding [different] coordinate system that further differs from an extrinsic coordinate system used by the building information model (Mitchell ¶63, intrinsic coordinate system used by the tracking system comprised of sensors, thus each sensor corresponding to the intrinsic coordinate system, where ¶¶218–221 and 331 teach the intrinsic coordinate system must be converted into coordinates in the extrinsic real-world coordinates system or vice versa, and that the building information model uses the real-world/extrinsic coordinate system (thus, the intrinsic coordinate system is different from the extrinsic system used by the building information model)), each positioning system determining a location and orientation of the headset over time within the corresponding coordinate system (Mitchell ¶¶62 and 92, location and orientation of the headset represented in headset tracking data based on sensor data received from sensors internal to the headset in the intrinsic coordinate system, where ¶¶246–248 teaches tracking the location and orientation continually (over time) to display in real-time the position); a head-mounted display for displaying an augmented reality image of the building information model (Mitchell ¶¶260 and 280, augmented reality glasses 700 which have displayed thereon a rendered BIM model); and an electronic control system comprising at least one 
processor configured to (Mitchell ¶266, storage device of the hard hat 600 containing computer-executable machine code that can be processed by the processor 608 for controlling the operation of the hard hat 600): [obtain a set of transformations that map between the coordinate systems of the plurality of positioning systems;] obtain at least one calibrated transformation that maps between a first one of the coordinate systems associated with a corresponding first one of the plurality of positioning systems and the extrinsic coordinate system used by the building information model (Mitchell ¶¶218–221, a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system); obtain a pose of the headset using [a second one of the plurality of positioning systems], the pose of the headset being defined within [a corresponding second one of the co-ordinate systems], the pose of the headset comprising a location and orientation of the headset (Mitchell ¶¶62, 92, 208 and 217, location and orientation of the headset represented in headset tracking data is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system, where ¶¶215–216 teaches the position being determined in three-dimensional coordinates in an intrinsic coordinate system); and use the set of transformations and the at least one calibrated transformation [to convert between the second one of the co-ordinate systems] used to define the obtained pose and the extrinsic coordinate system used by the building information model (Mitchell ¶¶245–247, position of the setting-out tool (headset) in the intrinsic coordinate system is translated into the extrinsic, real-world coordinate system using the mathematical transformation) [by
mapping the obtained pose between the second one of the coordinate systems, associated with the second one of the plurality of positioning systems, and the first one of the coordinate systems, associated with the first one of the plurality of positioning systems] and applying the at least one calibrated transformation (Mitchell ¶¶245–247, position of the setting-out tool (headset) in the intrinsic coordinate system is translated into the extrinsic, real-world coordinate system using the mathematical transformation) to render an augmented reality image of the building information model relative to the pose of the article of headwear on the head-mounted display (Mitchell ¶¶276–277, 280, BIM model data is input for display by augmented reality glasses of the headset).

Mitchell does not explicitly teach, but Chang teaches obtain a set of transformations that map between the co-ordinate systems of the plurality of positioning systems (Chang ¶82, pairwise transformation matrices between different pairs of sensors are computed (obtaining a set) based on 3D points estimated by different sensors in their respective coordinate systems).

Mitchell further does not explicitly teach, but Salgian teaches, different coordinate systems (plural) different from an extrinsic coordinate system used by the BIM (Salgian ¶¶31–32, a global localization system with its own coordinate system, and a local measurement system used by a set of second sensors, where both the global and local measurement systems are different from the 3D model/CAD Model/BIM coordinate system of a structure (building)), and a second one of the plurality of positioning systems with a corresponding second one of the co-ordinate systems (Salgian ¶¶31–32, local relative measurement in a coordinate system for a second sensor package).
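The claim 18 mapping above rests on chaining transformations: a pairwise transform between the coordinate systems of two positioning systems (the Chang-style set of transformations) composed with a calibrated transform into the extrinsic coordinate system used by the building information model (the Mitchell-style mathematical transformation). A minimal sketch of that composition using 4x4 homogeneous matrices follows; all rotations, translations, and poses below are hypothetical illustration values, not taken from the cited references:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation matrix for an angle theta (radians) about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical pairwise transform mapping coordinate system B (the second
# positioning system) into coordinate system A (the first positioning system).
T_A_from_B = make_transform(rot_z(np.pi / 2), [1.0, 0.0, 0.0])

# Hypothetical calibrated transform mapping system A into the extrinsic
# (real-world / building information model) coordinate system.
T_world_from_A = make_transform(rot_z(-np.pi / 2), [10.0, 20.0, 0.0])

# Headset pose expressed in system B: identity orientation at (2, 3, 1.5).
pose_in_B = make_transform(np.eye(3), [2.0, 3.0, 1.5])

# Map the pose B -> A, then apply the calibrated A -> world transform.
pose_in_world = T_world_from_A @ T_A_from_B @ pose_in_B

print(np.round(pose_in_world[:3, 3], 6))  # headset position in world coordinates
```

Because rigid transforms compose by matrix multiplication, mapping the pose from the second coordinate system into the first and then applying the calibrated transformation reduces to a single chained product.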

Mitchell still further does not explicitly teach, but Salgian teaches, to convert between the second one of the co-ordinate systems (Salgian ¶¶31–32, local measurement system measurements are placed in the global coordinate system, and then the corresponding location within 3D computer model of the building (BIM) is determined), and by mapping the obtained pose between the second one of the co-ordinate systems, associated with the second one of the plurality of positioning systems, and the first one of the co-ordinate systems, associated with the first one of the plurality of positioning systems (Salgian ¶31, first sensor package with global coordinate system communicating with second sensor package using local measurement system to align (convert between) a pose captured by the second sensor package (mapping the obtained pose) with the pose captured by the first sensor package).

Therefore, taking the teachings of Mitchell and Chang together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the pairwise transformation operations disclosed in Chang at least because doing so would allow for simultaneous, and thus more efficient, calibration of multiple sensors of different types. See Chang ¶¶5 and 9–10.

Further, taking the teachings of Mitchell and Salgian together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the augmented reality model and display processing disclosed in Mitchell with the multiple sensor systems with their own coordinate systems, and the conversion between these systems and a 3D building coordinate system disclosed in Salgian at least because doing so would allow for the ability to make high-precision and high-accuracy measurements of a physical location.
See Salgian ¶47.

Regarding claim 22, Mitchell teaches wherein the article of headwear comprises a hard hat (Mitchell ¶257, hard hat 600).

Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Mitchell in view of Chang in view of Salgian, and further in view of Webb et al., EP 2354893 A1 (herein “Webb”).

Regarding claim 6, with deficiencies of Mitchell noted in square brackets [], Mitchell teaches wherein the method further comprises: determining a first pose of the headset using the first positioning system (Mitchell ¶¶62, 92, 208 and 217, location and orientation of the headset represented in headset tracking data is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system); converting between the coordinate system for the first positioning system and the extrinsic coordinate system used by the building information model using the at least one calibrated transformation (Mitchell ¶¶245–247, position of the setting-out tool (headset) in the intrinsic coordinate system is translated into the extrinsic, real-world coordinate system using the mathematical transformation) and rendering an augmented reality image of the building information model within the head-mounted display relative to the first pose of the headset (Mitchell ¶¶276–277, 280, BIM model data is input for display by augmented reality glasses of the headset); [responsive to a determination that the] headset (Mitchell fig.
12, headset) [is not tracked by the first positioning system,] determining a second pose of the headset using a second positioning system within the plurality of positioning systems (Mitchell ¶¶62, 92, 208 and 217, location and orientation of the headset represented in headset tracking data is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system), the second positioning system being configured to track the headset using one or more camera devices at least coupled to the headset (Mitchell ¶62, headset tracking using sensors internal (coupled) to the headset such as an optical sensor (e.g. camera, a rear facing camera and a front facing camera)); and converting between the coordinate system for the second positioning system and the extrinsic coordinate system used by the building information model using the set of transformations and the at least one calibrated transformation (Mitchell ¶¶62 and 92, location and orientation of the headset represented in headset tracking data based on sensor data received from sensors internal to the headset in the intrinsic coordinate system, where ¶¶218–221 teach a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system) and rendering an augmented reality image of the building information model within the head-mounted display relative to the second pose of the headset (Mitchell ¶¶218–221, coordinates in the intrinsic coordinate system of the tracking system 100 are converted into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system, and where ¶290 teaches an alignment using the tracking data so that the rendered
virtual image of the BIM model is in the correct location). Mitchell as modified by Chang does not explicitly teach, but Webb teaches responsive to a determination that the [headset] is not tracked by the first positioning system (Webb ¶¶48–49, when hardware motion sensor is in an error state and a reset signal is issued for the hardware sensor, it is forced into a condition of reading a zero motion value (no longer tracking the portable data tracking apparatus), and a different sensor value from the video motion detector is used instead).

Therefore, taking the teachings of Mitchell as modified by Chang and Webb together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the headset processing of Mitchell to include responsiveness to a sensor no longer in a tracking state as disclosed in Webb at least because doing so would allow for the cheap and reliable use of accelerometers for motion tracking while providing a way to avoid the user experience problems that can occur when an accelerometer’s measurement drifts. See Webb ¶¶3–10.

Regarding claim 9, with deficiencies of Mitchell noted in square brackets [], Mitchell teaches the method further comprising: [determining that the] headset (Mitchell fig. 12, headset) [is no longer being tracked by a first positioning system within the plurality of positioning systems]; and [responsive to a determination that the] headset (Mitchell fig.
12, headset) [is no longer being tracked by the first positioning system,] rendering the augmented reality image of the building information model within the head-mounted display relative to a pose of the headset as determined using a second positioning system within the plurality of positioning systems (Mitchell ¶¶218–221, coordinates in the intrinsic coordinate system of the tracking system 100 are converted into coordinates in the extrinsic real-world coordinates system or vice versa, where ¶231 teaches the building information model uses the real-world/extrinsic coordinate system, and where ¶290 teaches an alignment using the tracking data so that the rendered virtual image of the BIM model is in the correct location, where ¶¶62, 92, 208 and 217 teach location and orientation of the headset represented in headset tracking data (pose) is determined from sensor data received from sensors internal to the headset in the intrinsic coordinate system).

Mitchell as modified by Chang does not explicitly teach, but Webb teaches determining that the [headset] is no longer being tracked by a first positioning system within the plurality of positioning systems, and responsive to a determination that the [headset] is no longer being tracked by the first positioning system (Webb ¶¶48–49, when hardware motion sensor is in an error state and a reset signal is issued for the hardware sensor (determining), the response is to force the hardware sensor into a condition of reading a zero motion value (no longer tracking the portable data tracking apparatus), and a different sensor value from the video motion detector is used instead).
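The Webb-based fallback logic cited for claims 6 and 9 (switching to a second positioning system when the first no longer tracks the headset) can be sketched as a simple priority scan over pose sources. The simplified `Pose` shape and the two stub systems below are hypothetical, chosen only to illustrate the control flow:

```python
from typing import Callable, Optional, Sequence, Tuple

# Simplified pose: x, y, heading (a real system would carry a full 6-DoF pose).
Pose = Tuple[float, float, float]

def select_pose(systems: Sequence[Callable[[], Optional[Pose]]]) -> Optional[Pose]:
    """Return the pose from the first positioning system that is still
    tracking; fall back to later systems when one reports no pose."""
    for get_pose in systems:
        pose = get_pose()
        if pose is not None:  # None stands in for "no longer being tracked"
            return pose
    return None

# Hypothetical stubs: the primary tracker has lost the headset, so the
# camera-based (second) positioning system's estimate is used instead.
primary_tracker = lambda: None
camera_slam = lambda: (4.2, 1.7, 0.35)

print(select_pose([primary_tracker, camera_slam]))
```

The renderer can then draw the building information model relative to whichever pose survives the scan, matching the claimed "responsive to a determination that the headset is not tracked" behavior.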
Therefore, taking the teachings of Mitchell as modified by Chang and Webb together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the headset processing of Mitchell to include responsiveness to a sensor no longer in a tracking state as disclosed in Webb at least because doing so would allow for the cheap and reliable use of accelerometers for motion tracking while providing a way to avoid the user experience problems that can occur when an accelerometer’s measurement drifts. See Webb ¶¶3–10.

Claims 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mitchell in view of Chang in view of Salgian, and further in view of Taylor et al., US 2019/0388781 A1 (herein “Taylor”).

Regarding claim 11, Mitchell as modified by Chang teaches the second positioning system (Mitchell ¶¶61–63, headset tracking using a positional tracking system comprised of multiple sensors with respective sensor data used to determine the location and orientation of the headset, thus each sensor having their own respective position data gathering system (plurality of positioning systems), thus a second sensor with a second positioning system), but does not teach, where Taylor does teach, that it comprises a simultaneous location and mapping (SLAM) system that receives image data from one or more camera devices (Taylor Abstract, system generating a SLAM map as part of a system that determines movement of feature points in spatial imaging system including cameras).

Therefore, taking the teachings of Mitchell as modified by Chang and Taylor together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the headset processing of Mitchell to include a SLAM system processing as disclosed in Taylor at least because doing so would allow for generating SLAM maps which improve navigation in an augmented reality system. See Taylor ¶2.

Regarding claim 20, Mitchell teaches wherein the sensor devices comprise: one or more position-tracking sensors mounted in relation to the article of headwear that are responsive to one or more electromagnetic signals emitted by a first positioning system within the plurality of positioning systems (Mitchell ¶¶33–35, 64–65, sensors sensing directional electromagnetic radiation emitted from corresponding beacons that is modulated in such a manner as to indicate the bearing or angular distance of the source, where the sensor detects or measures the properties of the incident signals, including the angular distance), the first positioning system comprising one or more tracking devices for implementing a tracked volume that are external to the headset within the construction site (Mitchell ¶¶61–62, headset tracking using sensors internal (coupled) to the headset such as proximity or location sensors (e.g. near field communication, GPS, Bluetooth, Wi-Fi), an optical sensor (e.g. camera, a rear facing camera and a front facing camera), an orientation sensor (e.g. gyroscope), an audio sensor (e.g. a microphone), and headset tracking using sensors external to the headset such as tracking sensors placed in corners of the venue/room (tracked volume)).

Mitchell as modified by Chang does not explicitly teach, but Taylor teaches one or more camera devices mounted in relation to the article of headwear to generate data for use by a second image-based positioning system within the plurality of positioning systems (Taylor ¶¶61–62, figs. 10–11, spatial imaging system including cameras mounted on multiple users’ headwear taking images of the other headwear to generate a SLAM map).
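Mitchell's beacon-based sensing cited above for claim 20 (¶¶33–35) modulates emitted signals to indicate the bearing of the source. One way such bearings could yield a sensor position is by intersecting the bearing lines from two or more beacons at known locations. The 2D least-squares sketch below uses a hypothetical beacon layout and sensor position, and is an illustration rather than Mitchell's actual method:

```python
import numpy as np

def position_from_bearings(beacons, bearings):
    """Least-squares intersection of bearing lines.

    Each beacon at known position p emits a signal whose modulation encodes
    the bearing theta of the line of sight toward the sensor. The sensor
    position x lies on that line, so n . x = n . p, where n is the normal
    to the bearing direction (cos theta, sin theta)."""
    A, b = [], []
    for (px, py), theta in zip(beacons, bearings):
        n = np.array([-np.sin(theta), np.cos(theta)])  # normal to the line
        A.append(n)
        b.append(n @ np.array([px, py]))
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

# Hypothetical layout: two beacons on one wall, sensor actually at (4, 3).
beacons = [(0.0, 0.0), (10.0, 0.0)]
bearings = [np.arctan2(3.0, 4.0 - 0.0), np.arctan2(3.0, 4.0 - 10.0)]

print(np.round(position_from_bearings(beacons, bearings), 6))
```

With more than two beacons the same least-squares formulation averages out measurement noise rather than solving an exact 2x2 system.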

Therefore, taking the teachings of Mitchell as modified by Chang and Taylor together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the headset processing of Mitchell to include a SLAM system processing as disclosed in Taylor at least because doing so would allow for generating SLAM maps which improve navigation in an augmented reality system. See Taylor ¶2.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Mitchell in view of Chang, and further in view of Braley et al., US 2022/0179056 A1 (herein “Braley”).

Regarding claim 15, Mitchell teaches the method further comprising: measuring a position of a plurality of defined points with each of the plurality of positioning systems; and comparing the measured positions to calibrate the set of transformations (Mitchell ¶¶218–219, in calibrating the positional tracking system to real-world coordinates, the positions of the control points are known (defined) in a real-world coordinate system, and the tracking system is calibrated to the extrinsic coordinate system by manually moving a calibration tool 250 comprising a single sensor 202, as shown in FIG. 1, to each control point 10a, 10b, 10c in turn, as illustrated in FIG. 4, and determining the locations of the control points 10a, 10b, 10c in the intrinsic coordinate system defined by the positional tracking system 100. Once the locations of the control points 10a, 10b, 10c are known in both the intrinsic and extrinsic real-world coordinates systems, a mathematical transformation can be derived for converting coordinates in the intrinsic coordinate system of the tracking system 100 into coordinates in the extrinsic real-world coordinates system or vice versa).
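The Mitchell calibration cited for claim 15 (¶¶218–221) derives a mathematical transformation once the control points are known in both the intrinsic and extrinsic coordinate systems. A standard way to derive such a rigid transform from matched point sets is the SVD-based (Kabsch) fit sketched below; the control-point values are hypothetical, and Mitchell does not specify this particular algorithm:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Estimate R, t with dst ~ R @ src + t from matched 3xN point sets,
    using the SVD-based (Kabsch) method."""
    src_c = src - src.mean(axis=1, keepdims=True)   # center both point sets
    dst_c = dst - dst.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(src_c @ dst_c.T)       # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T         # guard against reflections
    t = dst.mean(axis=1) - R @ src.mean(axis=1)
    return R, t

# Hypothetical control points as measured in the intrinsic (tracking) system.
intrinsic = np.array([[0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])

# Their known locations in the extrinsic real-world system: here the frames
# differ by a 90-degree rotation about z and a shift of (5, 2, 0).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
extrinsic = Rz @ intrinsic + np.array([[5.0], [2.0], [0.0]])

R, t = fit_rigid_transform(intrinsic, extrinsic)
print(np.round(R, 6), np.round(t, 6))
```

The recovered R and t form exactly the kind of intrinsic-to-extrinsic transformation the office action describes, and the inverse transform handles the "or vice versa" direction.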

Mitchell as modified by Chang does not explicitly teach, but Braley teaches, wherein comparing the measured positions comprises optimising a non-linear function representing a difference between positions of the one or more defined points as obtained from two or more coordinate systems of two or more different positioning systems (Braley ¶34, coordinate values provided from sensors (one positioning system) are compared against coordinate values from a model (different positioning system) to find the difference, in a non-linear optimization algorithm to determine a set of calibration parameters).

Therefore, taking the teachings of Mitchell as modified by Chang and Braley together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the headset processing of Mitchell to include non-linear optimization from different coordinate values as disclosed in Braley at least because doing so would allow for automatically and effectively determining updated calibration parameter values for an arbitrary number of sensors and thereafter computing an accurate mathematical modeling of the sensor behaviors which accounts for, or compensates for, any changes or displacements that have taken place since each of the sensors was last calibrated. See Braley ¶7.

Allowable Subject Matter

Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The closest prior art of record includes the combination of Mitchell and Chang as set forth above regarding claims 1, 5 and 6 from which claim 7 depends.
Together with all of the limitations recited in claims 1, 5, and 6, the additional limitations of claim 7 directed towards calibrated transformations and building information model rendering for two different construction sites are not disclosed in the cited art of record, in any combination obvious to a PHOSITA. Accordingly, claim 7, including all of the limitations from intervening claims 1, 5 and 6, is patentably distinguishable over the cited art of record.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH whose telephone number is (571)272-5908. The examiner can normally be reached Monday-Thursday, 09:00-17:00, Friday 09:00-13:00, EDT/EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MICHELLE M. KOETH
Primary Examiner
Art Unit 2671

/MICHELLE M KOETH/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Aug 01, 2023
Application Filed
Oct 09, 2025
Non-Final Rejection — §103
Jan 09, 2026
Response Filed
Feb 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586221
METHOD AND APPARATUS FOR ESTIMATING DEPTH INFORMATION OF IMAGES
2y 5m to grant Granted Mar 24, 2026
Patent 12579651
IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY
2y 5m to grant Granted Mar 17, 2026
Patent 12567241
Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program
2y 5m to grant Granted Mar 03, 2026
Patent 12567177
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING
2y 5m to grant Granted Mar 03, 2026
Patent 12566493
METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA
2y 5m to grant Granted Mar 03, 2026
Based on the 5 most recent grants by this examiner.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
94%
With Interview (+16.7%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 429 resolved cases by this examiner. Grant probability derived from career allow rate.
