Prosecution Insights
Last updated: April 19, 2026
Application No. 18/712,390

MATCHING A BUILDING INFORMATION MODEL

Non-Final OA (§103)
Filed: May 22, 2024
Examiner: CHIN, MICHELLE
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: XYZ Reality Limited
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 85% (540 granted / 634 resolved; above average, +23.2% vs TC avg)
Interview Lift: +11.5% among resolved cases with an interview (moderate, ~+12%)
Avg Prosecution: 2y 4m; 29 applications currently pending
Total Applications (career): 663, across all art units
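
The headline figures above are internally consistent: 540/634 gives the 85% career allow rate, and adding the 11.5-point interview lift reproduces the 97% "With Interview" prediction. A minimal sketch of that arithmetic; the combination rule is our assumption, only the inputs come from this report:

```python
# Reproduce the headline figures from the raw counts shown above. Deriving
# the 97% "With Interview" number as career rate plus the stated lift is an
# assumption; only the input numbers come from this report.
granted, resolved = 540, 634

allow_rate = granted / resolved        # 0.852 -> the 85% headline
implied_tc_avg = allow_rate - 0.232    # "+23.2% vs TC avg" -> ~62% baseline
with_interview = allow_rate + 0.115    # "+11.5%" lift -> 96.7%, shown as 97%

print(f"{allow_rate:.1%}, {implied_tc_avg:.1%}, {with_interview:.1%}")
# 85.2%, 62.0%, 96.7%
```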

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 70.6% (+30.6% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)

Tech Center averages are estimates; based on career data from 634 resolved cases.
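
One detail worth noting: every "vs TC avg" delta backs out to the same 40.0% baseline, which suggests the tool benchmarks each statute against a single pooled Tech Center estimate rather than per-statute averages. A quick check (the figures come from the table above; what the rates measure is not stated in the report):

```python
# Back out the Tech Center baseline implied by each delta in the table above.
examiner_rate = {"§101": 8.8, "§103": 70.6, "§102": 5.1, "§112": 1.6}
delta_vs_tc = {"§101": -31.2, "§103": +30.6, "§102": -34.9, "§112": -38.4}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
# {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```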

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

3. The information disclosure statement (IDS) was submitted on 05/22/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

7. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Salgian et al. (US 2019/0347783 A1) in view of Johnson et al. (US 2015/0199847 A1).

8. With reference to claim 1, Salgian teaches A headset for use in construction at a construction site, the headset comprising: an article of headwear; (“Global localization performed by the GLS 102 works by acquiring and tracking image features using a first sensor package 104 and matching visual landmarks, tags, or location coordinates to a pre-built map of the area. Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. …the user starts at a designated surveyed location(s) with a known visual pattern. The system is able to initialize its global position and heading based on this process. Then as the user moves around the site the system can localize his position to approximately 5-15 cm, or about 10 cm, accuracy. In some embodiments, the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc.
In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0026-0027]) Salgian also teaches a first set of sensor devices for a positioning system, the first set of sensor devices operating to track the headset at the construction site; (“Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. … the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0026-0027]) Salgian further teaches a second set of sensors for point measurements within the construction site using received electromagnetic signals; (“Local relative measurements performed by the LMS 114 are accomplished using a second sensor package 116 that yields higher resolution by working with a narrower field of view. In some embodiments, the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4.” [0032] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. 
The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements.” [0051]) Salgian teaches a head-mounted display for displaying a virtual image of a building information model (BIM) representing the construction site; (“Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. … the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0026-0027] “the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4. The handheld tablet 402 may also include handles 414 and a communication cable 412 (i.e., such as a USB cable) for transferring data. Local reconstruction of a scene of a structure can be placed in global coordinates with the accuracy of the global localization (i.e., about 5-15 cm level). If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. 
Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0032] “performing visual inspections by displaying 3D computer models of the structure (e.g., a BIM) overlaid on the video/display image (122, 124); performing local inspections to determine structural information such as, but not limited to, number of structural support elements, diameter/thickness of the support elements, pitch between each support element, and tensile markings within a 0.1 meter-10 meter section with or without a 3D computer models of the structure to check against (126, 128); performing worksite localization to localize the user within 5-15 cm across the building construction site or within a large structure such as a ship, for example, relative to markers laid out throughout the site (106, 108);” [0043]) Salgian also teaches an electronic control system comprising at least one processor (“the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “computer system 900 includes one or more processors 910a-910n coupled to a system memory 920 via an input/output (I/O) interface 930.” [0057]) Salgian further teaches track a pose of the headset within the construction site using the positioning system based on data from the first set of sensor devices, the pose of the headset being used to determine the virtual image to be displayed on the head-mounted display, the building information model being aligned with the pose using a calibrated transformation; (“Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. … the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104. …the GLS 102 may further use functionality from the 3D model alignment system 254 which uses the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected and extract a local area of the 3D computer model. 
The 3D model alignment system 254 will then align observations and/or information obtained from the first sensor package 104 to the local area of the model 3D computer model of the structure extracted.” [0026-0028] “the first sensor package 104 including helmet mounted cameras, sensors, and AR display may handshake with the second sensor package 116 including a tablet to align a pose captured by the second sensor package with the pose captured by the first sensor package. For example, the pose captured by the first or second sensor packages may be a six (6) degrees of freedom (6DOF) pose. This is achieved by sending a number of salient features (image feature descriptors and the corresponding 3D points) from the first sensor package 104 to the second sensor package 116 (left diagram in FIG. 5). The tablet sub-system 116 performs a 3D-2D matching (right diagram in FIG. 5) based on the features received (506 and 510) and the matched image features in the tablet image (508) to compute the 6DOF pose transformation (rotation and translation) between the tablet camera and the helmet camera. This transformation is then used to align the pose of the second sensor package with the pose of the first sensor package. This handshake procedure is initiated by the user (e.g., by pressing a button on the tablet or associated with the first sensor package) before recording a sequence for local inspection, to ensure that the second sensor poses are aligned to the global reference frame. … the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4. The handheld tablet 402 may also include handles 414 and a communication cable 412 (i.e., such as a USB cable) for transferring data. Local reconstruction of a scene of a structure can be placed in global coordinates with the accuracy of the global localization (i.e., about 5-15 cm level). If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0031-0032]) Salgian teaches obtain point measurements of locations within the construction site using the second set of sensors, the point measurements comprising points within a measurement coordinate system defined by the second set of sensors, the measurement coordinate system being mapped to a positioning system coordinate system for the pose of the headset using a defined further transformation; (“those measurements must be placed in the global coordinate system. To facilitate this transfer of information between the GLS 102 and the LMS 114, In some embodiments, the global localization system (GLS) 102 is communicatively coupled to the local measurement system (LMS) 114 using wired or wireless communication protocols, as depicted by 115 in FIG. 1. In some embodiments, a handshaking process is performed between elements of the GLS 102 and the LMS 114 to align information between the two systems as described below with respect to in FIG. 5 in more detail. Specifically, as shown in FIG. 
5, in some embodiments, the first sensor package 104 including helmet mounted cameras, sensors, and AR display may handshake with the second sensor package 116 including a tablet to align a pose captured by the second sensor package with the pose captured by the first sensor package. For example, the pose captured by the first or second sensor packages may be a six (6) degrees of freedom (6DOF) pose. This is achieved by sending a number of salient features (image feature descriptors and the corresponding 3D points) from the first sensor package 104 to the second sensor package 116 (left diagram in FIG. 5). The tablet sub-system 116 performs a 3D-2D matching (right diagram in FIG. 5) based on the features received (506 and 510) and the matched image features in the tablet image (508) to compute the 6DOF pose transformation (rotation and translation) between the tablet camera and the helmet camera. This transformation is then used to align the pose of the second sensor package with the pose of the first sensor package. This handshake procedure is initiated by the user (e.g., by pressing a button on the tablet or associated with the first sensor package) before recording a sequence for local inspection, to ensure that the second sensor poses are aligned to the global reference frame. Local relative measurements performed by the LMS 114 are accomplished using a second sensor package 116 that yields higher resolution by working with a narrower field of view. In some embodiments, the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4.” [0031-0032] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. 
one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements.” [0051]) Salgian also teaches align the point measurements with the building information model using the calibrated transformation such that the point measurements and building information model reside within a common coordinate system; (“the GLS 102 may further use functionality from the 3D model alignment system 254 which uses the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected and extract a local area of the 3D computer model. The 3D model alignment system 254 will then align observations and/or information obtained from the first sensor package 104 to the local area of the model 3D computer model of the structure extracted.” [0028] “the first sensor package 104 including helmet mounted cameras, sensors, and AR display may handshake with the second sensor package 116 including a tablet to align a pose captured by the second sensor package with the pose captured by the first sensor package. For example, the pose captured by the first or second sensor packages may be a six (6) degrees of freedom (6DOF) pose. This is achieved by sending a number of salient features (image feature descriptors and the corresponding 3D points) from the first sensor package 104 to the second sensor package 116 (left diagram in FIG. 5). The tablet sub-system 116 performs a 3D-2D matching (right diagram in FIG. 5) based on the features received (506 and 510) and the matched image features in the tablet image (508) to compute the 6DOF pose transformation (rotation and translation) between the tablet camera and the helmet camera. This transformation is then used to align the pose of the second sensor package with the pose of the first sensor package. This handshake procedure is initiated by the user (e.g., by pressing a button on the tablet or associated with the first sensor package) before recording a sequence for local inspection, to ensure that the second sensor poses are aligned to the global reference frame. … the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4. The handheld tablet 402 may also include handles 414 and a communication cable 412 (i.e., such as a USB cable) for transferring data. Local reconstruction of a scene of a structure can be placed in global coordinates with the accuracy of the global localization (i.e., about 5-15 cm level). If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0031-0032]) Salgian further teaches compare the point measurements with the building information model to determine whether the construction site matches the building information model, (“embodiments consistent with the present disclosure generally performs computer aided inspection by localizing user to within a local area (e.g. 
within 10 cm), using the localization to index to a corresponding position in a 3D computer model and extract the relevant parts of the model (for instance a CAD 3D representation such as a BIM model), match observations from sensors to model, and finally make fine level measurements (at mm scale) and comparisons to the 3D model.” [0023] “the second sensor package 116 is configured to obtain fine level measurements (mm level measurements) and information about the structure, and the model recognition system 256 compare is configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model of the structure. In some embodiments, the model recognition system 256 is further configured to generate a compliance report including discrepancies determined between the measurements and information obtained about the structure from the second sensor package and the 3D computer model of the structure.” [0033]) Salgian teaches the point measurements are obtained iteratively and dynamically as the headset moves around the construction site. ((“Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. … the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0026-0027] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. 
As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements.” [0051]) PNG media_image1.png 369 540 media_image1.png Greyscale Salgian does not explicitly teach the second set of sensors located in the headset. This is what Johnson teaches (“The head mountable display system 200 may also include a headset pose sensor system 207 used to determine the orientation and position or pose of the head of the operator. For example, the headset pose sensor system 207 may include a plurality of headset pose sensors 208 that generate signals that the controller 21 may use to determine the pose of the operator's head. … A mapping system 210 may be positioned on head mountable display device 201 to scan the area in proximity to the operator (i.e., within the operator cab 15) to determine the extent to which the operator's view is obstructed and to assist in positioning images generated by the head mountable display system 200. The mapping system 210 may also be combined with the pose sensing system 24 of machine 10 to operate as an alternate headset pose sensor system 207. The mapping system 210 may include one or more mapping sensors 211 that may scan the area adjacent the operator to gather information defining the interior of the operator cab.” [0037-0038]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Johnson into Salgian, in order to provide a hand-free solution and increase the visibility around the machine. PNG media_image2.png 577 444 media_image2.png Greyscale 9. With reference to claim 2, Salgian teaches the second set of sensors comprise a laser device to measure a point from the headset. (“the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4.” [0032] “a 3D point cloud representation of the structure is generated using fine level measurements and information about the structure obtained from high-resolution sensors configured to obtain mm level measurements. The method proceeds to 708 where objects of interest and detected in the 3d point cloud representation of the structure, and at 710, measurements and information of said objects of interest are obtained using the high-resolution sensors. At 712, the 3d point cloud representation of the structure is aligned to the 3D computer model received.” [0046]) 10. 
With reference to claim 3, Salgian teaches the at least one processor is configured to: obtain the further transformation; use the further transformation to map between the measurement coordinate system for the point measurements and the positioning system coordinate system to track the headset; and use the calibrated transformation to map between the positioning system coordinate system and a coordinate system for the building information model, whereby the points and the building information model are mapped to the common coordinate system for comparison. (“embodiments consistent with the present disclosure generally performs computer aided inspection by localizing user to within a local area (e.g. within 10 cm), using the localization to index to a corresponding position in a 3D computer model and extract the relevant parts of the model (for instance a CAD 3D representation such as a BIM model), match observations from sensors to model, and finally make fine level measurements (at mm scale) and comparisons to the 3D model.” [0023] “the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “the first sensor package 104 including helmet mounted cameras, sensors, and AR display may handshake with the second sensor package 116 including a tablet to align a pose captured by the second sensor package with the pose captured by the first sensor package. For example, the pose captured by the first or second sensor packages may be a six (6) degrees of freedom (6DOF) pose. This is achieved by sending a number of salient features (image feature descriptors and the corresponding 3D points) from the first sensor package 104 to the second sensor package 116 (left diagram in FIG. 5). The tablet sub-system 116 performs a 3D-2D matching (right diagram in FIG. 5) based on the features received (506 and 510) and the matched image features in the tablet image (508) to compute the 6DOF pose transformation (rotation and translation) between the tablet camera and the helmet camera. This transformation is then used to align the pose of the second sensor package with the pose of the first sensor package. This handshake procedure is initiated by the user (e.g., by pressing a button on the tablet or associated with the first sensor package) before recording a sequence for local inspection, to ensure that the second sensor poses are aligned to the global reference frame. … the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4. 
The handheld tablet 402 may also include handles 414 and a communication cable 412 (i.e., such as a USB cable) for transferring data. Local reconstruction of a scene of a structure can be placed in global coordinates with the accuracy of the global localization (i.e., about 5-15 cm level). If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses. … the second sensor package 116 is configured to obtain fine level measurements (mm level measurements) and information about the structure, and the model recognition system 256 compare is configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model of the structure. In some embodiments, the model recognition system 256 is further configured to generate a compliance report including discrepancies determined between the measurements and information obtained about the structure from the second sensor package and the 3D computer model of the structure.” [0031-0033] “computer system 900 includes one or more processors 910a-910n coupled to a system memory 920 via an input/output (I/O) interface 930.” [0057]) 11. With reference to claim 4, Salgian teaches the at least one processor is configured to: fit a surface or mesh to a plurality of points within the point measurements; and compare the surface or mesh to at least a portion of the building information model. (“embodiments consistent with the present disclosure generally performs computer aided inspection by localizing user to within a local area (e.g. within 10 cm), using the localization to index to a corresponding position in a 3D computer model and extract the relevant parts of the model (for instance a CAD 3D representation such as a BIM model), match observations from sensors to model, and finally make fine level measurements (at mm scale) and comparisons to the 3D model.” [0023] “the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. 
The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements. … the system would globally localize the user in the ship using the GLS 102 by acquiring and tracking image features using a first sensor package 104 and matching visual landmarks, tags, or location coordinates to a pre-built map of the ship. Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a ship to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. The system would then index in the appropriate portion of a model of the ship based on the user's location, and assist the user in repair, maintenance by overlaying instructions and providing audio cues on what to do next. For example, once the sensor package is localized in the CAD model reference frame, any component from the CAD model can be presented as an Augmented Reality overlay in the display (tablet or optically see through HMD). This functionality enables quick visual inspection of constructed elements, e.g. verifying that the location of air ducts, pipes, beams, etc. matches the model/plan, as well as visualizing the location of elements not yet constructed.” [0051-0052] “computer system 900 includes one or more processors 910a-910n coupled to a system memory 920 via an input/output (I/O) interface 930.” [0057]) 12. With reference to claim 5, Salgian teaches the surface or mesh is fitted to represent one or more of a wall, a ceiling, or a door. (“Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a worksite (e.g., a construction site, a ship or ship building yard, railroad tracks, etc.) to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. The GLS 102 is then able to use the tracking information localize the user to within that same level of precision in a model coordinate system associated with the 3D computer model of the structure/site being inspected at 108 (i.e., index into the model at a corresponding location).” [0026] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. 
Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements. … the system would globally localize the user in the ship using the GLS 102 by acquiring and tracking image features using a first sensor package 104 and matching visual landmarks, tags, or location coordinates to a pre-built map of the ship. Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a ship to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. The system would then index in the appropriate portion of a model of the ship based on the user's location, and assist the user in repair, maintenance by overlaying instructions and providing audio cues on what to do next. For example, once the sensor package is localized in the CAD model reference frame, any component from the CAD model can be presented as an Augmented Reality overlay in the display (tablet or optically see through HMD). This functionality enables quick visual inspection of constructed elements, e.g. verifying that the location of air ducts, pipes, beams, etc. matches the model/plan, as well as visualizing the location of elements not yet constructed.” [0051-0052]) 13. With reference to claim 6, Salgian teaches the at least one processor is configured to: indicate to a user via the head-mounted display a match between the surface or mesh and the portion of the building information model. (“embodiments consistent with the present disclosure generally performs computer aided inspection by localizing user to within a local area (e.g. within 10 cm), using the localization to index to a corresponding position in a 3D computer model and extract the relevant parts of the model (for instance a CAD 3D representation such as a BIM model), match observations from sensors to model, and finally make fine level measurements (at mm scale) and comparisons to the 3D model.” [0023] “the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. 
In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0032] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements. … the system would globally localize the user in the ship using the GLS 102 by acquiring and tracking image features using a first sensor package 104 and matching visual landmarks, tags, or location coordinates to a pre-built map of the ship. Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a ship to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. The system would then index in the appropriate portion of a model of the ship based on the user's location, and assist the user in repair, maintenance by overlaying instructions and providing audio cues on what to do next. For example, once the sensor package is localized in the CAD model reference frame, any component from the CAD model can be presented as an Augmented Reality overlay in the display (tablet or optically see through HMD). This functionality enables quick visual inspection of constructed elements, e.g. 
verifying that the location of air ducts, pipes, beams, etc. matches the model/plan, as well as visualizing the location of elements not yet constructed.” [0051-0052] “computer system 900 includes one or more processors 910a-910n coupled to a system memory 920 via an input/output (I/O) interface 930.” [0057]) 14. With reference to claim 7, Salgian teaches the at least one processor is configured to: receive an input from the user indicating an approval of an indicated match; and responsive to the input, fix the surface or mesh as part of a set of calibration references for use in deriving the calibrated transformation. (“the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0032] “The method 700 starts at 702 and proceeds to 704 where a 3D computer model of a structure is received from a first sensor package. At 706, a 3D point cloud representation of the structure is generated using fine level measurements and information about the structure obtained from high-resolution sensors configured to obtain mm level measurements. The method proceeds to 708 where objects of interest and detected in the 3d point cloud representation of the structure, and at 710, measurements and information of said objects of interest are obtained using the high-resolution sensors. At 712, the 3d point cloud representation of the structure is aligned to the 3D computer model received. The method proceeds to 714 where discrepancies are detected between the objects of interest and the 3d computer model received as described above with respect to 612 of FIG. 6. In some embodiments, detecting the discrepancies further includes generating a compliance report including the discrepancies determined. …The first capability enables the users to walk around a large worksite and locate themselves within the site at an accuracy of about 10 centimeters and overlay AR icons on the wearable display. Doing localization to 10 cm precision, will also enable the system to automatically match the model to the high-precision tablet video without user intervention. The second capability enables the users to make high-precision and high-accuracy measurements (millimeter level).” [0046-0047] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. 
Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements. … the system would globally localize the user in the ship using the GLS 102 by acquiring and tracking image features using a first sensor package 104 and matching visual landmarks, tags, or location coordinates to a pre-built map of the ship. Using information acquired by the first sensor package 104, the GLS 102 is able to track a user's location across a ship to within 5-15 cm, or about 10 cm, in a global coordinate systems at 106 in FIG. 1. The system would then index in the appropriate portion of a model of the ship based on the user's location, and assist the user in repair, maintenance by overlaying instructions and providing audio cues on what to do next. For example, once the sensor package is localized in the CAD model reference frame, any component from the CAD model can be presented as an Augmented Reality overlay in the display (tablet or optically see through HMD). This functionality enables quick visual inspection of constructed elements, e.g. verifying that the location of air ducts, pipes, beams, etc. matches the model/plan, as well as visualizing the location of elements not yet constructed.” [0051-0052] “computer system 900 includes one or more processors 910a-910n coupled to a system memory 920 via an input/output (I/O) interface 930. Computer system 900 further includes a network interface 940 coupled to I/O interface 930, and one or more input/output devices 950, such as cursor control device 960, keyboard 970, and display(s) 980. In various embodiments, any of the components may be utilized by the system to receive user input described above.” [0057]) 15. With reference to claim 8, Salgian teaches identify a plurality of points within the positioning system coordinate system that are within the fixed surface or mesh; determine an estimated location for the fixed surface or mesh; compare the estimated location for the fixed surface or mesh with a corresponding surface or mesh within the building information model; and use the result of the comparison to update or initialise the calibrated transformation. 
(“the first sensor package 104 used for global localization is a wearable sensor package with one or more wide field of view cameras, IMUs, barometer, altimeter, magnetometer, GPS, etc. In some embodiments, global localization (also referred to as worksite localization) is accomplished by information collected from a helmet worn compact sensor package and a belt-worn processor as shown in FIG. 3. Specifically, FIG. 3 shows a user wearing elements of the first sensor package 104 used for global localization, including a helmet mounted stereo camera and IMU sensor 302, a Helmet Mounted Display (HMD) augmented reality (AR) goggles/glasses 304, a belt mounted battery pack 306, and a processor 308 which links all the other elements of the first sensor package 104.” [0027] “The tablet sub-system 116 performs a 3D-2D matching (right diagram in FIG. 5) based on the features received (506 and 510) and the matched image features in the tablet image (508) to compute the 6DOF pose transformation (rotation and translation) between the tablet camera and the helmet camera. This transformation is then used to align the pose of the second sensor package with the pose of the first sensor package. This handshake procedure is initiated by the user (e.g., by pressing a button on the tablet or associated with the first sensor package) before recording a sequence for local inspection, to ensure that the second sensor poses are aligned to the global reference frame. … If a blueprint (e.g. CAD model of a ship, BIM of a building) is available, accurate localization is achieved by aligning the local reconstruction to the blueprint. Inspection results and comparison to model can be shown on Augmented Reality (AR) glasses.” [0031-0032] “The 3D recovery module 804 generates a dense 3D point cloud from a sequence of images with multiple views of the same area on the ground obtained by the sensor package 116. The 3D recovery module 804 first runs Visual Navigation on the input video sequence to obtain initial pose estimates for each image. Based on these poses a subset of images (key-frames) is selected for 3D recovery. Next, feature tracks are generated across multiple frames and then Bundle Adjustment is run to refine the camera poses, and 3D point locations corresponding to each feature track. The Bundle Adjustment step uses the fact that the relative motion between the left and right camera is constant over time and it is known from calibration to fix the scale of the 3D reconstruction. Finally, the 3D point clouds from each stereo pair are aggregated to generate a dense point cloud for the inspected area. The 3D model 214 is provided as input to the Visualization and Measurement tool 806. The first step is the automatic detection of the rails based on their known 3D profile. Next, several measurements are performed to determine regions that are not compliant and included in a compliance report 810. As an example, for gage (distance between tracks) measurements, the following steps are performed: align the local point cloud so that the main track direction is aligned with the Y axis; divide the rail points in chunks along the track (e.g. one foot long) and fit a plane through the classified rail points; use the point distribution along the X axis to fit the local tangent to each rail and measure distance between the center point of each segment; and repeat for every section along the track to generate a list of measurements. 
16. With reference to claim 9, Salgian teaches the at least one processor is configured to: receive an input from the user indicating an approval of an indicated mismatch; and responsive to the input, update the portion of the building information model based on a measured position of the surface or mesh. (“FIG. 7 depicts a flow diagram of a computer aided inspection method 700 for inspection, error analysis and comparison of structures in accordance with a general embodiment of the present principles. The method 700 starts at 702 and proceeds to 704 where a 3D computer model of a structure is received from a first sensor package. At 706, a 3D point cloud representation of the structure is generated using fine level measurements and information about the structure obtained from high-resolution sensors configured to obtain mm level measurements. The method proceeds to 708 where objects of interest and detected in the 3d point cloud representation of the structure, and at 710, measurements and information of said objects of interest are obtained using the high-resolution sensors. At 712, the 3d point cloud representation of the structure is aligned to the 3D computer model received. The method proceeds to 714 where discrepancies are detected between the objects of interest and the 3d computer model received as described above with respect to 612 of FIG. 6. In some embodiments, detecting the discrepancies further includes generating a compliance report including the discrepancies determined. … the systems described herein integrate two key capabilities, a GLS 102 to perform global localization and alignment, and an LMS 114 to perform fine measurements and error detection. The first capability enables the users to walk around a large worksite and locate themselves within the site at an accuracy of about 10 centimeters and overlay AR icons on the wearable display. Doing localization to 10 cm precision, will also enable the system to automatically match the model to the high-precision tablet video without user intervention. The second capability enables the users to make high-precision and high-accuracy measurements (millimeter level).” [0046-0047]; the [0027], [0032], [0051-0052], and [0057] passages also relied on are reproduced in paragraphs 8 and 15 above.)
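Steps 712-714 of the quoted method amount to aligning a measured point cloud to a model and flagging out-of-tolerance deviations. Below is a minimal sketch of the flagging step, assuming the clouds are already aligned and using a nearest-neighbour query; the 5 mm tolerance and the report fields are assumed values, not from the reference.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_discrepancies(measured, model, tol=0.005):
    """Flag measured points that deviate from the (already aligned)
    model by more than a tolerance, in the spirit of steps 712-714.

    measured: (N, 3) point cloud from the fine-level sensor package
    model: (M, 3) points sampled from the 3D computer model
    tol: allowed deviation in metres (5 mm here, an assumed value)
    """
    measured = np.asarray(measured, dtype=float)
    model = np.asarray(model, dtype=float)
    dists, _ = cKDTree(model).query(measured)  # distance to nearest model point
    mask = dists > tol
    # A compliance-report-style summary of the out-of-tolerance points.
    return {
        "n_out_of_tolerance": int(mask.sum()),
        "worst_deviation_m": float(dists.max()),
        "offending_points": measured[mask],
    }
```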
17. With reference to claim 10, Salgian teaches the second set of sensors comprise one or more wide-angled camera devices. (“the second sensor package 116 used for local fine level relative measurements at a mm level of precision includes a handheld device, such as a tablet 402, that may include one or more cameras, a trinocular camera 406a-c, a high-resolution range sensor, IMU 410, GPS, visualization capabilities, illuminator for lighting 408, etc., as shown in FIG. 4.” [0032] “data collection (802) by the fine level sensor package 116 for this use case may consist of a sensor head and 12-inch tablet PC that runs the data collection application and provides the user interface through its touch screen display. The sensors will be powered from a battery that can be carried in a backpack or on a belt. The sensor head has a stereo pair of high-definition cameras (e.g., 1920×1200 pixels) with a 25-cm baseline and 50 degrees horizontal Field of View lenses as well as a GPS/IMU unit.” [0050])

18. With reference to claim 11, Salgian teaches the second set of sensors is configured to capture data over a 360-degree field of view. (“the pose captured by the first or second sensor packages may be a six (6) degrees of freedom (6DOF) pose. This is achieved by sending a number of salient features (image feature descriptors and the corresponding 3D points) from the first sensor package 104 to the second sensor package 116 (left diagram in FIG. 5).” [0031]; the remainder of the [0031-0032] passage is reproduced in paragraphs 15 and 17 above.)
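The [0050] hardware figures are enough to sanity-check the “mm level” claim. For a stereo pair, depth follows Z = fB/d, and depth uncertainty grows roughly as ΔZ ≈ Z²Δd/(fB). A quick back-of-the-envelope using the quoted 1920-pixel width, 50-degree field of view, and 25 cm baseline; the 0.1-pixel matching accuracy is an assumption.

```python
import math

width_px = 1920          # image width, per the quoted [0050]
hfov_deg = 50.0          # horizontal field of view, per [0050]
baseline_m = 0.25        # stereo baseline, per [0050]
match_acc_px = 0.1       # assumed sub-pixel disparity matching accuracy

# Focal length in pixels from the pinhole model: f = (w/2) / tan(HFOV/2)
f_px = (width_px / 2) / math.tan(math.radians(hfov_deg / 2))

for z in (1.0, 2.0, 5.0):
    # Depth uncertainty per the stereo error model: dZ = Z^2 * dd / (f * B)
    dz = z**2 * match_acc_px / (f_px * baseline_m)
    print(f"range {z:.0f} m: depth uncertainty ~ {dz * 1000:.1f} mm")
```

On these numbers the uncertainty is below a millimetre at 1-2 m and near 5 mm at 5 m, so millimetre-level measurement is plausible at close range and degrades quadratically with distance.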
19. With reference to claim 12, Salgian does not explicitly teach the second set of sensors is mounted at the top of the article of headwear. This is what Johnson teaches (“The head mountable display system 200 may also include a headset pose sensor system 207 used to determine the orientation and position or pose of the head of the operator.
For example, the headset pose sensor system 207 may include a plurality of headset pose sensors 208 that generate signals that the controller 21 may use to determine the pose of the operator's head. … A mapping system 210 may be positioned on head mountable display device 201 to scan the area in proximity to the operator (i.e., within the operator cab 15) to determine the extent to which the operator's view is obstructed and to assist in positioning images generated by the head mountable display system 200. The mapping system 210 may also be combined with the pose sensing system 24 of machine 10 to operate as an alternate headset pose sensor system 207. The mapping system 210 may include one or more mapping sensors 211 that may scan the area adjacent the operator to gather information defining the interior of the operator cab.” [0037-0038]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Johnson into Salgian, in order to provide a hands-free solution and increase visibility around the machine.

20. Claim 13 is similar in scope to claim 1, and thus is rejected under similar rationale.

21. With reference to claim 14, Salgian teaches the second set of sensors form part of a simultaneous mapping and localisation (SLAM) system. (the [0032] passage describing the handheld second sensor package is reproduced in paragraph 17 above, and the [0051] passage describing Visual Navigation, key-frame selection, and Bundle Adjustment is reproduced in paragraphs 8 and 15 above.)

22. With reference to claim 15, Salgian teaches the device comprises a sensor to measure a depth of one or more locations. (the [0032] passage, including the “high-resolution range sensor”, is reproduced in paragraph 17 above, and the [0051] passage is reproduced in paragraphs 8 and 15 above.)
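Paragraph 21 leans on the reconstruction pipeline quoted in paragraphs 8 and 15: Visual Navigation supplies initial poses, key-frames are selected, and Bundle Adjustment jointly refines camera poses and 3D points by minimising reprojection error, with the calibrated stereo baseline fixing scale. A toy refinement in that spirit, using SciPy's least-squares solver on synthetic data; the problem size, focal length, and all names are illustrative, and the baseline constraint is omitted from this sketch.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 2000.0  # focal length in pixels (assumed known from calibration)

def project(points3d, rvec, tvec):
    """Pinhole projection of world points into one camera."""
    cam = Rotation.from_rotvec(rvec).apply(points3d) + tvec
    return F * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_cams, pts_obs):
    """Stacked reprojection errors over all cameras.

    params packs n_cams poses (rotation vector + translation, 6 numbers
    each) followed by the 3D points (3 numbers each). pts_obs[c] holds
    the observed pixel coordinates of every point in camera c.
    """
    poses = params[: 6 * n_cams].reshape(n_cams, 6)
    pts3d = params[6 * n_cams:].reshape(-1, 3)
    res = []
    for c in range(n_cams):
        res.append(project(pts3d, poses[c, :3], poses[c, 3:]) - pts_obs[c])
    return np.concatenate(res).ravel()

# Synthetic ground truth: 2 camera poses observing 8 points.
rng = np.random.default_rng(0)
true_pts = rng.uniform(-1, 1, (8, 3)) + [0, 0, 5]
true_poses = np.array([[0, 0, 0, 0, 0, 0], [0, 0.1, 0, -0.5, 0, 0]])
obs = [project(true_pts, p[:3], p[3:]) for p in true_poses]

# Perturbed initial guess, playing the role of Visual Navigation output.
x0 = np.concatenate([true_poses.ravel(), true_pts.ravel()])
x0 = x0 + rng.normal(scale=0.01, size=x0.shape)

fit = least_squares(residuals, x0, args=(2, obs))
print("final reprojection RMS (px):", np.sqrt(np.mean(fit.fun ** 2)))
```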
23. Claim 16 is similar in scope to claim 2, and thus is rejected under similar rationale.

24. Claim 17 is similar in scope to the combination of claims 4 and 6, and thus is rejected under similar rationale.

25. Claim 18 is similar in scope to claim 7, and thus is rejected under similar rationale.

26. Claim 19 is similar in scope to claim 9, and thus is rejected under similar rationale.

27. Claim 20 is similar in scope to claim 1, and thus is rejected under similar rationale. Salgian additionally teaches A non-transitory computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to (“Embodiments in accordance with the disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a “virtual machine” running on one or more computing devices). For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory.” [0070])

Conclusion

28. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin whose telephone number is (571)270-3697. The examiner can normally be reached on Monday-Friday 8:00 AM-4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE CHIN/
Primary Examiner, Art Unit 2614

Prosecution Timeline

May 22, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602870
COMPUTER-AIDED TECHNIQUES FOR DESIGNING 3D SURFACES BASED ON GRADIENT SPECIFICATIONS
2y 5m to grant · Granted Apr 14, 2026
Patent 12597205
HYBRID GPU-CPU APPROACH FOR MESH GENERATION AND ADAPTIVE MESH REFINEMENT
2y 5m to grant · Granted Apr 07, 2026
Patent 12592041
MIXED SHEET EXTENSION
2y 5m to grant · Granted Mar 31, 2026
Patent 12586287
Method of Operating Shared GPU Resource and a Shared GPU Device
2y 5m to grant · Granted Mar 24, 2026
Patent 12579700
METHODS OF IMPERSONATION IN STREAMING MEDIA
2y 5m to grant · Granted Mar 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
97%
With Interview (+11.5%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 634 resolved cases by this examiner. Grant probability derived from career allow rate.
