DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s response to the Non-Final Office Action dated 10/21/2025, filed with the office on 01/20/2026, has been entered and made of record.
Status of Claims
Claims 1-20 are pending. Claims 21-33 are cancelled.
Response to Amendments
In light of Applicant’s amendments, the objections of record to the drawings have been withdrawn.
In light of Applicant’s amendments, the objection of record with respect to claim 14 is withdrawn.
In light of Applicant’s amendments to claim 5, the 112(b) rejections of record for insufficient antecedent basis have been withdrawn.
Response to Arguments
Applicant’s amendments to independent claims 1, 15, and 20, which have altered the scope of the claims of the instant application, have necessitated the new ground(s) of rejection presented in this Office action. Accordingly, because Applicant’s arguments are directed to the amended portions of the claims, new analyses have been presented below, which render Applicant’s arguments moot.
Consequently, THIS ACTION IS MADE FINAL.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-8, 11-12, 14-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cummings et al. (US 2016/0292918 A1) in view of xinreality (“Lighthouse From Virtual Reality and Augmented Reality Wiki” - 2022), and further in view of Gao et al. (“Easy Calibration of a Head-Mounted Projective Display for Augmented Reality Systems” – From IDS dated 09/27/2023).
Regarding claim 15, Cummings teaches “A headset for use in construction at a construction site, the headset comprising: an article of head wear (Cummings Figure 3A and paragraph [0054] "Headset 202 is further connected to display unit 213 and a set of cameras 214. Headset 202 includes processor 215, a set of sensors 216 connected to processor 215, and memory 217 connected to processor 215");
Cummings Figure 3A
one or more cameras (Cummings paragraph [0054] "Headset 202 is further connected to display unit 213 and a set of cameras 214");
a head-mounted display for displaying a virtual image of a building information model (BIM) (Cummings paragraph [0066] "Display units 402 and 409 provide a stereoscopic augmented view to user 401"); and
an electronic control system comprising at least one processor (Cummings paragraph [0055] "Wearable computer 313 is preferably a portable computing device, such as a laptop or tablet computer, worn as a backpack by user 301. Connection 314 provides a data and power connection from wearable computer 313 to headset 303. Headset 303 includes processor 310"), wherein the electronic control system is configured to:
obtain an image of a two-dimensional marker positioned within the construction site from the camera (Cummings paragraph [0093] "The camera captures an image of at least one of registration markers 805, 806, 807, and 811"), the two-dimensional marker being positioned with respect to defined positions within a coordinate system of the building information model (Cummings paragraph [0093] "Each of the positions of registration markers 805, 806, and 807 is associated with a position in a BIM. Survey location 810 is precisely positioned at a known location at construction site 800 and saved in the BIM");
determine (Cummings paragraph [0093] "A wearable computer of user device 808 decodes the captured image to determine a real location of at least one of registration markers 805, 806, 807, and 811. The wearable computer determines a corresponding virtual location in the BIM");
wherein the (Cummings paragraph [0094] "user 809 is standing in construction site 800 wearing user device 808 and looking down at location 812 where object 813 is to be installed. Registration marker 805 is in view of user device 808. The projected BIM shows the correct installation position 814 in view of user 809 as if the user were standing inside the BIM").”
However Cummings is not relied on to teach “a set of sensor devices for a positioning system, the set of sensor devices operating to track a position and orientation of the headset at the construction site over time with respect to an origin of a coordinate system used by the positioning system”, “determine a pose of the headset using the positioning system, the positioning system determining a pose of the headset independently of the two-dimensional marker”, “determine a BIM-to-camera transformation”, “determine a camera-to-positioning transformation between the origin of the coordinate system used by the camera and an origin of the coordinate system used by the positioning system based on a specification of the spatial relationship between the headset and the camera; and determine a BIM-to-positioning transformation between the coordinate system used by the positioning system and the coordinate system used by the building information model based on the BIM-to-camera transformation and the camera-to-positioning transformation” and “BIM-to-positioning transformation”.
In an analogous field of endeavor, xinreality teaches “a set of sensor devices for a positioning system, the set of sensor devices operating to track a position and orientation of the headset at the construction site over time with respect to an origin of a coordinate system used by the positioning system (xinreality paragraph 2 "These Base Stations are small rectangular objects placed in the tracking area. They serve as reference points for any positionally tracked devices such as the HMDs and controllers. Base Stations perform this function by constantly flooding the room with a non-visible light. The receptors on the tracked devices would intercept the light and figure out where they are in relation to the Base Stations. Multiple Base Stations (2 for Steam VR) allow the tracked devices to figure out where they are in the 3D space")” and “determine a pose of the headset using the positioning system, the positioning system determining a pose of the headset independently of the two-dimensional marker (xinreality paragraph 1 ""Lighthouse is a laser-based inside-out positional tracker system […] It accurately tracks the position and orientation of the user's head-mounted Display and controllers in real time")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine an augmented reality capable construction helmet for comparing a BIM to an actual construction site, as taught by Cummings, with the use of base stations for determining the position of the headset, as taught by xinreality.
The suggestion/motivation for doing so would have been the need in the field of endeavor of augmented reality to accurately determine position information: "Lighthouse calculates When the photosensor is hit by the laser and Where that photosensor is located to find the exact position of the receptor in relation to the Base Station. When there are 2 Base Stations, the position and the orientation of the receptors in the 3D space of the room is established" as noted by the xinreality disclosure page 2 paragraph 1.
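As an illustrative aside (examiner's note; not drawn from the cited disclosures, and all names and timing values below are hypothetical), the sweep-timing principle the xinreality page describes can be sketched as follows: a photosensor's angular position relative to a base station follows from the delay between the station's sync pulse and the laser sweep hitting the sensor.

```python
import math

ROTOR_PERIOD_S = 1 / 60  # hypothetical 60 Hz sweep period for a base station rotor

def sweep_angle(t_sync: float, t_hit: float) -> float:
    """Convert the delay between the sync pulse and the laser hitting a
    photosensor into the sweep angle (radians) at which the sensor sits."""
    return 2 * math.pi * ((t_hit - t_sync) / ROTOR_PERIOD_S)

# One horizontal and one vertical sweep per base station yield two angles,
# i.e. a ray from that station toward the sensor; rays from two stations
# intersect (approximately) at the sensor's 3D position.
azimuth = sweep_angle(t_sync=0.0, t_hit=0.004)
elevation = sweep_angle(t_sync=0.0, t_hit=0.002)
print(azimuth, elevation)
```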
However, the combination of Cummings and xinreality is not relied on to teach “determine a BIM-to-camera transformation”, “determine a camera-to-positioning transformation between the origin of the coordinate system used by the camera and an origin of the coordinate system used by the positioning system based on a specification of the spatial relationship between the headset and the camera; and determine a BIM-to-positioning transformation between the coordinate system used by the positioning system and the coordinate system used by the building information model based on the BIM-to-camera transformation and the camera-to-positioning transformation” and “BIM-to-positioning transformation”.
Gao teaches “determine a BIM-to-camera transformation between an origin of a coordinate system used by the building information model and an origin of the coordinate system used by the camera (Gao Figure 2 and page 3 left hand column paragraph 1 "virtual environment, we define a virtual world coordinates (VWC), $W^{v}XYZ$. Two virtual cameras are properly placed in the VWC to generate the 2D projections of a 3D world. Given a 3D point $P_{W}^{v}(X_{W}^{v}, Y_{W}^{v}, Z_{W}^{v}, 1)$ in the VWC, its 2D projection $P_{I}^{v}(X_{I}^{v}, Y_{I}^{v}, Z_{I}^{v})$ on the viewing plane of a virtual camera is given by $P_{I}^{v} = M_{C}^{v} T_{C^{v}W^{v}} P_{W}^{v}$") based on a location of the two-dimensional marker within the image;
determine a camera-to-positioning transformation between the origin of the coordinate system used by the camera and an origin of the coordinate system used by the positioning system based on a specification of the spatial relationship between the headset and the camera (Gao Figure 2 and page 3 left hand column paragraph 2 "The corresponding viewport coordinates (u, v) are $u = x_{i}^{v}/w_{i}^{v}$ and $v = y_{i}^{v}/w_{i}^{v}$, respectively. $T_{C^{v}W^{v}}$ is a rigid transformation that transforms a 3D point from the world reference to the virtual camera reference, and $M_{C}$ represents the imaging properties of the virtual camera."); and
determine a BIM-to-positioning transformation between the coordinate system used by the positioning system and the coordinate system used by the building information model based on the BIM-to-camera transformation and the camera-to-positioning transformation (Gao Figure 2 and page 3 left hand column paragraph 3 "Through a viewing device such as the HMPD, a user's eye observes the superposition of the projections of both a virtual object and its physical counterpart. Given a 3D point $P_{W}^{p}(X_{W}^{p}, Y_{W}^{p}, Z_{W}^{p}, 1)$ in the PWC, its 2D projection $P_{I}^{p}(X_{I}^{p}, Y_{I}^{p}, Z_{I}^{p})$ on the display window of the viewing system is given by $P_{I}^{p} = M_{E} T_{EW^{p}} P_{W}^{p}$, where $T_{EW^{p}}$ is a rigid transformation that transforms a 3D point from the world reference to the eye reference $EXYZ$, and $M_{E}$ represents the imaging properties of the viewing device"), wherein the BIM-to-positioning transformation is used to render a virtual image of the building information model relative to the pose of the headset (Gao Figure 2 and page 3 left hand column paragraph 4 "To superimpose the virtual environment precisely on the real environment, the virtual cameras should be positioned and orientated in the same way as the user's eyes in the real world, and their imaging parameters should match with those of the viewing device, given the assumption that the virtual world reference is well aligned with the real world reference") on the head-mounted display.”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine an augmented reality capable construction helmet for comparing a BIM to an actual construction site, as taught by Cummings and xinreality, with the use of transformations to calibrate information displayed via a headset into the real world, as taught by Gao.
The suggestion/motivation for doing so would have been the need in the field of endeavor of augmented reality to accurately determine position information “The focus of this paper is to present a fast and easy off-line calibration method to improve static registration in a custom-designed augmented reality system, which is based upon head-mounted projective display (HMPD) technology [8, 11, 19]. With minor modifications, the proposed method can be adapted for other optical see-through HMDs" as noted by the Gao disclosure page 1 right hand column paragraph 2.
Therefore, it would have been obvious to combine the disclosure of Cummings and xinreality with the Gao disclosure to obtain the invention as specified in claim 15, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 1 recites a method with steps corresponding to the device elements recited in claim 15. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements of device claim 15. Additionally, the rationale and motivation to combine the Cummings, xinreality, and Gao references, presented in the rejection of claim 15, apply to this claim.
Regarding claim 2, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the two-dimensional marker is positioned in relation to a set of control markers (Cummings paragraph [0093] "Reference marker 811 is a master reference point based on the location of the survey location 810"), the set of control markers having defined coordinates within the coordinate system of the building information model (Cummings paragraph [0093] "Each of the positions of registration markers 805, 806, and 807 is associated with a position in a BIM. Survey location 810 is precisely positioned at a known location at construction site 800 and saved in the BIM").”
Regarding claim 3, the combination of Cummings, xinreality, and Gao teaches “The method of claim 2, comprising: positioning the two-dimensional marker upon a planar surface at the construction site (Cummings paragraph [0093] "Each of registration markers 805, 806, and 807 is positioned from reference marker 811 to ensure proper location of floor 801 and walls 802 and 803");
positioning at least three control markers in relation to the two-dimensional marker (Cummings Figure 8 and paragraph [0092] "Registration system 804 includes registration markers 805, 806, and 807 positioned at precise locations on floor 801, wall 802 and wall 803, respectively and serve as a set of reference points for user device 808 worn by user 809");
Cummings Figure 8
measuring locations of the at least three control markers using a surveying device (Cummings paragraph [0093] "Reference marker 811 is a master reference point based on the location of the survey location 810"); and
storing the measured locations of the at least three control markers with reference to the coordinate system of the building information model, wherein the spatial relationship between the at least three control markers and the two-dimensional marker is defined (Cummings paragraph [0093] "Each of the positions of registration markers 805, 806, and 807 is associated with a position in a BIM. Survey location 810 is precisely positioned at a known location at construction site 800 and saved in the BIM").”
Regarding claim 5, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the origin of the coordinate system used by the building information model, the origin of the coordinate system of the camera, and the origin of the coordinate system used by the positioning system are defined with six degrees of freedom (Cummings paragraph [0125] "the set of motion detection data and the position of the user device are combined to determine an x, y, z position of the user device in reality and in the BIM and a roll, pitch, and yaw or detection of the user device in reality and the BIM") and the BIM-to-camera transformation, the camera-to-positioning transformation, and the BIM-to-positioning transformation are defined as matrix transformations with rotation and translation terms (Gao page 3 right hand column paragraph 2 "The key of the HMPD calibration is to estimate the viewing projection matrix $M_{E}$ that is used to specify the imaging properties of a virtual camera and the eye transformation $T_{Es}$ that is used to define the position and orientation of a virtual camera with respect to the sensor reference of the head tracker").”
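For clarity (an illustrative note, not quoted from the references), a matrix transformation with rotation and translation terms is conventionally written in homogeneous form, and the claimed BIM-to-positioning transformation follows by composition:

```latex
T =
\begin{bmatrix}
R & t \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad R \in SO(3),\; t \in \mathbb{R}^{3},
\qquad
T_{\mathrm{BIM}\to\mathrm{pos}} = T_{\mathrm{cam}\to\mathrm{pos}}\, T_{\mathrm{BIM}\to\mathrm{cam}}.
```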
The proposed combination as well as the motivation for combining Cummings, xinreality, and Gao references presented in the rejection of claim 15, applies to claim 5. Finally the method recited in claim 5 is met by Cummings, xinreality, and Gao.
Regarding claim 6 (similarly claim 18), the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the method is repeated during movement of the headset at the construction site to obtain a plurality of estimates for the BIM-to-positioning transformation (Cummings paragraph [0125] "the set of motion detection data and the position of the user device are combined to determine an x, y, z position of the user device in reality and in the BIM and a roll, pitch, and yaw or detection of the user device in reality and the BIM"), and wherein the method further comprises:
optimising the plurality of estimates for the BIM-to-positioning transformation to determine an optimised BIM-to-positioning transformation (Gao Figure 2 and page 3 left hand column paragraph 4 "To superimpose the virtual environment precisely on the real environment, the virtual cameras should be positioned and orientated in the same way as the user's eyes in the real world, and their imaging parameters should match with those of the viewing device, given the assumption that the virtual world reference is well aligned with the real world reference") for use in displaying the building information model (Cummings paragraph [0126] "Referring to FIG. 17, step 1505 will be further described as method 1700 for rendering a stereoscopic overlay according to the position and the orientation of the user device for a user device. Method 1700 begins at step 1701. At step 1702, a BIM is rotated and magnified based on the position and the orientation of the user device").”
The proposed combination as well as the motivation for combining Cummings, xinreality, and Gao references presented in the rejection of claim 15, applies to claim 6. Finally the method recited in claim 6 is met by Cummings, xinreality, and Gao.
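As an illustrative aside (one conventional way to optimise a plurality of pose estimates; not asserted to be the method of the cited references), repeated estimates of the BIM-to-positioning transformation can be fused by averaging translations and sign-aligned unit quaternions:

```python
import numpy as np

def average_poses(quats: np.ndarray, trans: np.ndarray):
    """Average N pose estimates given as unit quaternions (N, 4) and
    translations (N, 3). Quaternions are sign-aligned to the first
    estimate before a normalised mean (a common approximation)."""
    q_ref = quats[0]
    aligned = np.where((quats @ q_ref)[:, None] < 0, -quats, quats)
    q_mean = aligned.mean(axis=0)
    q_mean /= np.linalg.norm(q_mean)
    return q_mean, trans.mean(axis=0)

# Hypothetical estimates collected as the headset moves about the site.
quats = np.array([[0.0, 0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.01, 0.99995]])
trans = np.array([[1.0, 2.0, 0.0],
                  [1.02, 1.98, 0.01]])
print(average_poses(quats, trans))
```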
Regarding claim 7, the combination of Cummings, xinreality, and Gao teaches “The method of claim 6, wherein the construction site comprises a plurality of two-dimensional markers positioned at different locations (Cummings Figure 8 and paragraph [0093] "Each of registration markers 805, 806, and 807 is positioned from reference marker 811 to ensure proper location of floor 801 and walls 802 and 803").”
Regarding claim 8, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the headset forms part of a construction helmet (Cummings paragraph [0055] "Hard hat 302 is worn by user 301"), the set of sensor devices comprise at least one camera (Cummings paragraph [0054] "Headset 202 is further connected to display unit 213 and a set of cameras 214"), and the positioning system (xinreality paragraph 2 "These Base Stations are small rectangular objects placed in the tracking area. They serve as reference points for any positionally tracked devices such as the HMDs and controllers. Base Stations perform this function by constantly flooding the room with a non-visible light. The receptors on the tracked devices would intercept the light and figure out where they are in relation to the Base Stations. Multiple Base Stations (2 for Steam VR) allow the tracked devices to figure out where they are in the 3D space") comprises a simultaneous localisation and mapping system (Cummings paragraph [0125] "the set of motion detection data and the position of the user device are combined to determine an x, y, z position of the user device in reality and in the BIM and a roll, pitch, and yaw or detection of the user device in reality and the BIM").”
The proposed combination as well as the motivation for combining Cummings, xinreality, and Gao references presented in the rejection of claim 15, applies to claim 8. Finally the method recited in claim 8 is met by Cummings, xinreality, and Gao.
Regarding claim 11, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein determining a pose of the headset using a positioning system comprises determining a set of poses for the headset using a plurality of positioning systems (Cummings paragraph [0060] "As the user moves, headset 303 and wearable computer 313 tracks the location of user 301 and the position and orientation of the user's head using camera 326 and/or camera matrix 318"), each positioning system having a different coordinate system (xinreality paragraph 2 "These Base Stations are small rectangular objects placed in the tracking area. They serve as reference points for any positionally tracked devices such as the HMDs and controllers. Base Stations perform this function by constantly flooding the room with a non-visible light. The receptors on the tracked devices would intercept the light and figure out where they are in relation to the Base Stations. Multiple Base Stations (2 for Steam VR) allow the tracked devices to figure out where they are in the 3D space") and origin within said coordinate system (Cummings paragraph [0080] "camera matrix 318 provides a 360° view of the surroundings of a user. In other embodiments, other numbers of cameras, angular positions, and field of view ranges may be employed to provide a 360° view");
wherein determining a camera-to-positioning transformation (Gao Figure 2 and page 3 left hand column paragraph 2 "The corresponding viewport coordinates (u, v) are $u = x_{i}^{v}/w_{i}^{v}$ and $v = y_{i}^{v}/w_{i}^{v}$, respectively. $T_{C^{v}W^{v}}$ is a rigid transformation that transforms a 3D point from the world reference to the virtual camera reference, and $M_{C}$ represents the imaging properties of the virtual camera") comprises determining a camera-to-positioning transformation for each positioning system (Cummings paragraph [0099] "a set of cameras captures a set of registration images. The set of marker images is used to determine the position of the user. In another example, images 1004 are still or video images captured by a set of cameras adjacent to the eyes of the user and saved to memory for later upload or streamed to a server. Point of view image 1005 is captured by the set of headset cameras adjacent to the eyes of a user"); and
wherein determining a BIM-to-positioning transformation comprises determining at least one transformation to map the coordinate system used by each positioning system to the coordinate system used by the building information model (Gao Figure 2 and page 3 left hand column paragraph 3 "Through a viewing device such as the HMPD, a user's eye observes the superposition of the projections of both a virtual object and its physical counterpart. Given a 3D point $P_{W}^{p}(X_{W}^{p}, Y_{W}^{p}, Z_{W}^{p}, 1)$ in the PWC, its 2D projection $P_{I}^{p}(X_{I}^{p}, Y_{I}^{p}, Z_{I}^{p})$ on the display window of the viewing system is given by $P_{I}^{p} = M_{E} T_{EW^{p}} P_{W}^{p}$, where $T_{EW^{p}}$ is a rigid transformation that transforms a 3D point from the world reference to the eye reference $EXYZ$, and $M_{E}$ represents the imaging properties of the viewing device").”
The proposed combination as well as the motivation for combining Cummings, xinreality, and Gao references presented in the rejection of claim 15, applies to claim 11. Finally the method recited in claim 11 is met by Cummings, xinreality, and Gao.
Regarding claim 12, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the two-dimensional marker is attached to a planar surface of the construction site that is located within the building information model (Cummings Figure 8 and paragraph [0093] "Each of registration markers 805, 806, and 807 is positioned from reference marker 811 to ensure proper location of floor 801 and walls 802 and 803").”
Regarding claim 14, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, comprising: extracting coded information (Cummings paragraph [0115] "At step 1405 the code is read to determine the set of dimensions of the shape of the registration marker, including an actual height and an actual width") from the two-dimensional marker (Cummings paragraph [0096] "Referring to FIGS. 9A and 9B, registration marker 901 includes shape 903 and code 904"); and
using the extracted coded information to determine the defined positions within the coordinate system of the building information model (Cummings paragraph [0097] "each of codes 904 and 906 includes a set of marker information, including a set of dimensions of shapes 903 and 905, and a set of x, y, z coordinates position at which registration markers 901 and 902 are placed, and a description of each shape and location").”
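As an illustrative aside (the payload format below is hypothetical and not taken from Cummings), decoding a marker's code into BIM coordinates and physical dimensions might look like:

```python
import json

def decode_marker_payload(payload: str):
    """Parse a (hypothetical) JSON payload carried by a marker's code into
    its BIM coordinates and printed dimensions."""
    data = json.loads(payload)
    position = (data["x"], data["y"], data["z"])   # position in the BIM coordinate system
    size = (data["width"], data["height"])         # actual printed size of the marker
    return position, size

payload = '{"x": 12.5, "y": 3.0, "z": 0.0, "width": 0.2, "height": 0.2}'
print(decode_marker_payload(payload))
```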
Regarding claim 19, the combination of Cummings, xinreality, and Gao teaches “The headset of claim 15, wherein the positioning system (xinreality paragraph 2 "These Base Stations are small rectangular objects placed in the tracking area. They serve as reference points for any positionally tracked devices such as the HMDs and controllers. Base Stations perform this function by constantly flooding the room with a non-visible light. The receptors on the tracked devices would intercept the light and figure out where they are in relation to the Base Stations. Multiple Base Stations (2 for Steam VR) allow the tracked devices to figure out where they are in the 3D space") forms part of one or more positioning systems coupled to the headset (Cummings paragraph [0125] "At step 1603, a set of motion detection data is received from a set of sensors in the user device to determine movement of the user device"), and wherein the electronic control system is further configured to determine a BIM-to-positioning transformation for each of the one or more positioning systems (Cummings paragraph [0126] "Referring to FIG. 17, step 1505 will be further described as method 1700 for rendering a stereoscopic overlay according to the position and the orientation of the user device for a user device. Method 1700 begins at step 1701. At step 1702, a BIM is rotated and magnified based on the position and the orientation of the user device").”
The proposed combination as well as the motivation for combining Cummings, xinreality, and Gao references presented in the rejection of claim 15, applies to claim 19. Finally the device recited in claim 19 is met by Cummings, xinreality, and Gao.
Claim 20 recites a computer readable medium including computer executable instructions corresponding to the elements of the device recited in claim 15. Therefore, the recited instructions of the computer readable medium of claim 20 are mapped to the proposed combination in the same manner as the corresponding elements of the device claim 15. Additionally, the rationale and motivation to combine Cummings, xinreality, and Gao presented in rejection of claim 15, apply to this claim.
Claims 4 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Cummings, xinreality, and Gao in view of Swaminathan et al. (EP 2509028 A1).
Regarding claim 4, the combination of Cummings, xinreality, and Gao teaches the method of claim 1. However, the combination of Cummings, xinreality, and Gao does not teach “detecting a set of corners for the two-dimensional marker, the set of corners being defined by data indicating a set of two-dimensional corner coordinates with respect to the image, and mapping at least three of the set of two-dimensional corner coordinates to corresponding locations of the corners as defined in the coordinate system of the building information model to determine the BIM-to-camera transformation.”
Swaminathan teaches “detecting a set of corners for the two-dimensional marker (Swaminathan paragraph [0048] "Thus, a simple linear scan along every edge of the bounding box reveals the corresponding marker corner trivially as shown in Fig. 6"), the set of corners being defined by data indicating a set of two-dimensional corner coordinates with respect to the image (Swaminathan paragraph [0048] "the image containing the detected marker is as shown in Fig. 6 (a), where every corner of the marker essentially defines the bounding box"); and
mapping at least three of the set of two-dimensional corner coordinates to corresponding locations of the corners as defined in the coordinate system of the building information model to determine the BIM-to-camera transformation (Swaminathan paragraph [0079] "after marker detection using Euler numbers and our robust edge based corner detection can we also determine an exact correspondence between precise image corners of the marker and the world coordinates of the marker").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine an augmented reality capable construction helmet for comparing a BIM to an actual construction site, as taught by Cummings, xinreality, and Gao, with the use of marker corner detection as taught by Swaminathan.
The suggestion/motivation for doing so would have been that "In order to perform accurate AR [Augmented Reality], one needs to extract the corners of the marker precisely in order to estimate the position and orientation of the camera" as noted by the Swaminathan disclosure paragraph 47.
Therefore, it would have been obvious to combine the disclosure of Cummings, xinreality, and Gao with the Swaminathan disclosure to obtain the invention as specified in claim 4, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
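As an illustrative aside (the use of OpenCV and all numeric values below are assumptions, not taken from Swaminathan), mapping detected 2D marker corners to their known world coordinates to recover a camera pose is a standard perspective-n-point problem:

```python
import numpy as np
import cv2

# Hypothetical marker: a 0.2 m square whose corner coordinates are known
# in the world (BIM) frame, detected at these pixel locations in the image.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [422.0, 338.0],
                         [318.0, 340.0]], dtype=np.float64)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])

# Three or more 2D-3D correspondences fix the camera pose; four marker
# corners yield the rotation (rvec) and translation (tvec) relating the
# world frame to the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
print(ok, rvec.ravel(), tvec.ravel())
```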
Regarding claim 16, the combination of Cummings, xinreality, Gao, and Swaminathan teaches “The headset of claim 15, wherein positions of a set of corners of the two-dimensional marker (Swaminathan paragraph [0079] "after marker detection using Euler numbers and our robust edge based corner detection can we also determine an exact correspondence between precise image corners of the marker and the world coordinates of the marker") are defined within the building information model and the two-dimensional marker is attached to a structure at the construction site that is defined within the building information model (Cummings paragraph [0093] "Each of the positions of registration markers 805, 806, and 807 is associated with a position in a BIM. Survey location 810 is precisely positioned at a known location at construction site 800 and saved in the BIM").”
The proposed combination as well as the motivation for combining the Cummings, xinreality, Gao, and Swaminathan references presented in the rejection of claim 4, applies to claim 16. Finally, the product recited in claim 16 is met by Cummings, xinreality, Gao, and Swaminathan.
Regarding claim 17, the combination of Cummings, xinreality, Gao, and Swaminathan teaches “The headset of claim 16, wherein the two-dimensional marker is positioned within the construction site in relation to a plurality of control markers (Cummings paragraph [0093] "Reference marker 811 is a master reference point based on the location of the survey location 810"), wherein the measured coordinates of the plurality of control markers are used to determine the defined positions within the coordinate system of the building information model (Cummings paragraph [0093] "Each of the positions of registration markers 805, 806, and 807 is associated with a position in a BIM. Survey location 810 is precisely positioned at a known location at construction site 800 and saved in the BIM").”
The proposed combination as well as the motivation for combining Cummings, xinreality, Gao, and Swaminathan references presented in the rejection of claim 4, applies to claim 17. Finally the product recited in claim 17 is met by Cummings, xinreality, Gao, and Swaminathan.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Cummings, xinreality, and Gao in view of Bare et al. (US 2019/0096138 A1).
Regarding claim 9, the combination of Cummings, xinreality, and Gao teaches “The method of claim 1, wherein the headset forms part of a construction helmet (Cummings paragraph [0054] and [0055] "Headset 202 is further connected to display unit 213 and a set of cameras 214. Headset 202 includes processor 215, a set of sensors 216 connected to processor 215, and memory 217 connected to processor 215 […] Hard hat 302 is worn by user 301") and the set of sensor devices comprise a plurality of markers mounted on an external surface of the construction helmet (Cummings Figure 3A and paragraph [0057] "Camera matrix 318 includes halo 319 and halo 321, each of which is detachably connected to headset 303. A set of base cameras 320 is connected to halo 319 and in communication with headset 303. A set of angled cameras 322 is connected to halo 321 and in communication with headset 303"),
wherein the positioning system comprises a set of cameras (Cummings paragraph [0057] "Camera matrix 318 includes halo 319 and halo 321, each of which is detachably connected to headset 303. A set of base cameras 320 is connected to halo 319 and in communication with headset 303. A set of angled cameras 322 is connected to halo 321 and in communication with headset 303")”.
However, the combination of Cummings, xinreality, and Gao is not relied on to teach “to detect electromagnetic radiation from the plurality of markers to track the headset within a tracked volume.”
Bare teaches “detect electromagnetic radiation from the plurality of markers to track the headset within a tracked volume (Bare paragraph [0039] "the local positioning system 109 may use electromagnetic or sound waves emanating from various points within the physical spaces in the structural environment").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine an augmented reality capable construction helmet for comparing a BIM to an actual construction site, as taught by Cummings, xinreality, and Gao, with the use of an electromagnetic radiation positioning system as taught by Bare.
The suggestion/motivation for doing so would have been the need for “real-time display of AR information on a mobile device immersed in and movable within a dynamic environment. The challenges presented by this scenario include determination of the location of and orientation of the mobile device within the environment, recognition of variations in the spatial geometry of the environment, and detection/identification of changes in other measurable parameters associated with the environment or objects within the environment. The scenario also presents the challenge of differentiating actual object or structure discrepancies from apparent changes resulting from spatial changes in the dynamic environment" as noted by the Bare disclosure paragraph 21.
Therefore, it would have been obvious to combine the disclosure of Cummings, xinreality, and Gao with the Bare disclosure to obtain the invention as specified in claim 9 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Regarding claim 10, the combination of Cummings, xinreality, Gao, and Bare teaches “The method of claim 1, wherein the headset forms part of a construction helmet (Cummings paragraph [0054] and [0055] "Headset 202 is further connected to display unit 213 and a set of cameras 214. Headset 202 includes processor 215, a set of sensors 216 connected to processor 215, and memory 217 connected to processor 215 […] Hard hat 302 is worn by user 301") and the set of sensor devices comprise a plurality of sensor devices mounted on an external surface of the construction helmet (Cummings Figure 3A and paragraph [0057] "Camera matrix 318 includes halo 319 and halo 321, each of which is detachably connected to headset 303. A set of base cameras 320 is connected to halo 319 and in communication with headset 303. A set of angled cameras 322 is connected to halo 321 and in communication with headset 303"),
wherein the positioning system comprises a set of external beacon devices (xinreality page 2 paragraph 2 “Lighthouse calculates When the photosensor is hit by the laser and Where that photosensor is located to find the exact position of the receptor in relation to the Base Station. When there are 2 Base Stations, the position and the orientation of the receptors in the 3D space of the room is established”) that emit one or more beams of electromagnetic radiation that are detected by the plurality of sensor devices to track the headset within a tracked volume (Bare paragraph [0039] "the local positioning system 109 may use electromagnetic or sound waves emanating from various points within the physical spaces in the structural environment. Examples of electromagnetic or sound waves include radio frequency identification (RFID) signals, radio signals, WiFi signals, audio tones, and/or sound waves").”
The proposed combination as well as the motivation for combining Cummings, xinreality, Gao, and Bare references presented in the rejection of claim 9, applies to claim 10. Finally the method recited in claim 10 is met by Cummings, xinreality, Gao, and Bare.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Cummings, xinreality, and Gao in view of Varghese et al. (“Dual Quaternion based IMU and Vision Fusion Framework for Mobile Augmented Reality” – Published 2015).
Regarding claim 13, the combination of Cummings, xinreality, and Gao teaches the method of claim 1. However, the combination of Cummings, xinreality, and Gao does not teach “wherein positions and orientations are defined using dual quaternion coordinates.”
Varghese teaches “wherein positions and orientations are defined using dual quaternion coordinates (Varghese page 2 right hand column paragraph 1 "The motion dynamics of the device is modelled in terms of dual quaternion to estimate the pose").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine an augmented reality capable construction helmet for comparing a BIM to an actual construction site, as taught by Cummings, xinreality, and Gao, with the use of dual quaternions for defining the pose as taught by Varghese.
The suggestion/motivation for doing so would have been that "Dual quaternion has the advantage of compactness, non-singularity and computation efficiency over conventional approaches like transformation matrix" as noted by the Varghese disclosure page 2 right hand column paragraph 2.
Therefore, it would have been obvious to combine the disclosure of Cummings, xinreality, and Gao with the Varghese disclosure to obtain the invention as specified in claim 13 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
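As an illustrative aside (a sketch under the stated convention, not asserted to be Varghese's implementation), a pose with rotation quaternion q_r and translation t can be packed into a dual quaternion whose dual part is ½·t·q_r:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def pose_to_dual_quat(q_rot, t):
    """Encode rotation q_rot (unit quaternion) and translation t as a dual
    quaternion (real part, dual part), with dual = 0.5 * (0, t) * q_rot."""
    t_quat = np.array([0.0, *t])
    return q_rot, 0.5 * quat_mul(t_quat, q_rot)

real, dual = pose_to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]),
                               np.array([1.0, 2.0, 3.0]))
print(real, dual)
```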
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday 9:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASPREET KAUR/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662