DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 43 is objected to because of the following informalities:
Regarding the limitation “consisting of: a flow sensor, an air flow sensor, a temperature sensor, a smoke sensor, a motion sensor, a presence sensor,”, the examiner suggests changing it to “consisting of: a flow sensor, an air flow sensor, a temperature sensor, a smoke sensor, a motion sensor, and a presence sensor;” instead. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 43-48 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Referring to claim 43, the claim recites the following limitation on page 4:
The limitation “a communication unit (103)…” is not clear. It is unclear to the examiner whether a new communication unit (103) is being introduced or whether it is the same as “the integrated computing and communication unit (103)”. There is insufficient antecedent basis for this limitation in the claim.
Claim Interpretation - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claim 43 limitation “external processing unit” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses the generic placeholder “unit” coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Claim elements in this application that use the word “unit” are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “unit” are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
Since the claim limitation(s) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claim(s) 43-48 have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites/recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Response to Arguments
Applicant's arguments filed 8/27/2025 have been fully considered but they are not persuasive.
Applicant argues on page 10 of 11 that the cited references fail to teach the newly added limitation, which is directed to the matching and sonar-image reference-data features addressed below. This argument is not persuasive.
Broaddus teaches matching the stored fused sensor data of a given spatial direction of the user in the integrated computing and communication unit (103) to reference data in a reference data database (20) (Fig. 5; 504) based on the spatial direction of the user ([0017,0099]; compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data…, [0060-0061]; In some example embodiments, an augmented reality (AR) application 508 is stored in the memory 504 or implemented as part of the hardware of the processor 506, and is executable by the processor 506. The AR application 508 provides AR content based on identified objects in a physical environment and a spatial state of the display device 500…, [0071]; At operation 802, the VIN module 112 accesses IMU data from the inertial sensor 104. At operation 804, the VIN module 112 computes a first estimated spatial state of the position and orientation determination device 100 based on the IMU data. In some example embodiments, operation 804 may be implemented using the state estimation module 208. At operation 806, the VIN module 112 accesses video data from the image capture device 102. At operation 808, the VIN module 112 adjusts the first estimated spatial state of the position and orientation determination device 100 based on the video data to generate a second estimated spatial state. In some example embodiments, operation 808 may be implemented using the feature detection module 202 and the feature matching module 204. At operation 810, the VIN module 112 accesses radio-based sensor data (e.g., GPS data, Bluetooth data, WiFi data, UWB data) from the radio-based sensor 106. At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208 );
using two-dimensional camera images of different spatial directions of the user as the reference data ([0017]; The radio-based sensor generates radio-based sensor data based on an absolute reference frame relative to the device. The processor is configured to synchronize the plurality of video frames with the IMU data, compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data..., [0023]; the image capture device 102 comprises a built-in camera or camcorder with which the position and orientation determination device 100 can capture image/video data of visual content in a real-world environment (e.g., a real-world physical object). The image data may comprise one or more still images or video frames…, [0037]; The calibration process consists of observing a known 2D or 3D pattern in the world in all the cameras on the position and orientation determination device 100 and IMU data over several frames…, and [0060-0061]; In some example embodiments, an augmented reality (AR) application 508 is stored in the memory 504 or implemented as part of the hardware of the processor 506, and is executable by the processor 506. The AR application 508 provides AR content based on identified objects in a physical environment and a spatial state of the display device 500.).
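To make the cited pipeline concrete, the following is a purely illustrative sketch of the Broaddus [0071] sequence (IMU propagation at operations 802-804, visual correction at 806-808, radio triangulation at 810-812, and the update at 814). The function names, gains, and simplified kinematics are hypothetical assumptions, not code from either reference:

```python
# Illustrative sketch of the Broaddus [0071] pipeline; all names are
# hypothetical and the kinematics are deliberately simplified.
import numpy as np


def integrate_imu(accel: np.ndarray, dt: float) -> np.ndarray:
    """Ops 802-804: first estimated position from double-integrated IMU data."""
    vel = np.cumsum(accel * dt, axis=0)          # integrate acceleration
    return np.sum(vel * dt, axis=0)              # integrate velocity


def correct_with_vision(est: np.ndarray, visual_offset: np.ndarray) -> np.ndarray:
    """Ops 806-808: adjust the first estimate with visually tracked features."""
    return est + 0.5 * visual_offset             # gain of 0.5 is assumed


def triangulate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Ops 810-812: least-squares position from ranges to known radio anchors."""
    # Linearize ||x - a_i||^2 = r_i^2 against the first anchor a_0.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x


if __name__ == "__main__":
    accel = np.tile([0.1, 0.0, 0.0], (100, 1))                  # fake IMU samples
    est = integrate_imu(accel, dt=0.01)                         # ops 802-804
    est = correct_with_vision(est, np.array([0.02, 0.0, 0.0]))  # ops 806-808
    anchors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                        [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
    fix = triangulate(anchors, np.linalg.norm(anchors - est, axis=1))  # 810-812
    est = 0.5 * (est + fix)                                     # op 814: update
    print("estimated position:", est)
```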
However, Broaddus does not explicitly disclose using sonar images.
In an analogous art, Stokes discloses using sonar images (Stokes- [00120]; As illustrated in Fig. 4, wearable portable imaging device 420 may include one or more imaging modules 423, which may be implemented as visible spectrum and/or infrared imaging modules configured to provide monocular (e.g., copied to both displays 426) and/or stereoscopic image data depending on the number and arrangement of imaging modules and the type of image processing applied to image data provided by imaging modules 423. In addition, an OPS (e.g., OPS 230 of Fig. 2B) may be integrated with any of imaging modules 423, displays 426, and/or frame 440 and be configured to provide a position and/or orientation of one or more of the features to facilitate determining FOVs for displays 426. In some embodiments, portable imaging device 420 may be configured to determine portion 430 of the FOV of display 426 and use an OPS and actuator in an associated transducer assembly (e.g., actuator 116 coupled to transducer assembly 112 of sonar system 110 in Fig. 1B) to ensonify at least a subset of portion 430 substantially in real time as a user adjusts a position or orientation of wearable portable imaging device 420 by, for example, moving the user's head. Sonar data provided by the associated transducer assembly may be rendered using position data and/or orientation data provided by the OPS to correlate the sonar data with portion 430, for example, and/or to facilitate other rendering processing described herein.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Stokes to the system of Broaddus in order to provide an intuitive, meaningful, and relatively full representation of the environment, particularly in the context of aiding in the navigation of a mobile structure.
Thus, the combination of Broaddus and Stokes teaches the limitation presented above.
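For illustration of the Stokes [00120] technique only, the sketch below reduces the geometry to 2D bearings and keeps only the sonar returns that fall inside the display's field of view as the user turns their head; every identifier and the angular scheme are assumptions, not taken from the reference:

```python
# Hypothetical 2D reduction of the Stokes [00120] idea: keep only the sonar
# returns whose bearing lies inside the display FOV for the current heading.
def bearings_in_fov(heading_deg: float, fov_deg: float,
                    sonar_returns: dict) -> dict:
    half = fov_deg / 2.0

    def inside(bearing: float) -> bool:
        # Smallest signed angle between the return's bearing and the heading.
        delta = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        return abs(delta) <= half

    return {b: r for b, r in sonar_returns.items() if inside(b)}


if __name__ == "__main__":
    returns = {0.0: 12.5, 30.0: 8.0, 90.0: 3.2, 200.0: 15.0}  # bearing -> range (m)
    # Head turned to 20 deg with a 60 deg FOV keeps the 0 deg and 30 deg returns.
    print(bearings_in_fov(heading_deg=20.0, fov_deg=60.0, sonar_returns=returns))
```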
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 49-53 are rejected under 35 U.S.C. 103 as being unpatentable over Broaddus et al. (US 2017/0336220 A1, hereinafter Broaddus) in view of Stokes (WO 2017/131838 A2, hereinafter Stokes).
Referring to claim 49, Broaddus discloses a method for creating and displaying a continuous ([0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500.) and real-time augmented reality scene corresponding to the current position and orientation of a user ([0037]; The position and orientation of the position and orientation determination device 100 can be used in an AR system by knowing precisely where the AR system is in real time and with low latency to project a virtual world into a display of the AR system.), comprising:
continuously measuring physical characteristics of the user ([0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500.) and physical characteristics of the user's environment in real time with sensors (101) arranged on a head-worn device (10) ([0019]; a head-mounted device, a helmet, a watch, a visor, and eyeglasses…, [0017], Fig. 1; The present disclosure provides techniques for VIN. The absolute position or relative position of a VIN device in space can be tracked using sensors and a VIN module in the device. VIN is a method of estimating accurate position, velocity, and orientation (also referred to as state information) by combining visual cues with inertial information. In some embodiments, the device comprises an inertial measurement unit (IMU) sensor, a camera, a radio-based sensor, and a processor…, and [0045]; For example, the state estimation module 208 fuses the sensor information to track the full state (e.g., position, orientation, velocity, sensor biases, etc.) of the position and orientation determination device 100.);
processing the measurement data of the sensors (101) (Fig. 1; 104, 106) with a computing and communication unit (103) integrated into the head-worn device (10) ([0022]; FIG. 1 is a block diagram illustrating a position and orientation determination device 100, in accordance with some example embodiments. The position and orientation determination device 100 comprises an image capture device 102 (e.g., camera), an inertial sensor 104 (e.g., gyroscope, accelerometer), a radio-based sensor 106 (e.g., WiFi, GPS, Bluetooth), a processor 108, and a memory 110.);
fusing the processed data of the sensors (101) (Fig. 1; 104, 106) by means of the integrated computing and communication unit (103) ([0026-0027], Fig.1-2; the processor 108 includes a visual inertial navigation (VIN) module 112 (stored in the memory 110 or implemented as part of the hardware of the processor 108, and executable by the processor 108).) to create an augmented reality scene ([0045], Fig. 3; The feature matching module 204 with VIN 112 uses the IMU sensor data (e.g., gyroscope and accelerometer data) to match features between adjacent image frames (e.g., inlier matches). The outlier detection module 206 detects outliers as previously described with respect to FIG. 2. The state estimation module 208 uses the radio-based signal data to perform an extended Kalman filter on the video frames to generate 6DOF pose data. For example, the state estimation module 208 fuses the sensor information to track the full state (e.g., position, orientation, velocity, sensor biases, etc.) of the position and orientation determination device 100…. and ([0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.);
storing the fused sensor and reference image pairs according to spatial direction of the user in a data storage unit (30) ([0065]; The AR content mapping module 608 maps the location of the AR content to be displayed in the display 502 based on the dynamic state (e.g., spatial state of the display device 500). As such, the AR content may be accurately displayed based on a relative position of the display device 500 in space or in a physical environment. When the user moves, the inertial position of the display device 500 is tracked and the display of the AR content is adjusted based on the new inertial position. For example, the user may view a virtual object visually perceived to be on a physical table. The position, location, and display of the virtual object is updated in the display 502 as the user moves around (e.g., away from, closer to, around) the physical table.);
matching the stored fused sensor data of a given spatial direction of the user in the integrated computing and communication unit (103) to reference data in a reference data database (20) (Fig. 5; 504) based on the spatial direction of the user ([0017,0099]; compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data…, [0060-0061]; In some example embodiments, an augmented reality (AR) application 508 is stored in the memory 504 or implemented as part of the hardware of the processor 506, and is executable by the processor 506. The AR application 508 provides AR content based on identified objects in a physical environment and a spatial state of the display device 500…, and [0071]; At operation 802, the VIN module 112 accesses IMU data from the inertial sensor 104. At operation 804, the VIN module 112 computes a first estimated spatial state of the position and orientation determination device 100 based on the IMU data. In some example embodiments, operation 804 may be implemented using the state estimation module 208. At operation 806, the VIN module 112 accesses video data from the image capture device 102. At operation 808, the VIN module 112 adjusts the first estimated spatial state of the position and orientation determination device 100 based on the video data to generate a second estimated spatial state. In some example embodiments, operation 808 may be implemented using the feature detection module 202 and the feature matching module 204. At operation 810, the VIN module 112 accesses radio-based sensor data (e.g., GPS data, Bluetooth data, WiFi data, UWB data) from the radio-based sensor 106. At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208);
using two-dimensional camera images of different spatial directions of the user as the reference data ([0017]; The radio-based sensor generates radio-based sensor data based on an absolute reference frame relative to the device. The processor is configured to synchronize the plurality of video frames with the IMU data, compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data..., [0023]; the image capture device 102 comprises a built-in camera or camcorder with which the position and orientation determination device 100 can capture image/video data of visual content in a real-world environment (e.g., a real-world physical object). The image data may comprise one or more still images or video frames…, [0037]; The calibration process consists of observing a known 2D or 3D pattern in the world in all the cameras on the position and orientation determination device 100 and IMU data over several frames…, and [0060-0061]; In some example embodiments, an augmented reality (AR) application 508 is stored in the memory 504 or implemented as part of the hardware of the processor 506, and is executable by the processor 506. The AR application 508 provides AR content based on identified objects in a physical environment and a spatial state of the display device 500.); and
continuously modifying the generated augmented reality scene so that it corresponds to the current position and a current spatial direction of the user ([0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.);
transmitting the matched fused sensor ([0017,0099]; compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data) and reference image ([0071], Fig. 8; At operation 806, the VIN module 112 accesses video data from the image capture device 102…, and [0017, 0023, 0099]; The image data may comprise one or more still images or video frames. Thus, the images or video frames pair with the IMU data via synchronization) pairs to the head-worn device using the integrated computing and communication unit (103) ([0071]; At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208… and [0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.);
displaying the further modified augmented reality scene created based on the matched fused sensor and reference image ([0017, 0023, 0099]; The image data may comprise one or more still images or video frames. Thus, the images or video frames pair with the IMU data via synchronization) pairs on a display of the head-worn device using the integrated computing and communication unit (103) ([0071]; At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208… and [0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.); and
modifying the augmented reality scene based at least on a change in the spatial direction of the user so that it corresponds to the current position and the spatial direction of the user ([0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.).
However, Broaddus does not explicitly disclose using sonar images.
In an analogous art, Stokes discloses using sonar images (Stokes- [00120]; As illustrated in Fig. 4, wearable portable imaging device 420 may include one or more imaging modules 423, which may be implemented as visible spectrum and/or infrared imaging modules configured to provide monocular (e.g., copied to both displays 426) and/or stereoscopic image data depending on the number and arrangement of imaging modules and the type of image processing applied to image data provided by imaging modules 423. In addition, an OPS (e.g., OPS 230 of Fig. 2B) may be integrated with any of imaging modules 423, displays 426, and/or frame 440 and be configured to provide a position and/or orientation of one or more of the features to facilitate determining FOVs for displays 426. In some embodiments, portable imaging device 420 may be configured to determine portion 430 of the FOV of display 426 and use an OPS and actuator in an associated transducer assembly (e.g., actuator 116 coupled to transducer assembly 112 of sonar system 110 in Fig. 1B) to ensonify at least a subset of portion 430 substantially in real time as a user adjusts a position or orientation of wearable portable imaging device 420 by, for example, moving the user's head. Sonar data provided by the associated transducer assembly may be rendered using position data and/or orientation data provided by the OPS to correlate the sonar data with portion 430, for example, and/or to facilitate other rendering processing described herein.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Stokes to the system of Broaddus in order to provide an intuitive, meaningful, and relatively full representation of the environment, particularly in the context of aiding in the navigation of a mobile structure.
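As an illustrative composite of the claim 49 loop as mapped above (fuse sensor data, re-match direction-keyed reference data when the user's spatial direction changes, and keep the displayed scene current), the following sketch may help; the 10-degree direction bucketing and all names are hypothetical, drawn from neither the claim nor the references:

```python
# Hypothetical composite of the claim 49 loop: fuse sensor data, re-match
# direction-keyed reference data on a direction change, refresh the scene.
import itertools
import random
from typing import Callable, Dict, Tuple

Direction = int  # heading bucketed to 10-degree steps (an assumed scheme)


def ar_loop(read_sensors: Callable[[], Tuple[Direction, dict]],
            reference_db: Dict[Direction, bytes],
            render: Callable[[dict, bytes], None],
            frames: int) -> None:
    last_direction, reference = None, b""
    for _ in range(frames):
        direction, fused = read_sensors()                 # measure + fuse
        if direction != last_direction:                   # direction changed:
            reference = reference_db.get(direction, b"")  # re-match reference
            last_direction = direction
        render(fused, reference)                          # modify/display scene


if __name__ == "__main__":
    db = {d: f"ref@{d}".encode() for d in range(0, 360, 10)}
    headings = itertools.cycle([0, 0, 10, 20, 20])

    def read():
        return next(headings), {"depth_m": round(random.uniform(1, 5), 2)}

    ar_loop(read, db, lambda fused, ref: print(ref, fused), frames=5)
```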
Referring to claim 50, Broaddus discloses wherein during the processing of the measurement data of the sensors, the measurement data detected by the sensors (101) is also fused with data ([0017]; IMU sensor data as the “data”) of a further installed, static sensor assembly (40) (i.e., built-in camera) and/or a further installed, movable sensor assembly (50) ([0017]; VIN is a method of estimating accurate position, velocity, and orientation (also referred to as state information) by combining visual cues with inertial information. In some embodiments, the device comprises an inertial measurement unit (IMU) sensor, a camera, a radio-based sensor, and a processor. The IMU sensor generates IMU data of the device. The camera generates a plurality of video frames. The radio-based sensor generates radio-based sensor data based on an absolute reference frame relative to the device. The processor is configured to synchronize the plurality of video frames with the IMU data, compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data. Thus, the limitation “synchronized plurality of video frames with the IMU data” meets the limitation “fused with data of a further installed, static sensor assembly (40)… ”…, and [0045]; FIG. 3 is a block diagram illustrating an operation of the VIN module 112, in accordance with some example embodiments. The feature detection module 202 receives video data (e.g., video frames) from the image capture device 102. As previously described with respect to FIG. 2, the feature detection module 202 detects and tracks features in the video frames. The feature matching module 204 uses the IMU sensor data (e.g., gyroscope and accelerometer data) to match features between adjacent image frames (e.g., inlier matches). The outlier detection module 206 detects outliers as previously described with respect to FIG. 2. The state estimation module 208 uses the radio-based signal data to perform an extended Kalman filter on the video frames to generate 6DOF pose data. For example, the state estimation module 208 fuses the sensor information to track the full state (e.g., position, orientation, velocity, sensor biases, etc.) of the position and orientation determination device 100.).
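Broaddus's state estimation module 208 is described as fusing sensors with an extended Kalman filter over the full 6DOF state. The scalar, linear sketch below only illustrates the predict/update structure of such a filter; the reduction to one dimension and the noise variances are deliberate simplifications, not the reference's implementation:

```python
# Scalar, linear stand-in for the extended Kalman filter that Broaddus's
# state estimation module 208 runs over the full 6DOF state; q and r are
# assumed noise variances, used only to show the predict/update structure.
from typing import Tuple


def kalman_step(x: float, p: float, u: float, z: float,
                q: float = 0.01, r: float = 0.25) -> Tuple[float, float]:
    # Predict: propagate the state with an IMU-style control input u.
    x_pred = x + u
    p_pred = p + q
    # Update: correct the prediction with a vision/radio measurement z.
    k = p_pred / (p_pred + r)          # Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred


if __name__ == "__main__":
    x, p = 0.0, 1.0
    for u, z in [(0.1, 0.12), (0.1, 0.21), (0.1, 0.33)]:
        x, p = kalman_step(x, p, u, z)
        print(f"fused estimate: {x:.3f} (variance {p:.3f})")
```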
Referring to claim 51, Broaddus discloses wherein during the processing of the measurement data of the sensors, the measurement data detected by the sensors (101) is also fused with two-dimensional image data detected by a camera (1015) ([0017]; The radio-based sensor generates radio-based sensor data based on an absolute reference frame relative to the device. The processor is configured to synchronize the plurality of video frames with the IMU data, compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data..., [0023]; the image capture device 102 comprises a built-in camera or camcorder with which the position and orientation determination device 100 can capture image/video data of visual content in a real-world environment (e.g., a real-world physical object). The image data may comprise one or more still images or video frames…, [0037]; The calibration process consists of observing a known 2D or 3D pattern in the world in all the cameras on the position and orientation determination device 100 and IMU data over several frames…, and [0060-0061]; In some example embodiments, an augmented reality (AR) application 508 is stored in the memory 504 or implemented as part of the hardware of the processor 506, and is executable by the processor 506. The AR application 508 provides AR content based on identified objects in a physical environment and a spatial state of the display device 500.).
Referring to claim 52, Broaddus discloses further comprising combining a given spatially directional, fused sensor data with different types of the reference data from a plurality of reference data databases (20) via the integrated computing and communication unit (103) and matching with the reference data in the reference data database (20) based on the spatial direction of the user (Broaddus- [0071]; At operation 802, the VIN module 112 accesses IMU data from the inertial sensor 104. At operation 804, the VIN module 112 computes a first estimated spatial state of the position and orientation determination device 100 based on the IMU data. In some example embodiments, operation 804 may be implemented using the state estimation module 208. At operation 806, the VIN module 112 accesses video data from the image capture device 102. At operation 808, the VIN module 112 adjusts the first estimated spatial state of the position and orientation determination device 100 based on the video data to generate a second estimated spatial state. In some example embodiments, operation 808 may be implemented using the feature detection module 202 and the feature matching module 204. At operation 810, the VIN module 112 accesses radio-based sensor data (e.g., GPS data, Bluetooth data, WiFi data, UWB data) from the radio-based sensor 106. At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208… and [0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.).
Referring to claim 53, Broaddus discloses further comprising implementing the step of matching the fused sensor data with the reference data in the reference data database (20) by matching a sensor data and the reference data in the spatial direction of the user or by transforming a classifier trained with matched sensor image and reference image pairs (Broaddus- [0071]; At operation 802, the VIN module 112 accesses IMU data from the inertial sensor 104. At operation 804, the VIN module 112 computes a first estimated spatial state of the position and orientation determination device 100 based on the IMU data. In some example embodiments, operation 804 may be implemented using the state estimation module 208. At operation 806, the VIN module 112 accesses video data from the image capture device 102. At operation 808, the VIN module 112 adjusts the first estimated spatial state of the position and orientation determination device 100 based on the video data to generate a second estimated spatial state. In some example embodiments, operation 808 may be implemented using the feature detection module 202 and the feature matching module 204. At operation 810, the VIN module 112 accesses radio-based sensor data (e.g., GPS data, Bluetooth data, WiFi data, UWB data) from the radio-based sensor 106. At operation 812, the VIN module 112 triangulates the location or spatial state of the position and orientation determination device 100 based on the radio-based sensor data. At operation 814, the VIN module 112 updates the second estimated spatial state of the position and orientation determination device 100 based on the triangulated location. In some embodiments, the operation 814 may be implemented using the state estimation module 208… and [0058]; The position and orientation determination device 100 provides a spatial state of the display device 500 over time. The spatial state includes, for example, a geographic position, orientation, velocity, and altitude of the display device 500. The spatial state of the display device 500 can then be used to generate and display AR content in the display 502. The location of the AR content within the display 502 may also be adjusted based on the dynamic state (e.g., position and orientation) of the display device 500 in space over time relative to stationary objects sensed by the image capture device(s) 102.). Thus, Broaddus meets the limitation “implementing the step of matching the fused sensor data with the reference data in the reference data database (20) by matching a sensor data and the reference data in the spatial direction of the user”.
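The claim recites two alternatives for the matching step. A minimal sketch of the first alternative follows, assuming a hypothetical direction-keyed reference store and a nearest-angle lookup; the second alternative (a classifier trained on matched sensor image / reference image pairs) is noted in the leading comment but not sketched:

```python
# Hypothetical nearest-direction lookup for the first claimed alternative;
# the direction keys, byte-valued reference entries, and angular metric are
# all assumptions, not taken from the cited references. The second claimed
# alternative would instead apply a classifier trained on matched sensor
# image / reference image pairs to score candidate references.
from typing import Dict, Tuple


def angular_distance(a: float, b: float) -> float:
    """Smallest absolute angle, in degrees, between two headings."""
    return abs((a - b + 180.0) % 360.0 - 180.0)


def match_by_direction(fused_direction_deg: float,
                       reference_db: Dict[int, bytes]) -> Tuple[int, bytes]:
    """Return the reference entry whose direction key is angularly closest."""
    key = min(reference_db, key=lambda d: angular_distance(fused_direction_deg, d))
    return key, reference_db[key]


if __name__ == "__main__":
    db = {d: f"ref@{d}".encode() for d in range(0, 360, 45)}
    print(match_by_direction(172.0, db))  # -> (180, b'ref@180')
```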
Allowable Subject Matter
Claims 43-48 are allowed.
The following is a statement of reasons for the indication of allowable subject matter: Claims 43-48 are allowed since certain key features of the claimed invention are not taught or fairly suggested by the prior art.
Referring to claim 43, the prior art of record, however, does not teach, disclose or suggest the claimed limitations of (in combination with all other limitations in the claim), “A system (1) for generating and displaying a continuous and real-time augmented reality scene corresponding to a user's current position and orientation, the system (1) consisting of:
a head-worn device (10) for a user, comprising
a mounting element (100) for stable attachment to the head, one or more sensors (101) for sensing physical characteristics of the user and the user's environment and outputting data corresponding thereto, the one or more sensors (101) selected from the group consisting of:
a sonar (1010), an inclination sensor (1011) for determining spatial direction, a position sensor (1012) for determining absolute position of the user, and at least one an environmental sensor (1013) for sensing external environmental characteristics, wherein the at least one environmental sensor (1013) is selected from the group consisting of: a flow sensor, an air flow sensor, a temperature sensor, a smoke sensor, a motion sensor, a presence sensor, an integrated computing and communication unit (103), and a display (102) for displaying the augmented reality scene;
a reference data database (20) containing reference data, wherein the reference data is formed by sonar images and/or two-dimensional camera images of different spatial directions, wherein the data sensed by the one or more sensors (101) in one spatial direction can be matched with the reference data in the one spatial direction;
a data storage unit (30) configured for storing at least image reference image pairs provided by the one or more sensors (101) matched in the spatial direction, the data storage unit (80) having a sub-unit for storing processed sensor data and/or the reference data;
a fixed, static sensor assembly (40) and/or a fixed, movable sensor assembly (50) in communicative connection with the integrated computing and communication unit (103) for sensing physical characteristics of the environment, wherein the sensor assembly (40, 50) comprises a plurality of sensors whose fields of view are partially overlapping, and the reference data are data in a given spatial direction, continuously or predetermined, sensed by the installed, static sensor assembly (40) and/or the installed, movable sensor assembly (50), and/or data in a given spatial direction, continuously determined, sensed by the plurality of sensors; and
an external processing unit (60) which comprises a display unit (601) displaying an augmented reality scene and/or the absolute position of the user, wherein the display unit (601) is in communicative connection with the integrated computing and communication unit (103) and/or an external computing and communication unit (600);
wherein the integrated computing and a communication unit (103) is in communicative connection with the one or more sensors (101), the reference data database (20), the data storage unit (80) and the display (102),
wherein the sonar (1010) is arranged in a center of the head-worn device (10), and
wherein the system further comprises a plurality of reference data databases (20), wherein each reference data database of the plurality of reference data databases contains different types of the reference data, wherein the different types of the reference data can be combined with each other, based on the spatial direction, to correspond to the data sensed by the one or more sensors (101)”.
Claims 44-48 are allowable based upon their dependency on independent claim 43.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT D AU whose telephone number is (571)272-5948. The examiner can normally be reached M-F, generally 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT D AU/Examiner, Art Unit 2624
/MATTHEW A EASON/Supervisory Patent Examiner, Art Unit 2624