DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of Application No. 17/478,771 (now U.S. Patent No. 12,039,674 B2). Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter and limitations, as explained below.
Claims 1 and 5 are determined to be obvious in light of claims 1 and 4 of 17/478,771 (now U.S. Patent No. 12,039,674 B2) for having similar limitations, as shown below.
Instant application claims 1 and 5:
1. A method, comprising: controlling, during a first period of time, an output of an electronic device using a first simultaneous location and mapping (SLAM) system of the electronic device; detecting, with the electronic device, a change in a motion state of the electronic device; responsive to detecting the change in the motion state, temporarily operating both the first SLAM system and a second, different SLAM system of the electronic device while comparing outputs of the first and second SLAM systems; and responsive to determining that the outputs of the first and second SLAM systems have been in agreement for at least a predetermined amount of time, switching to control, during a second period of time, the output of the electronic device using the second SLAM system.
5. The method of claim 1, further comprising: operating the electronic device based on inertial data while the electronic device is disposed on a moveable platform during various motion states of the moveable platform, in part by modifying the usage of the inertial data according to a current motion state of the moveable platform.

17/478,771 claims 1 and 4:
1. A method, comprising: obtaining, by an electronic device, inertial data from an inertial sensor of the electronic device; and operating the electronic device based on the inertial data while the electronic device is disposed on a moveable platform during various motion states of the moveable platform, in part by modifying the usage of the inertial data according to a current motion state of the moveable platform, wherein the operating comprises: controlling, during a first period of time, the electronic device using a visual-inertial simultaneous location and mapping (SLAM) system; detecting, with the electronic device, a change in a motion state of the movable platform; and switching to controlling, during a second period of time and responsive to detecting the change in the motion state of the moveable platform, the electronic device using a visual-only SLAM system.
4. The method of claim 2, wherein the operating further comprises, responsive to detecting the discrepancy and prior to the switching, temporarily operating both the visual-inertial SLAM system and the visual-only SLAM system while comparing outputs of the visual-only SLAM system and the visual-inertial SLAM system.
Claims 2 and 3 are determined to be obvious in light of claim 2 of 17/478,771 (now U.S. Patent No. 12,039,674 B2) for having similar limitations, as shown below.
Instant application claims 2 and 3:
2. The method of claim 1, wherein first SLAM system comprises a visual-inertial SLAM system.
3. The method of claim 2, wherein detecting the change in the motion state comprises detecting a discrepancy between visual data of the visual-inertial SLAM system and inertial data of the visual-inertial SLAM system.

17/478,771 claim 2:
2. The method of claim 1, wherein detecting the change in the motion state comprises detecting a discrepancy between visual data of the visual-inertial SLAM system and the inertial data of the visual-inertial SLAM system.
Claim 4 is determined to be obvious in light of claim 3 of 17/478,771 (now U.S. Patent No. 12,039,674 B2) for having similar limitations, as shown below.
Instant application claim 4:
4. The method of claim 3, wherein the visual data comprises an image-based rotation estimate for the electronic device, and the inertial data comprises a gyroscope-based rotation estimate for the electronic device.

17/478,771 claim 3:
3. The method of claim 2, wherein the visual data comprises an image-based rotation estimate for the electronic device, and the inertial data comprises a gyroscope-based rotation estimate for the electronic device.
Claim 6 is determined to be obvious in light of claim 7 of 17/478,771 (now U.S. Patent No. 12,039,674 B2) for having similar limitations, as shown below.
Instant application claim 6:
6. The method of claim 5, wherein the operating further comprises: displaying virtual content anchored to the moveable platform on which the electronic device is disposed.

17/478,771 claim 7:
7. The method of claim 1, wherein the operating comprises displaying virtual content anchored to the moveable platform on which the electronic device is disposed.
Claim 7 is determined to be obvious in light of claim 8 of 17/478,771 (now U.S. Patent No. 12,039,674 B2) for having similar limitations, as shown below.
Instant application claim 7:
7. The method of claim 5, wherein operating the electronic device based on the inertial data while the electronic device is disposed on the moveable platform during various motion states of the moveable platform comprises operating the electronic device based on the inertial data while the electronic device is worn or carried by a user that is disposed on the moveable platform during various motion states of the moveable platform.

17/478,771 claim 8:
8. The method of claim 1, wherein operating the electronic device based on the inertial data while the electronic device is disposed on the moveable platform during various motion states of the moveable platform comprises operating the electronic device based on the inertial data while the electronic device is worn or carried by a user that is disposed on the moveable platform during various motion states of the moveable platform.
Claims 8-13 recite limitations similar in scope to the limitations of claims 1-6, but as a device, and are determined to be obvious in light of claims 9-15 of 17/478,771 (now U.S. Patent No. 12,039,674 B2), which recite similar limitations, for the same reasons described above for claims 1-6.
Claims 14-20 recite limitations similar in scope to the limitations of claims 1-6, but as a non-transitory computer-readable medium, and are determined to be obvious in light of claims 16-17, along with method claims 3-6, of 17/478,771 (now U.S. Patent No. 12,039,674 B2), which recite similar limitations, for the same reasons described above for claims 1-6.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pirchheim et al. (US 20150262029 A1, hereinafter Pirchheim), in view of Comer et al. (US 20190383937 A1, hereinafter Comer).
Regarding Claim 8, Pirchheim teaches a device comprising: a memory and at least one processor configured to (Pirchheim, Paragraph [0039], "The system may be a device 100, which may include one or more general purpose processors 161, Image Processing module 171, Tracking module 181, Mapping module 180 ... and a memory 164"): control, during a first period of time, an output of the device using a first simultaneous location and mapping (SLAM) system of the device (Pirchheim, Paragraphs [0030], [0033], "Unconstrained SLAM (USLAM) systems handle both general and rotation only camera motion. Depending on the current camera motion, these SLAM systems apply either structure-from-motion or panoramic tracking and mapping techniques"; [0033], "According to certain aspects of the disclosure, a device while operating in panoramic SLAM mode can experience translational motion"; [0048], "The Tracking module 181 processes the video stream from the camera at frame-rate and tracks both general and rotation-only camera motion with respect to the active 3D map"; [0002], "The subject matter disclosed herein relates generally to location detection and specifically to Simultaneous Localization and Mapping (SLAM)"; it is noted that the device controls output (tracking of device position/orientation read on as output) using the panoramic SLAM system (first SLAM system) during a period of time);
detect, with the device, a change in a motion state of the device (Pirchheim, Paragraph [0006], "Aspects of this disclosure provide techniques for robustly detecting transitions between general and rotation only camera motion and the corresponding panoramic/3D tracking and mapping modes"; [0037], "techniques are described for detecting transitions from rotation-only to general motion while the camera is tracked from a panorama map using a 3DOF rotation motion model"); responsive to detection of the change in the motion state, [[ temporarily operate both the first SLAM system and a second, different SLAM system of the device while comparing outputs of the first and second SLAM systems; and responsive to a determination that the outputs of the first and second SLAM systems have been in agreement for at least a predetermined amount of time, switch to control, during a second period of time, the output of the device using the second SLAM system ]] (Pirchheim, Paragraph [0054], "In certain aspects, translational motion (indicated by the parallax angle) above a certain threshold may be used as an indicator to switch from panoramic SLAM mapping to 6DOF SLAM mapping"; [0078], "The SLAM Switching module 910 may switch the Mapping module 180 from Panoramic SLAM module 175 to 6DOF SLAM module 173").
But Pirchheim does not explicitly disclose temporarily operate both the first SLAM system and a second, different SLAM system of the device while comparing outputs of the first and second SLAM systems; and responsive to a determination that the outputs of the first and second SLAM systems have been in agreement for at least a predetermined amount of time, switch to control, during a second period of time, the output of the device using the second SLAM system.
However, Comer teaches responsive to detection of the change in the motion state, temporarily operate both the first SLAM system and a second, different SLAM system of the device while comparing outputs of the first and second SLAM systems (Comer, Paragraph [0006], "identify a plurality of SLAM devices available to an xR application, where each of the plurality of SLAM devices implements a corresponding one of a plurality of SLAM methods; designate a primary SLAM method among the plurality of SLAM methods; and use the primary SLAM method to execute the xR application"; [0056], "HMD 102 may start monitoring the new positional tracking source, check the signal-to-noise ratio (SNR), and compare the accuracy of the new SLAM method"; [0059], "In some cases, every available SLAM method maps environment 101, but only the primary (or currently active) SLAM method runs at full capacity, framerate, or bandwidth. This makes the latency of switching methods minimal, because secondary (or currently non-active) SLAM method(s) may run in the background"); responsive to a determination that the outputs of the first and second SLAM systems have been in agreement for at least a predetermined amount of time, switch to control, during a second period of time, the output of the device using the second SLAM system (Comer, Paragraph [0056], "If the SNR of the new SLAM method is greater than a threshold, and the new SLAM method is more accurate than the current one, then the SLAM method may be switched using a soft handover technique"; [0008], "in response to a determination that a second SNR associated with the secondary SLAM method is greater than a first SNR associated with the primary SLAM method, and that the first and second SNRs are greater than a threshold value, re-designate the secondary SLAM method as primary, and re-designate the primary SLAM method as secondary"; it is noted that the system operates both SLAM methods concurrently (primary at full, secondary at reduced bandwidth, read on as temporarily operating both), compares their outputs (SNR and accuracy, read on as comparing outputs), and switches when the secondary method proves more accurate over time (SNR threshold and accuracy determination, read on as outputs in agreement for a predetermined amount of time)).
Pirchheim and Comer are analogous art since both deal with SLAM systems for mobile devices with visual and inertial sensors, managing multiple SLAM modes for different camera motion scenarios and adapting tracking and mapping based on sensor data and contextual information. Pirchheim provided a way of detecting motion state changes (from rotation-only to general motion) by comparing visual and inertial sensor data to calculate a parallax angle. Comer provided a way of managing multiple SLAM methods by running them in parallel (one active, others in background), comparing their performance (SNR/accuracy), and switching between them using a soft handover to ensure stability. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the concurrent operation and verified switching methodology taught by Comer into the invention of Pirchheim such that, when the system detects a motion state change from rotation-only to general motion, it temporarily operates both the panoramic SLAM and the newly initialized 6DOF SLAM to verify the new state (checking agreement/accuracy over a predetermined time) before fully switching control, thereby ensuring robust and reliable SLAM transitions that prevent tracking failures and improve localization accuracy during motion state changes. The motivation is to improve the robustness of SLAM mode switching and prevent loss of tracking during transitions, as discussed by Comer in Paragraph [0056] ("Switching SLAM algorithms based on context can improve the user experience").
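For illustration only (no such code appears in either reference), the combined teaching — run the current and candidate SLAM systems in parallel after a motion-state change and switch only once their outputs have agreed for a predetermined amount of time — can be sketched as follows; all class names, tolerances, and step counts are hypothetical:

```python
class SlamSwitcher:
    """Hypothetical sketch of the Pirchheim/Comer combination: on a detected
    motion-state change, temporarily run the active and candidate SLAM
    systems in parallel, compare their outputs, and hand over control only
    after they have agreed for a predetermined number of consecutive steps."""

    def __init__(self, agreement_tol=0.05, required_steps=3):
        self.agreement_tol = agreement_tol    # max output difference counted as "agreement"
        self.required_steps = required_steps  # "predetermined amount of time", in steps
        self.agree_count = 0
        self.active = "first_slam"            # e.g., panoramic / visual-inertial SLAM
        self.candidate = None                 # e.g., 6DOF / visual-only SLAM

    def on_motion_state_change(self, candidate):
        # Begin temporarily operating both systems.
        self.candidate = candidate
        self.agree_count = 0

    def step(self, first_output, second_output):
        # While both systems run, compare their outputs each step.
        if self.candidate is None:
            return self.active
        if abs(first_output - second_output) <= self.agreement_tol:
            self.agree_count += 1
        else:
            self.agree_count = 0  # any disagreement resets the timer
        if self.agree_count >= self.required_steps:
            # Outputs agreed long enough: switch control to the second system.
            self.active, self.candidate = self.candidate, None
        return self.active

sw = SlamSwitcher()
sw.on_motion_state_change("second_slam")
history = [sw.step(1.00, 1.30), sw.step(1.00, 1.02),
           sw.step(1.00, 1.01), sw.step(1.00, 1.00)]
# history -> ["first_slam", "first_slam", "first_slam", "second_slam"]
```

The first disagreeing step keeps control with the first system; only after the required run of agreeing steps does control hand over, mirroring Comer's soft handover.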
Regarding Claim 9, the combination of Pirchheim and Comer teaches the invention in Claim 8.
The combination further teaches wherein first SLAM system comprises a visual-inertial SLAM system (Pirchheim, Paragraph [0041], "the device 100 is a mobile/portable platform. The device 100 can include a means for capturing an image, such as camera 114 and may also include motion sensors 111, such as accelerometers, gyroscopes, electronic compass, or other similar motion sensing elements."; [0006], "In contrast to existing vision-only methods such as model selection algorithms, aspects of this disclosure may use one or more sensors, including inertial (gyroscope, accelerometer), magnetic (compass), and vision (camera) sensors"; [0041], "Mapping module 180 extends and refines the Global SLAM map based on 6DOF and panorama keyframes selected by the Tracking module 181"; it is noted that the SLAM system uses both visual data (from camera 114) and inertial data (from motion sensors 111 including gyroscopes and accelerometers), which constitutes a visual-inertial SLAM system).
Regarding Claim 10, the combination of Pirchheim and Comer teaches the invention in Claim 9.
The combination further teaches wherein detecting the change in the motion state comprises detecting a discrepancy between visual data of the visual-inertial SLAM system and inertial data of the visual-inertial SLAM system (Pirchheim, Paragraph [0008], "determining a parallax angle for the device, wherein the parallax angle is determined by comparing the vision-based rotational motion angle and the sensor-based rotational motion angle"; [0006], "In contrast to existing vision-only methods such as model selection algorithms, aspects of this disclosure may use one or more sensors, including inertial (gyroscope, accelerometer), magnetic (compass), and vision (camera) sensors"; [0054], "The detection of the translational motion between two camera views may be performed by comparing the rotational angle acquired using image processing techniques and sensors ... The parallax angle ... can be calculated by comparing the vision-based rotational motion angle with the sensor-based rotational motion angle").
Regarding Claim 11, the combination of Pirchheim and Comer teaches the invention in Claim 10.
The combination further teaches wherein the visual data comprises an image-based rotation estimate for the device, and the inertial data comprises a gyroscope-based rotation estimate for the device (Pirchheim, Paragraph [0008], "determining a vision-based rotational motion angle for the device, wherein the vision-based rotational motion angle is determined by performing image processing on a plurality of keyframes, determining a sensor-based rotational motion angle for the device ... motion sensors may include gyroscopes"; [0081], "wherein the sensor-based rotational motion angle is determined using one or more motion sensors"; [0054], "The rotational angle detected using image processing, also referred to as vision-based rotational motion angle ... the rotational angle from the sensors, also referred to as sensor-based rotational motion angle").
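For illustration only (not part of the record), the parallax-angle comparison Pirchheim describes — a discrepancy between the image-based rotation estimate and the gyroscope-based rotation estimate indicating translational (general) motion — can be sketched as follows; the function name and threshold value are hypothetical:

```python
def parallax_discrepancy(vision_angle_deg, gyro_angle_deg, threshold_deg=2.0):
    """Hypothetical sketch of the comparison quoted from Pirchheim [0054]:
    the parallax angle is the difference between the vision-based rotational
    motion angle (from image processing on keyframes) and the sensor-based
    rotational motion angle (from the gyroscope). A parallax angle above a
    threshold indicates translational (general) motion; the threshold here
    is illustrative, not taken from the reference."""
    parallax_deg = abs(vision_angle_deg - gyro_angle_deg)
    return parallax_deg > threshold_deg

# Pure rotation: both estimates agree, so no motion-state change is flagged.
rotation_only = parallax_discrepancy(10.0, 10.3)   # False
# Translation introduces parallax: the estimates diverge past the threshold.
general_motion = parallax_discrepancy(10.0, 14.5)  # True
```

Under pure rotation the camera and gyroscope see the same angular change, so the discrepancy stays near zero; translation makes the image-based estimate drift away from the inertial one.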
Regarding Claim 12, the combination of Pirchheim and Comer teaches the invention in Claim 8.
The combination further teaches wherein the at least one processor is further configured to operate the device based on inertial data while the device is disposed on a moveable platform during various motion states of the moveable platform (Pirchheim, Paragraph [0006], "In contrast to existing vision-only methods such as model selection algorithms, aspects of this disclosure may use one or more sensors, including inertial (gyroscope, accelerometer), magnetic (compass), and vision (camera) sensors"; [0053], "Techniques are presented for monocular visual simultaneous localization and mapping (SLAM) based on detecting a translational motion in the movement of the camera using at least one motion sensor"; [0003], "The visual SLAM system may operate in different modes for tracking the device and building the map based on the movement of the device ... while the device is experiencing general motion ... Similarly, when the device is only rotating"; [0041], "In one embodiment, the device 100 is a mobile/portable platform ... may also include motion sensors 111, such as accelerometers, gyroscopes"), in part by modifying the usage of the inertial data according to a current motion state of the moveable platform (Pirchheim, Paragraph [0054], "In certain aspects, translational motion (indicated by the parallax angle) above a certain threshold may be used as an indicator to switch from panoramic SLAM mapping to 6DOF SLAM mapping"; [0036], "The USLAM system supports transition from a panorama map to another 3D map ... followed by the initialization of a new 3D map").
Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 8, but as a method, and the combination of Pirchheim and Comer teaches all the limitations as in Claim 8. Therefore, it is rejected under the same rationale.
Regarding Claim 2, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.
Regarding Claim 3, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.
Regarding Claim 4, it recites limitations similar in scope to the limitations of Claim 11 and therefore is rejected under the same rationale.
Regarding Claim 5, it recites limitations similar in scope to the limitations of Claim 12 and therefore is rejected under the same rationale.
Regarding Claim 7, the combination of Pirchheim and Comer teaches the invention in Claim 1.
The combination further teaches wherein operating the electronic device based on the inertial data while the electronic device is disposed on the moveable platform during various motion states of the moveable platform comprises operating the electronic device based on the inertial data while the electronic device is worn or carried by a user that is disposed on the moveable platform during various motion states of the moveable platform (Pirchheim, Paragraph [0040], "The device 100 may be ... wearable device (e.g., eyeglasses, watch, head wear, head mounted device (HMD) or similar bodily attached device)"; [0006], "In contrast to existing vision-only methods such as model selection algorithms, aspects of this disclosure may use one or more sensors, including inertial (gyroscope, accelerometer), magnetic (compass), and vision (camera) sensors"; [0041], "In one embodiment, the device 100 is a mobile/portable platform ... may also include motion sensors 111, such as accelerometers, gyroscopes").
But Pirchheim does not explicitly disclose while the electronic device is worn or carried by a user that is disposed on the moveable platform.
However, Comer teaches operating the electronic device based on the inertial data while the electronic device is worn or carried by a user (Comer, Paragraph [0033], "In various embodiments, user 101 may wear HMD 102 around their heads and over their eyes, during execution of an xR application"; [0031], "The propagation component may receive angular velocity and accelerometer data from an Inertial Measurement Unit (IMU) built into the HMD"; [0034], "HMD 102 transmits information to host IHS 103 regarding the state of user 101 (e.g., physical position, head orientation ... )").
Pirchheim and Comer are analogous since both of them are dealing with SLAM systems for mobile and wearable devices. Pirchheim provided a way of using inertial sensors to assist visual SLAM on a mobile platform. Comer provided a specific implementation of SLAM on a head-mounted display (HMD) worn by a user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the wearable HMD configuration taught by Comer into the device of Pirchheim such that the device is worn by a user, enabling hands-free operation and immersive AR/VR experiences while still utilizing Pirchheim's robust motion detection. The motivation is to enable immersive augmented reality experiences where the user can move freely, as discussed by Comer in Paragraph [0003] ("The goal of virtual reality (VR) is to immerse users in virtual environments").
Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 8, and the combination of Pirchheim and Comer teaches all the limitations as in Claim 8. Pirchheim further discloses that these features can be implemented on a computer-readable storage medium (Pirchheim, Paragraph [0008], "Aspects of the disclosure describe an example method, apparatus, non-transitory computer readable medium"; [0009], "In certain aspects, the non-transitory computer readable storage medium may include instructions executable by a processor for performing aspects of simultaneous localization and mapping (SLAM) as described herein").
Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.
Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.
Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 11 and therefore is rejected under the same rationale.
Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 12 and therefore is rejected under the same rationale.
Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 7 and therefore is rejected under the same rationale.
Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pirchheim et al. (US 20150262029 A1, hereinafter Pirchheim), in view of Comer et al. (US 20190383937 A1, hereinafter Comer) as applied to Claims 1, 8, and 14 above, respectively, and further in view of Hare (US 20190178654 A1).
Regarding Claim 13, the combination of Pirchheim and Comer teaches the invention in Claim 12.
The combination further teaches wherein the at least one processor is configured to operate the device by: displaying virtual content (Pirchheim, Paragraph [0044], "Virtual objects (e.g., text, images, video) may be inserted into the representation of a scene depicted on a device display").
But the combination does not explicitly disclose anchored to the moveable platform on which the device is disposed.
However, Hare teaches displaying virtual content anchored to the moveable platform on which the device is disposed (Hare, Paragraph [0172], "This same method may be applied to anchor a virtual object to features of an object that is physically moving (e.g. a controller <read on moveable platform>)."; Paragraph [0025], "obtaining virtual object data indicating (1) an anchor relationship between a virtual object and the feature ... determining ... coordinates of the virtual object based on ... the feature ... and ... displacement... relative to the feature").
Hare and Pirchheim are analogous since both of them are dealing with tracking/mapping and presentation of virtual content in environments where the device/object may move. Pirchheim provided a way of operating a device using visual/inertial tracking and switching SLAM modes based on motion state changes while the device is moving. Hare provided a way of anchoring a virtual object to features of an object that is physically moving such that the virtual object remains stable relative to the moving object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the anchoring of virtual content to a physically moving object taught by Hare into the modified invention of Pirchheim such that, when the device is disposed on (or moving with) a moveable platform, the system presents virtual content anchored to that moveable platform, thereby maintaining a stable relative placement of the virtual content during motion of the platform. The motivation is to keep virtual content stable relative to the relevant physical frame of reference during motion, as discussed by Hare in Paragraph [0172].
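For illustration only (not part of the record), the anchoring relationship Hare describes — a virtual object held at a fixed displacement relative to a feature of a moving object — can be sketched as a simple pose composition; the function name and numeric values are hypothetical:

```python
def anchored_position(platform_rotation, platform_position, offset):
    """Hypothetical sketch of Hare's anchoring: a virtual object's world
    position is the moving platform's pose composed with a fixed
    displacement expressed in the platform's frame, so the content stays
    stable relative to the platform as the platform moves."""
    # Apply the platform's 3x3 rotation to the offset, then translate by
    # the platform's world position.
    rotated = [sum(platform_rotation[i][j] * offset[j] for j in range(3))
               for i in range(3)]
    return [rotated[i] + platform_position[i] for i in range(3)]

# Platform rotated 90 degrees about z and located at x = 5; virtual content
# anchored 1 m "ahead" of the platform follows the platform's orientation.
R_90z = [[0.0, -1.0, 0.0],
         [1.0,  0.0, 0.0],
         [0.0,  0.0, 1.0]]
world = anchored_position(R_90z, [5.0, 0.0, 0.0], [1.0, 0.0, 0.0])
# world -> [5.0, 1.0, 0.0]
```

Because the offset is fixed in the platform's frame, the displayed content moves rigidly with the platform rather than with the world, which is the behavior the claim limitation recites.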
Regarding Claim 6, it recites limitations similar in scope to the limitations of Claim 13 and therefore is rejected under the same rationale.
Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 13 and therefore is rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20190007674 A1 MAPPING AND TRACKING SYSTEM WITH FEATURES IN THREE-DIMENSIONAL SPACE
US 20190346271 A1 LASER SCANNER WITH REAL-TIME, ONLINE EGO-MOTION ESTIMATION
US 20190219401 A1 PROBABILISTIC DATA ASSOCIATION FOR SIMULTANEOUS LOCALIZATION AND MAPPING
US 20210042958 A1 LOCALIZATION AND MAPPING UTILIZING VISUAL ODOMETRY
US 20200066045 A1 Multi-Device Mapping and Collaboration in Augmented-Reality Environments
US 20200082555 A1 ADAPTIVE SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) USING WORLD-FACING CAMERAS IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
US 20190383615 A1 SCALABLE SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
US 20160026253 A1 METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY
US 20170243371 A1 IMAGE BASED TRACKING IN AUGMENTED REALITY SYSTEMS
US 20210049360 A1 CONTROLLER GESTURES IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
US 10482677 B1 Distributed simultaneous localization and mapping (SLAM) in virtual, augmented, and mixed reality (xR) applications
US 20200066044 A1 Suggestion of Content Within Augmented-Reality Environments
US 20180239144 A1 SYSTEMS AND METHODS FOR AUGMENTED REALITY
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614