DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/03/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brown et al. (US 20230306690) in view of Guo et al. (US 20230343091).
Regarding claim 1, Brown discloses A mixed reality (MR) system (Brown, “[0063] FIG. 4 is a network diagram illustrating a network environment 400 suitable for operating an AR wearable device (such as glasses 100) and a PM system (such as a scooter 300), according to some examples”), comprising:
a recognition device (Brown, “[0063] FIG. 4 is a network diagram illustrating a network environment 400 suitable for operating an AR wearable device (such as glasses 100)”); and
an object of interest that constitutes a real-world environment (Brown, fig. 4, “[0065] A user 402 operates the glasses 100 and the scooter 300. The user 402 may be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the glasses 100 and the scooter 300), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human). [0068] The AR application can also provide the user 402 with an experience associated with operation of the scooter 300 in addition to presenting information provided by the scooter 300”),
a visual representation of the object of interest is capable of being superimposed in a virtual (VR) environment (Brown, “[0005] the term “augmented reality” or “AR” refers to both augmented reality and virtual reality as traditionally understood, unless the context indicates otherwise. [0083] If the locations of any PM systems have been received or determined, relevant information is displayed to the user via the glasses 100. For example, if a PM system is located in a direction that is within the field of view of the glasses 100, a virtual reality effect or object or graphical representation such as a scooter icon 610 is displayed by the glasses 100 in a direction in the field of view that corresponds to the direction in which the PM system is located”) wherein:
the recognition device and the object of interest are capable of communicating with each other (Brown, “[0040] In a PM system and AR wearable device integration, the PM system and the AR wearable device are in communication with each other. Each, individually, or through each other, the PM system and the AR wearable device may also be in communication with other devices (such as mobile phones) or with networks containing other devices (such as servers)”).
On the other hand, the above embodiment of Brown fails to explicitly disclose but another embodiment of Brown discloses the recognition device is capable of estimating a visual pose of the object of interest without pre-storing characteristics of the object of interest and configured to re-project the visual representation of the object of interest into the VR environment based on the estimated visual pose (Brown, “[0040] The PM system may provide the AR wearable device with telemetry information such as speed, acceleration, position of the controls, user position and pose, and battery level. [0069] the server 410 is used to detect and identify the physical object 406 based on sensor data (e.g., image and depth data, location) from the glasses 100 or the scooter 300 and to determine a position or pose of at least one of the glasses 100, the scooter 300, and the physical object 406 based on the sensor data. The server 410 can also retrieve or generate a virtual object 414 based on the pose and position of the glasses 100, the scooter 300, the physical object 406, and, in some implementations, of the user device 404. The server 410 or the user device 404 communicates the virtual objects 414 to the glasses 100, which can then display the virtual objects 414 to the user 402 at an appropriate time. The object recognition, tracking, virtual object generation and AR rendering can be performed on either the glasses 100, the scooter 300, the user device 404, the server 410, or a combination thereof”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the embodiments of Brown. That is, applying the generation of a virtual object based on the pose and position of the scooter, as in the second embodiment, to the glasses of the first embodiment of Brown. The motivation/suggestion would have been to provide the virtual object via a more compact integrated system without a remote server.
On the other hand, Brown fails to explicitly disclose but Guo discloses wherein the visual representation of the object of interest matches the visual pose of the object of interest (Guo, “[0022] The comparison system displays the virtual object in the user interface over the detected physical object (the protein bar) such that the displayed virtual object has a three-dimensional orientation of a detected physical marker that is associated with the detected physical object. For instance, this aligns the displayed virtual object with an orientation of the detected physical object based on a location of the image capture device”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Brown and Guo, to include all limitations of claim 1. That is, applying the aligning of a virtual object to the physical object of Guo to the AR system of Brown. The motivation/suggestion would have been that the physical objects (real objects) are comparable using virtual objects (computer-generated objects) that display information about the physical objects (Guo, [0004]).
Claim(s) 2-3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brown et al. (US 20230306690) in view of Guo et al. (US 20230343091), and further in view of HILL et al. (US 20220020217).
Regarding claim 2, Brown in view of Guo discloses The MR system of claim 1.
On the other hand, Brown in view of Guo fails to explicitly disclose but HILL discloses the recognition device comprises a first wideband module, and the object of interest comprises a second wideband module; and the first wideband module of the recognition device and the second wideband module of the object of interest are capable of communicating with each other via wideband radio (HILL, “[0078] The HMD 805 and the tracked object 810 are able to communicate over a wireless connection (e.g., NFC, Bluetooth, wideband radio connection, etc.) or perhaps even a wired connection. The tracked object 810 is able to use this connection to transmit its pose information to the HMD 805. Based on the received pose information from the tracked object 810, the HMD 805 is able to then identify where the tracked object 810 is located in relation to the HMD 805”. Therefore, the wideband radio connection indicates that each of the HMD and the tracked object has a wideband module).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Brown in view of Guo and HILL, to include all limitations of claim 2. That is, applying the wideband radio connection of HILL to the communication between the HMD and the object of Brown in view of Guo. The motivation/suggestion would have been to provide the ability to dynamically and in real-time modify occluders based on changing conditions in the MR scene (e.g., changes in the user's body pose) with reduced computational expense relative to conventional systems (HILL, [0050]).
Regarding claim 3, Brown in view of Guo and HILL discloses The MR system of claim 2.
Brown further discloses the recognition device further comprises a detecting module, which is configured to detect a presence of the object of interest (Brown, “[0033] The AR wearable device can detect and highlight obstacles, traffic control devices, and hazards to warn the user of their presence”).
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brown et al. (US 20230306690) in view of Guo et al. (US 20230343091), HILL et al. (US 20220020217), and further in view of SALEH (US 20240013601) or WANG et al. (US 20230266433).
Regarding claim 4, Brown in view of Guo and HILL discloses The MR system of claim 3.
On the other hand, Brown in view of Guo and HILL fails to explicitly disclose but SALEH discloses the detecting module is configured to send out periodical beacons; the second wideband module is configured to listen to the periodical beacons and to send a response back to the detecting module in response to the periodical beacons; and the detecting module is configured to receive the response from the second wideband module so as to detect the presence of the object of interest (SALEH, “[0071] In an embodiment of the invention, the wireless communication between the UD 30 and the SACD 20 is based on at least one of the following wireless technologies: Radio Frequency Identification (RFID), Bluetooth Low Energy (BLE), Near Field Communication (NFC), Wi-Fi, 3/4/5G, and Ultra-Wide Band (UWB). [0080] The wireless technology used is preferably selected from BLE, NFC, RFID, and UWB. [0104] the SACD 20 sends access control notification signals (preferably on a continuous and periodic manner) (142), the UD 30 scans for and processes the access control notification signals (144), the UD 30 sends an access authorization request to the SACD with the UD ID (or user ID) only if the UD 30 is worn/used by the user (and/or face shield is being secured depending on the application) and the user is in compliance with any other H&S requirements as determined by the UD 30 (146), the SACS grants access to the user inside the controlled zone in case of successful authentication and compliance with the H&S requirements including wearing the UD 30 (148)”. Therefore, granting access indicates a UD is detected).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined SALEH into the combination of Brown, Guo and HILL, to include all limitations of claim 4. That is, applying the communication between the SACD and UD of SALEH to the system of Brown, Guo, and HILL. The motivation/suggestion would have been to provide a user device comprising a smart access control unit (SACU) comprising a communication unit configured/adapted to be in wireless communication with a smart access control device (SACD) for controlling access of the user to a controlled zone based on said wireless communication (SALEH, [0005]).
Furthermore, WANG also discloses (WANG, “[0021] the invention may provide an asset-tracking system for tracking a target tag in a space, wherein the target tag periodically emits a target beacon signal, wherein the asset-tracking system comprises a plurality of listener nodes arranged in the space and configured to detect the target beacon signal, wherein the asset-tracking system comprises a control system, wherein the control system has access to (i) listener location data and to (ii) map data, wherein in an operational mode: (a) the control system (is configured to) determines the presence of an object, wherein an object tag is associated to the object, and wherein the object tag is configured to emit an object beacon signal, wherein the plurality of listener nodes are configured to detect the object beacon signal and to provide a related object signal to the control system”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined WANG into the combination of Brown, Guo and HILL, to include all limitations of claim 4. That is, applying the determination of the presence of an object of WANG to the system of Brown, Guo, and HILL. The motivation/suggestion would have been that the control system may (be configured to) determine the presence of an object (in the space), especially wherein an object tag is associated to the object (WANG, [0009]).
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brown et al. (US 20230306690) in view of Guo et al. (US 20230343091), HILL et al. (US 20220020217), and further in view of OHASHI et al. (US 20210385654).
Regarding claim 6, Brown in view of Guo and HILL discloses The MR system of claim 3.
On the other hand, Brown in view of Guo and HILL fails to explicitly disclose but OHASHI discloses the second wideband module is configured to actively send out a probe request; and the detecting module is configured to receive the probe request from the second wideband module and to send a probe response back to the second wideband module to confirm a reception of the presence of the object of interest (OHASHI, fig.4, “[0039] FIG. 2 is a sequence diagram illustrating an example of a basic flow of the authentication process according to the present embodiment. As illustrated in FIG. 2, the portable device 100 first transmits a ranging trigger signal (Step S103). According to the present embodiment, for example, the portable device 100 transmits a signal (ranging trigger signal) for instructing to transmit a first ranging signal before the control device 200 transmits the first ranging signal. Note that, the ranging trigger signal is also an example of the ranging signal. For example the UWB signal is used for the ranging trigger signal. [0040] Next, when the ranging trigger signal is received, the control device 200 transmits a ranging request signal for requesting a response for the ranging process, as the first ranging signal (Step S106). For example the UWB signal is used for the ranging request signal”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined OHASHI into the combination of Brown, Guo and HILL, to include all limitations of claim 6. That is, applying the communication illustrated in fig.4 of OHASHI to the system of Brown, Guo, and HILL. The motivation/suggestion would have been to provide a novel and improved control device and control method that make it possible to prevent signals from being mixed up during transmission/reception of the signals to be used for an authentication process between devices (OHASHI, [0005]).
Claim(s) 25, 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brown2 (US 20220206102) in view of Li et al. (US 20100166317), and further in view of Johnson et al. (US 11200745).
Regarding claim 25, Brown2 discloses A method for visual pose determination of an object of interest in a mixed reality (MR) system, comprising: advertising a presence of the object of interest to a recognition device (Brown2, “[0024] Advanced AR technologies, such as computer vision and object tracking, may be used to produce a perceptually enriched and immersive experience. Object recognition and tracking algorithms are used to detect an object in a digital image or video, estimate its orientation or pose, and track its movement over time”);
transmitting data between the object of interest and the recognition device via wideband radio (Brown2, “[0074] The example tracking and display system 400, as shown in FIG. 4, includes a plurality of ultra-wideband (UWB) pulse transmitters 620 in wireless communication with one or more UWB receivers 680”), wherein:
the object of interest constitutes a real-world environment and a visual representation of the object of interest is capable of being superimposed into a virtual (VR) environment (Brown2, “Abstract: One or more ultra-wideband (UWB) transmitters are mounted to each movable object in a physical environment including at least two synchronized UWB receivers. [0020] The rendering application presents a virtual element on a display as an overlay relative to the calculated current object location and in relative proximity to the determined current eyewear location. [0086] The rendering application 920 prepares a virtual element 700 for presentation on a display as an overlay relative to a movable object 610”); and
the transmitted data includes at least one characteristic of the object of interest, which is not pre-stored in the recognition device (Brown2, “[0113] In some example implementations, the pulse transmitter 620 includes a power supply (e.g., a battery), a pulse generator, a transmitter, an antenna, and a read-only memory (ROM) or a chip with read-write capability. The ROM includes an object identifier or stock keeping unit (SKU) associated with the movable object, and a predefined object mesh 611 (as shown near the round tabletop 610-1 in FIG. 7). In this example, data about the object mesh 611 (stored in the ROM) is included in the broadcast pulse, so that the object location application 910 utilizes the object mesh 611 in calculating the current objection location 615”).
On the other hand, Brown2 fails to explicitly disclose but Li discloses performing a coarse pose estimation of the object of interest; and performing a fine pose estimation of the object of interest based on the coarse pose estimation of the object of interest (Li, “The performance of pose determination may initially be done in a relatively coarse fashion using a statistical method such as the mechanism shown in FIG. 2. Based on the coarse pose information determined, fine pose information may then be accomplished using a structure based method thereby fusing statistical and structural methods”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Li and Brown2. That is, adding the fine pose estimation of Li to estimate the object pose of Brown2. The motivation/suggestion would have been to provide a method of fine pose estimation of the object of interest (Li, [0005] A method, apparatus and computer program product are therefore provided to enable an improved face detection mechanism).
On the other hand, Brown2 in view of Li fails to explicitly disclose but Johnson discloses reprojecting by the recognition device the visual representation of the object of interest into the VR environment (Johnson, claim 1, “reprojecting an image of the physical environment captured via one of the sensors of the device worn by the user based on the 3D model representing the object in the physical environment and the current viewpoint of the eye of the user, the reprojected image depicting a visualization of the object in the physical environment surrounding the user; and displaying, along with the virtual-reality scene, the reprojected image depicting the visualization of the object in the physical environment surrounding the user according to the current viewpoint of the eye of the user on the device worn by the user in response to the determination that the one or more alert criteria are satisfied”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Johnson into the combination of Li and Brown2, to include all limitations of claim 25. That is, adding the reprojecting step of Johnson to reproject the object of interest of Brown2. The motivation/suggestion would have been to provide a feature that automatically alerts the user when notable events occur in his environment (Johnson, col. 1, lines 54-56).
Regarding claim 26, Brown2 in view of Li and Johnson discloses The method of claim 25.
Brown2 further discloses the object of interest is a couch or a desk; and the recognition device is a movable device (Brown2, “[0025] The term “pose” refers to the static position and orientation of an object at a particular instant in time. [0118] In this example, a camera system coupled to the eyewear device 100 captures frames of video data as the eyewear moves through the physical environment”. Especially, Brown2 discloses a round tabletop in fig. 7, which is analogous to the coffee table in fig. 2 of the instant application. This type of object of interest is static during pose determination and can be moved to another location as well).
Allowable Subject Matter
Claim(s) 24 is allowed.
The following is an examiner’s statement of reasons for allowance:
The claimed invention provides a recognition device within a mixed reality (MR) system comprising a camera, a first wideband module, a tracking module, and a rendering module.
US 20230258756 A1 to Brown teaches “Abstract, Example systems, devices, media, and methods for tracking movable objects and presenting virtual elements on a display in proximity to the movable objects. Ultra-wideband (UWB) transmitters are mounted to each movable object in an environment including at least two synchronized UWB receivers”.
Independent claim 24 is distinguished from Brown because of the combination of all the limitations in the independent claim, particularly the limitations “a tracking module configured to calculate a visual pose of the object of interest by matching the reference images of the object of interest to image features extracted from a segment of video flux provided by the camera; and a rendering module configured to compute a visual representation of the object of interest based on the calculated visual pose of the object of interest and the reference images of the object of interest”. None of the prior art of record, or any prior art searched, alone or in combination, renders the above claimed invention obvious.
Claim(s) 5, 7-23 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 5, it recites, the first wideband module is configured to send a characteristic request of the object of interest to the second wideband module once the first wideband module is notified of the presence of the object of interest; the second wideband module is configured to send at least one characteristic of the object of interest to the first wideband module in response to the characteristic request; the at least one characteristic of the object of interest is one of a group consisting of a three-dimensional (3D) model of the object of interest, a series of photos of the object of interest under different perspectives, and an identifier of the object of interest. None of the prior art of record, or any prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim(s) as a whole.
Regarding claim 7, it recites, the second wideband module is configured to send at least one characteristic of the object of interest to the first wideband module once the second wideband module receives the probe response; and the at least one characteristic of the object of interest is one of a group consisting of a 3D model of the object of interest, a series of photos of the object of interest under different perspectives, and an identifier of the object of interest. None of the prior art of record, or any prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim(s) as a whole.
Regarding claim 8, it recites, the second wideband module is configured to send at least one characteristic of the object of interest to the first wideband module after the presence of the object of interest is detected; and the at least one characteristic of the object of interest is one of a group consisting of a 3D model of the object of interest, a series of photos of the object of interest under different perspectives, and an identifier of the object of interest. None of the prior art of record, or any prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim(s) as a whole.
Response to Arguments
The rejection to claim 26 under 35 U.S.C. 112(a) is withdrawn in view of the amendment.
Applicant’s arguments with respect to claim(s) 1-26 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI whose telephone number is (571)270-0497. The examiner can normally be reached Monday - Friday, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DEVONA FAULK can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRACE Q LI/Primary Examiner, Art Unit 2618 3/20/2026