DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 19 November 2025 have been fully considered but they are not persuasive.
Applicant argues that “Olwal neither teaches nor suggests modifying its attention-tracking system to generate commands for external IoT devices. Its architecture remains entirely self-contained, operating solely within the virtual display environment”. Examiner respectfully disagrees and directs Applicant’s attention to the following passages of Olwal:
¶ [0080]: For example, the wearable computing device 100 may receive one or more wireless signals and use the wireless signals to communicate with other devices such as mobile computing device 202 and/or server computing device 204, or other devices within range of antennas 244. The wireless signals may be triggered via a wireless connection such as a short-range connection (e.g., Bluetooth connection or near-field communication (NFC) connection) or an Internet connection (e.g., Wi-Fi or mobile network).
¶ [0124]: In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 600 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
¶ [0140]: The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
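Solely to illustrate the type of communication path the cited passages describe, the following sketch shows one conceivable way a command generated on a wearable device could be forwarded to an external device over a LAN or Internet connection. It is offered for illustration only; the host address, port, and message format are hypothetical and are not drawn from Olwal or from the claims.

```python
# Illustration only: forwarding a user-input command from a wearable device to an
# external device over an ordinary LAN/Internet connection, in the spirit of the
# communication paths quoted above. The host, port, and message schema are
# hypothetical examples and are not taken from Olwal or the claims.
import json
import socket

def forward_command(command: dict, host: str = "192.168.1.50", port: int = 9000) -> None:
    """Serialize a command and push it to an external device on the network."""
    payload = json.dumps(command).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(payload)

if __name__ == "__main__":
    try:
        # Example: a gaze-plus-gesture command addressed to a hypothetical lamp.
        forward_command({"target": "living_room_lamp", "action": "toggle_power"})
    except OSError as exc:
        print("no device reachable at the placeholder address:", exc)
```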
Claim Rejections - 35 USC § 102
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2 and 4-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Olwal et al. (US 2023/0086766 A1).
Regarding Claim 1, Olwal discloses a rapid user input determination system comprising: a control device [Olwal: FIG. 2]; a head-mounted device signal-connected to the control device [Olwal: ¶ [0054]: The wearable computing device 100 is depicted as AR glasses in this example. In general, the device 100 may include any or all components of systems 100 and/or 200 and/or 600. The wearable computing device 100 may also be indicated as smart glasses representing an optical head-mounted display device designed in the shape of a pair of eyeglasses] and comprising at least one display and at least one eye-tracking unit [Olwal: ¶ [0007]: In an example embodiment, the detecting a defocus event associated with a first region of the content may be electronically detecting a user having previously focused on the first region of the content to no longer focus the first region (e.g., determined by a gaze trajectory of a user's eye or of the user's eyes and thus a viewing focus of the user)]; and a motion sensing device [Olwal: ¶ [0047]: In some implementations, the defocus event may include user-related motion such as a head turn, a head lift or lower, a device removal from the user (e.g., removing a wearable computing device), and/or any combination thereof; and ¶ [0065]: The sensor system 214 may also include an inertial motion unit (IMU) sensor 220. The IMU sensor 220 may detect motion, movement, and/or acceleration of the wearable computing device 100. The IMU sensor 220 may include various different types of sensors such as, for example, an accelerometer, a gyroscope, a magnetometer, and other such sensors. In some implementations, the sensor system 214 may include screen embedded sensors that may detect particular user actions, motions, etc., directly from the virtual screen]; wherein, the control device is configured to receive a command as a user input and to output a corresponding content based on the command to the head-mounted device and the motion sensing device [Olwal: ¶ [0067]: In some implementations, the camera 224 may be a point tracking camera that can, for example, detect and follow one or more optical markers on an external device, such as, for example, optical markers on an input device or finger on a screen. The input may be detected by an input detector 215, for example]; the at least one display is configured to demonstrate a display content and to update the display content based on the corresponding content received from the control device [Olwal: ¶ [0067]], the eye-tracking unit is configured to track a movement and a motion trajectory of at least one of the user's eyes [Olwal: ¶ [0007]] to determine a visual focus of the user on the display content [Olwal: ¶ [0007]], the motion sensing device is configured to detect a motion or a posture of the user's hand and to transmit a hand-gesture information serving as the command to the control device [Olwal: ¶ [0067]], wherein the motion and the posture are respectively defined as a triggering action and a triggering posture, the control device combines the visual focus of the user with the triggering action or the triggering posture of the user as the command [Olwal: ¶ [0038]: For example, if the user performs an action that triggers an object to move, the user may follow the trajectory of the object. While the user looks away (i.e., defocus on content), the object continues to move in a trajectory. Thus, when the user refocuses on the content, the model 116 may generate and render focus transition markers to guide the user to the current position of the object], and wherein at least a portion of the motion sensing device is embedded in or detachably mounted on the head-mounted device [Olwal: FIG. 1], the control device is further configured to store and to output the command to at least one Internet of Things (IoT) device after the control device receives the command as the user input to the head-mounted device or the motion sensing device [Olwal: ¶ [0140]: The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet].
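As an illustration of the claimed combination of a visual focus with a triggering action or triggering posture, the following sketch shows one conceivable way a control device could fuse the two inputs into a single command, store it, and output it to an IoT device. It is a sketch under assumed names (GazeSample, GestureEvent, ControlDevice); it does not represent Olwal's implementation or the Applicant's.

```python
# Illustration only: fusing a tracked visual focus with a detected triggering
# gesture into one command, storing it, and outputting it toward an IoT device.
# All class names and thresholds are hypothetical assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class GazeSample:
    target_id: str      # element or device the eyes are focused on
    confidence: float   # eye-tracker confidence in the focus estimate

@dataclass
class GestureEvent:
    kind: str           # e.g. "pinch" (triggering action) or "fist" (triggering posture)

class ControlDevice:
    def __init__(self) -> None:
        self.command_log: list[dict] = []   # stored commands

    def combine_and_output(self, gaze: GazeSample, gesture: GestureEvent) -> dict | None:
        # Fuse the two inputs only when the focus estimate is reliable.
        if gaze.confidence < 0.8:
            return None
        command = {"target": gaze.target_id, "trigger": gesture.kind}
        self.command_log.append(command)     # store the command
        self._output_to_iot(command)         # then output it to the IoT device
        return command

    def _output_to_iot(self, command: dict) -> None:
        # Placeholder transport; a real system would use a network connection.
        print("sending to IoT device:", command)

ctrl = ControlDevice()
ctrl.combine_and_output(GazeSample("ceiling_fan", 0.95), GestureEvent("pinch"))
```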
Regarding Claim 2, Olwal discloses all the limitations of Claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the head-mounted device further comprises at least one camera [Olwal: ¶ [0062]: In some implementations, the image sensor 216 is a red, green, blue (RGB) camera. In some examples, the image sensor 216 includes a pulsed laser sensor (e.g., a LiDAR sensor) and/or depth camera. For example, the image sensor 216 may be a camera configured to detect and convey information used to make an image. In some implementations, the image sensor 216 is an eye tracking sensor (or camera), such as eye/gaze tracker 218 that captures movements of an eye of a user accessing device 100, for example] and at least one inertial measurement unit [Olwal: ¶ [0065]: The sensor system 214 may also include an inertial motion unit (IMU) sensor 220. The IMU sensor 220 may detect motion, movement, and/or acceleration of the wearable computing device 100], the at least one camera is configured to detect an external environment of the head-mounted device to collect multiple images or multiple video records in real time [Olwal: ¶ [0067]: The sensor system 214 may also include a camera 224 capable of capturing still and/or moving images. In some implementations, the camera 224 may be a depth camera that can collect data related to distances of external objects from the camera 224], and to transmit the images or video records to the control device [Olwal: FIG. 2], the at least one inertial measurement unit is configured to detect a movement of the user's head [Olwal: ¶ [0109]: In some implementations, the tracking may be performed by the IMU 220. In some implementations, tracking may include assessing and tracking signals that include any one or more (or any combination of) gaze signals (e.g., eye gaze), head tracking signals], and a position of the display content demonstrated on the at least one display can be adjusted based on a detecting result of the inertial measurement unit [Olwal: ¶ [0109]: In some implementations, models 248 may be used to assess tracked focus or related signals to determine next steps and/or next focus events. Any combination of the above may be used to ascertain and track the focus of the user].
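By way of illustration of the limitation that the position of the display content can be adjusted based on a detecting result of the inertial measurement unit, the following sketch shifts rendered content opposite to a head rotation reported by an IMU-style reading. The gain constant and field names are assumptions made for the example, not teachings of Olwal.

```python
# Illustration only: shifting displayed content based on an IMU-style head-rotation
# reading so it appears world-stable. The gain constant and field names are
# assumptions for this example.
from dataclasses import dataclass

@dataclass
class ImuReading:
    yaw_deg: float    # left/right head turn
    pitch_deg: float  # head lift or lower

PIXELS_PER_DEGREE = 12.0  # assumed display gain, not from the cited reference

def reposition_content(x: float, y: float, imu: ImuReading) -> tuple[float, float]:
    """Move content opposite to the detected head motion."""
    return (x - imu.yaw_deg * PIXELS_PER_DEGREE,
            y + imu.pitch_deg * PIXELS_PER_DEGREE)

print(reposition_content(640.0, 360.0, ImuReading(yaw_deg=2.0, pitch_deg=-1.0)))
```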
Regarding Claim 4, Olwal discloses all the limitations of Claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the motion sensing device comprises a depth camera, a radar, or a wristband [Olwal: ¶ [0067]: depth camera].
Regarding Claim 5, Olwal discloses all the limitations of Claim 2, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the motion sensing device comprises a depth camera, a radar, or a wristband [Olwal: ¶ [0067]: depth camera].
Regarding Claim 6, Olwal discloses all the limitations of Claim 3, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the motion sensing device comprises a depth camera, a radar, or a wristband [Olwal: ¶ [0067]: depth camera].
Regarding Claim 7, Olwal discloses all the limitations of Claim 1, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses a method of using the rapid user input determination system as claimed in claim 1 and comprising following steps: demonstrating a display content on a head-mounted device [Olwal: ¶ [0088]: The gaze target 310 may change over time as the user interacts with content 310, 312, 314, and/or other content configured for display in screen 302A]; determining a target selected by a user with the eye-tracking unit [Olwal: ¶ [0087]: the wearable computing device 100 may determine via eye/gaze tracker 218 (or other device associated with sensor system 214) that a user operating device 100 is paying attention to (e.g., focusing on) the content 306]; and detecting a triggering action or a triggering posture with the motion sensing device to confirm the target selected by the user [Olwal: ¶ [0089]: The interruption may be visual, audial, or tactile in nature. The interruption may trigger the eye gaze of the user to move to another focus or be removed from an original focus (e.g., gaze target). For example, the user may hear another user call out the user's name, as shown by indicator 318. The user may then defocus from content 306 in screen 302B and the focus (e.g., attention, gaze target) may change, as shown by map 316, which depicts the eye gaze change of the user. In this example, the eye/gaze tracker 218, for example, may detect that the user is looking beyond the screen 302B and may begin to trigger changes in presentation of content on device 100].
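For illustration of the three recited steps, the following sketch demonstrates display content, resolves the user's gaze point to a target, and then confirms the target with a triggering gesture. The layout, gaze coordinates, and gesture label are hypothetical placeholders, not elements taken from Olwal or the claims.

```python
# Illustration only: the three recited steps, with hypothetical stand-ins for the
# display layout, the eye-tracking result, and the detected gesture.
def demonstrate_display_content(items: list[str]) -> None:
    print("Displaying:", ", ".join(items))

def determine_gaze_target(gaze_point: tuple[int, int],
                          layout: dict[str, tuple[int, int, int, int]]) -> str | None:
    """Return the item whose bounding box (x, y, w, h) contains the gaze point."""
    gx, gy = gaze_point
    for name, (x, y, w, h) in layout.items():
        if x <= gx <= x + w and y <= gy <= y + h:
            return name
    return None

def confirm_with_gesture(target: str | None, gesture: str) -> bool:
    """Treat a pinch as the triggering action that confirms the gazed-at target."""
    return target is not None and gesture == "pinch"

layout = {"light_icon": (100, 100, 80, 80), "fan_icon": (300, 100, 80, 80)}
demonstrate_display_content(list(layout))            # step 1: demonstrate display content
target = determine_gaze_target((130, 140), layout)   # step 2: eye-tracking selection
print("Selected:", target, "confirmed:", confirm_with_gesture(target, "pinch"))  # step 3
```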
Regarding Claim 8, Olwal discloses all the limitations of Claim 7, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein a part of the display content interacts with an image of an external environment, the part of the display content corresponds to edges or contours of the image, or tracks a moving object displayed in the image [Olwal: ¶ [0104]: For example, if the user performs an action that triggers an object to move, the device 100 (and the user) may follow the trajectory of the object. While the user is defocused (e.g., looking away), the object will keep moving, and when the user refocuses (e.g., looks back), the device 100 may guide the user to the current position of the object using content, shapes, highlights, or other such marker].
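As an illustration of display content that tracks a moving object so the user can be guided back to its current position, the following sketch predicts the object's position during a defocus period with a simple constant-velocity assumption. That model is an assumption made for the example, not a method disclosed by Olwal.

```python
# Illustration only: predicting a moving object's current position after a defocus
# period so an overlay marker can guide the user back to it. The constant-velocity
# model is an assumption made for this example, not a method disclosed by Olwal.
def predict_position(start: tuple[float, float],
                     velocity: tuple[float, float],
                     elapsed_s: float) -> tuple[float, float]:
    return (start[0] + velocity[0] * elapsed_s,
            start[1] + velocity[1] * elapsed_s)

# Object was at (50, 200) moving 30 px/s to the right; the user looked away for 2 s.
marker_xy = predict_position((50.0, 200.0), (30.0, 0.0), 2.0)
print("Render focus-transition marker at", marker_xy)
```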
Regarding Claim 9, Olwal discloses all the limitations of Claim 7, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the display content comprises a user interface, a virtual reality image or an augmented reality image [Olwal: ¶ [0122]: the virtual screen 226 is associated with an augmented reality device (e.g., wearable computing device 100) configured to provide a field of view that includes an augmented reality view (e.g., screen 302A) and a physical world view (e.g., 304A)]; wherein, the user interface includes multiple selection boxes, multiple symbols, multiple text strings or at least one button [Olwal: ¶ [0144]: In some implementations, one or more input devices included on, or connect to, the computing device 650 can be used as input to the AR space. The input devices can include, but are not limited to, a touchscreen, a keyboard, one or more buttons, a trackpad, a touchpad, a pointing device, a mouse, a trackball, a joystick, a camera, a microphone, earphones or buds with input functionality, a gaming controller, or other connectable input device]; and the virtual reality image or the augmented reality image includes an auxiliary image corresponding to at least one Internet of Things (IoT) device [Olwal: ¶ [0144]].
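To illustrate a user interface in which selection boxes and auxiliary images correspond to IoT devices, the following sketch defines a minimal UI description with such an association. The element kinds and device identifiers are hypothetical and are not drawn from Olwal or the claims.

```python
# Illustration only: a minimal user-interface description in which selection boxes
# (auxiliary images) are associated with IoT devices. Element kinds and device
# identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UiElement:
    kind: str                             # "selection_box", "symbol", "text", or "button"
    label: str
    linked_iot_device: str | None = None  # auxiliary image <-> IoT device association

@dataclass
class UserInterface:
    elements: list[UiElement] = field(default_factory=list)

ui = UserInterface([
    UiElement("selection_box", "Lamp", linked_iot_device="lamp_01"),
    UiElement("selection_box", "Thermostat", linked_iot_device="thermo_kitchen"),
    UiElement("button", "OK"),
])
print([e.label for e in ui.elements if e.linked_iot_device])
```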
Regarding Claim 10, Olwal discloses all the limitations of Claim 9, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein the display content dynamically displays a position of the user's visual focus on the display in real time and a content shown at the position overlays other contents of the display content [Olwal: ¶ [0121]: The device 100 may then obtain a model for the UI. The model for the UI may define a plurality of states of and user interactions associated with the UI. Based on the tracked focus of the user, the model of the attention, and a determined state, from the plurality of states, of the UI, the device 100 may trigger rendering, for a second time period, at least one focus transition marker (e.g., marker 326) overlaid on at least a portion of the second region of screen 302D and the content therein].
Regarding Claim 11, Olwal discloses all the limitations of Claim 9, and is analyzed as previously discussed with respect to that claim.
Furthermore, Olwal discloses wherein, when the user's visual focus is located on one of the selection boxes or the auxiliary images, a display state of the selection box or the auxiliary image changes, and a change of the display state involves color changes or blinking [Olwal: ¶ [0146]: In some implementations, one or more output devices included on the computing device 650 can provide output and/or feedback to a user of the AR headset 690 in the AR space. The output and feedback can be visual, tactical, or audio. The output and/or feedback can include, but is not limited to, vibrations, turning on and off or blinking and/or flashing of one or more lights or strobes, sounding an alarm, playing a chime, playing a song, and playing of an audio file].
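To illustrate a change of display state, such as a color change or blinking, when the user's visual focus lands on a selection box or auxiliary image, the following sketch toggles those properties on focus. The state fields and colors are hypothetical assumptions for the example.

```python
# Illustration only: changing a selection box's display state (color or blinking)
# when the user's visual focus lands on it. State fields and colors are hypothetical.
def update_display_state(element: dict, is_focused: bool) -> dict:
    if is_focused:
        element["color"] = "#00A0FF"   # highlight color while focused
        element["blinking"] = True
    else:
        element["color"] = "#FFFFFF"
        element["blinking"] = False
    return element

box = {"label": "Lamp", "color": "#FFFFFF", "blinking": False}
print(update_display_state(box, is_focused=True))
```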
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN R MESSMORE whose telephone number is (571)272-2773. The examiner can normally be reached Monday-Friday 9-5 EST/EDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN R MESSMORE/Primary Examiner, Art Unit 2482