Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/15/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 800. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 100, 103, 106, 112a, 124, 203, 250, 255, 600, 1215, 1226, 1436, 1446. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The use of the terms IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, WirelessHART, MiWi, Ethernet, and HomePlug, which are trade names or marks used in commerce, has been noted in this application. Each term should be accompanied by generic terminology; furthermore, each term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Browy (US 20200210127 A1).
Regarding claim 1, Browy teaches a non-transitory computer readable storage medium including instructions that, when executed by a head-wearable device of an extended-reality system, cause the head-wearable device to perform:
in response to initiation of an artificially intelligent assistant, capturing contextual data, the contextual data including one or more of image data and audio data (par. 0123: “Referring to FIG. 32, in various embodiments, each member is connected, and each member becomes another sensing “node” to the overall system, providing data not only pertinent to that operators biometrics, but also information pertaining to the environment around such operator, such as for evidence collection, personnel or structure identification, video/audio/photo capture, thermal/IR/SWIR imaging, simultaneous localization and mapping (“SLAM”), localization via wireless connectivity triangulation, GPS, and/or IP address, traffic or congestion sensing/reporting, access to remote databases such as crime or healthcare databases, sensing or receiving mapping data pertinent to the world around each operator, etc.”);
determining, based on the contextual data, a contextual cue (par. 0189: “the user of the mixed reality system is able to use visual and audio cues associated with the real environment to experience and interact with the corresponding virtual environment.”);
providing a portion of the contextual data and the contextual cue to the artificially intelligent assistant (par. 0169: “Learning networks, neural networks, and/or so-called “artificial intelligence” (or “AI”) computing configurations may be utilized to live stream adaptive soldier architecture to learn what operational information is likely to increase lethality, survivability, and mobility. This may be accomplished via machine learning, with the soldier being given a training mission and the model running a series of parameters and test cases; based on the output data from the training event, the system may be configured to optimize the heads-up display aspects of the wearable computing system (2, 6) based upon the level of data showed to the individual. This is a way to personalize the displayed data fidelity level to the particular user. Another implementation is the use of the machine learning model to dynamically change the data received and displayed in stressful situations, reducing the cognitive load on the user. Virtual assistants, or artificially-synthesized characters, such as that depicted in FIG. 90 and described in the aforementioned incorporated references, may be utilized to assist in efficient communication using the subject wearable computing configurations (2, 6), in roles such as general assistant, supervisor, colleague, and the like.”);
determining, by the artificially intelligent assistant, a user request based on the portion of the contextual data and the contextual cue (par. 0170: “In other words, the system may be configured such that a commander watches a multi perspective information feed through his wearable computing system (2, 6) and then with the overall picture in mind provides his local device with a hand gesture which a gesture recognition machine learning technology configuration captures this motion and interprets, based on the application definition of that gesture, to execute the desired task based on this prior determined interaction method.”);
receiving a response to the user request, wherein the response is generated using a machine-learning model (par. 0112: “FIGS. 106A-106D illustrate components of an example mixed reality system that can be used to generate and interact with a mixed reality environment, according to some embodiments.”; par. 0183: “if an object in the virtual environment is located at a first coordinate at time t0, and has certain programmed physical parameters (e.g., mass, coefficient of friction); and an input received from user indicates that a force should be applied to the object in a direction vector; the processor can apply laws of kinematics to determine a location of the object at time t1 using basic mechanics.”); and
causing the head-wearable device to present the response (par. 0184: “Output devices, such as a display or a speaker, can present any or all aspects of a virtual environment to a user. For example, a virtual environment may include virtual objects (which may include representations of inanimate objects; people; animals; lights; etc.) that may be presented to a user.”).
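By way of illustration only, the capture, cue determination, request determination, and response presentation steps recited in claim 1 may be sketched as follows. The sketch is hypothetical: all function names, data structures, and stand-in values are illustrative assumptions and are drawn neither from the instant claims nor from Browy.

```python
# Hypothetical sketch of the claim 1 pipeline; every name and value here is
# illustrative only and is not taken from the instant claims or from Browy.
from dataclasses import dataclass


@dataclass
class ContextualData:
    image: bytes  # image data captured by the head-wearable device
    audio: bytes  # audio data captured by the head-wearable device


def capture_contextual_data() -> ContextualData:
    # Stand-in for camera/microphone capture triggered by assistant initiation.
    return ContextualData(image=b"<image bytes>", audio=b"<audio bytes>")


def determine_contextual_cue(data: ContextualData) -> str:
    # Stand-in for deriving a contextual cue from the captured data.
    return "user is looking at signage"


def determine_user_request(portion: ContextualData, cue: str) -> str:
    # The assistant infers the user request from the data portion and the cue.
    return "translate the signage in view"


def generate_response(request: str) -> str:
    # Stand-in for a machine-learning model that generates the response.
    return f"response to: {request}"


def present(response: str) -> None:
    # Stand-in for the device's display or speaker output.
    print(response)


if __name__ == "__main__":
    data = capture_contextual_data()
    cue = determine_contextual_cue(data)
    request = determine_user_request(data, cue)
    present(generate_response(request))
```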
Regarding claim 2, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein the response is one or more of a textual response, an audible response, and a visual response (par. 0183: “In maintaining and updating a state of a virtual environment, the processor can execute any suitable software, including software relating to the creation and deletion of virtual objects in the virtual environment; software (e.g., scripts) for defining behavior of virtual objects or characters in the virtual environment; software for defining the behavior of signals (e.g., audio signals) in the virtual environment; software for creating and updating parameters associated with the virtual environment; software for generating audio signals in the virtual environment; software for handling input and output; software for implementing network operations; software for applying asset data (e.g., animation data to move a virtual object over time); or many other possibilities.”).
Regarding claim 3, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein the response includes identification of a target object and a follow-up action associated with the target object to be performed by the head-wearable device (par. 0124: “For example, an operator utilizing a connected wearable computing component (2) with direct connectivity to remote experts may encounter an unconscious patient who appears to be in cardiac arrest; the operator may ask for expert emergency medicine triage help, and an expert may come into the scene, such as a video teleconference and/or avatar presentation appearing in a portion of the operator's computing component (2) field of view, along with audio; facial recognition, other biometrics, specialized emergency responder patient smartphone access, and/or simple patient wallet identification card information may be utilized to identify the patient, and securely connected resources may be utilized to establish that the patient is a known heroin addict, and from what the appearing emergency medicine expert can see from the shared field of view of the operator, seems to be overdosed and close to death—time to urgently administer anti-opiate naloxone hydrochloride injection drug product such as that sold under the tradename NarCan.”).
Regarding claim 4, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein the portion of the contextual data is formed by compressing the contextual data (par. 0131: “In various embodiments, training system services may be remotely hosted resources, and may include, for example: a relatively comprehensive database, which may be referred to as a “data lake”, for the storage of user account and training performance data; a file store for collecting and sharing training scenarios; available server resources earmarked for cloud hosting of TSS/S training servers as needed; access to what may be termed an “Authoritative Lifestream World Map” (or “LWM”), which contains data for use in training scenario creation and processing raw data stream captured from a wearable component (2) into a preferred LWM format.” NOTE: processing a raw captured data stream into a preferred storage format is interpreted as compressing the contextual data.).
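For illustration of the “compressing the contextual data” limitation of claim 4, a minimal sketch follows, assuming a generic lossless codec; the function name and payload are hypothetical.

```python
# Hypothetical sketch: forming the "portion" of contextual data by lossless
# compression. The payload and function name are illustrative only.
import zlib


def compress_contextual_data(raw: bytes, level: int = 6) -> bytes:
    """Compress raw contextual data (e.g., captured image or audio bytes)."""
    return zlib.compress(raw, level)


raw = b"example image or audio payload " * 100
portion = compress_contextual_data(raw)
assert zlib.decompress(portion) == raw  # lossless round trip
print(f"{len(raw)} bytes compressed to {len(portion)} bytes")
```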
Regarding claim 5, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein determining, based on the contextual data, the contextual cue comprises:
determining a region of interest within the image data, the region of interest identifying a portion of the image data associated with the audio data (par. 0189: “the user of the mixed reality system is able to use visual and audio cues associated with the real environment to experience and interact with the corresponding virtual environment.”); and
cropping the image data based on the region of interest to form cropped image data (par. 0131, as above in claim 4 rejection).
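For illustration of the region-of-interest cropping recited in claim 5, a minimal sketch follows, assuming the Pillow imaging library; the frame and ROI coordinates are hypothetical.

```python
# Hypothetical sketch: cropping image data to a determined region of interest.
# Assumes the Pillow library; the frame and ROI values are illustrative only.
from PIL import Image


def crop_to_roi(image: Image.Image, roi: tuple[int, int, int, int]) -> Image.Image:
    """roi = (left, upper, right, lower), identifying the portion of the
    image data associated with the audio data (e.g., a detected speaker)."""
    return image.crop(roi)


frame = Image.new("RGB", (640, 480))  # stand-in for a captured frame
cropped = crop_to_roi(frame, (100, 50, 400, 300))
print(cropped.size)  # (300, 250)
```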
Regarding claim 6, Browy teaches the non-transitory computer readable storage medium of claim 5, wherein determining, based on the contextual data, the contextual cue further comprises:
detecting, based on the cropped image data, one or more of text and text locations (par. 0121: “Referring to FIG. 15, in various embodiments translation technologies such as those available for translating language-to-text, and text-to-different-language, may be utilized to facilitate the real-time or near-real-time involvement of members who speak language different from those of the other participants in a meeting.”); and
determining one or more of a text and text order (par. 0121, as above).
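For illustration of determining a text order from detected text and text locations, as recited in claim 6, a minimal sketch follows; the detections and the line-quantization heuristic are hypothetical.

```python
# Hypothetical sketch: ordering detected text by location into reading order.
# The detections and the line-height heuristic are illustrative only.
from dataclasses import dataclass


@dataclass
class TextDetection:
    text: str
    x: int  # left coordinate within the cropped image data
    y: int  # top coordinate within the cropped image data


def reading_order(detections: list[TextDetection], line_height: int = 20) -> list[str]:
    # Sort top-to-bottom (quantized into lines), then left-to-right.
    ordered = sorted(detections, key=lambda d: (d.y // line_height, d.x))
    return [d.text for d in ordered]


detections = [
    TextDetection("world", x=120, y=12),
    TextDetection("hello", x=10, y=10),
    TextDetection("below", x=10, y=60),
]
print(reading_order(detections))  # ['hello', 'world', 'below']
```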
Regarding claim 7, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein:
the user request is a translation request (par. 0121: “As noted above, language may be translated, such as by automated settings, to provide access and utility in multi-lingual meeting environments.”); and
the response generated by the machine-learning model is a translation of one or more of the portion of the contextual data and the contextual cue (par. 0121, as above).
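For illustration of routing a translation request to a machine-learning model, as recited in claim 7, a minimal sketch follows; the request wording and the stand-in translator are hypothetical.

```python
# Hypothetical sketch: handling a translation request. The stand-in translator
# and request wording are illustrative only.
def translate(text: str, target_lang: str = "en") -> str:
    # Stand-in for a machine-learning translation model.
    return f"[{target_lang}] {text}"


def handle_request(request: str, contextual_text: str) -> str:
    if request.startswith("translate"):
        return translate(contextual_text)
    return "unsupported request"


print(handle_request("translate the signage in view", "Ausgang"))
```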
Regarding claim 8, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein the machine-learning model is selected from a plurality of machine-learning models, and determining the user request based on the portion of the contextual data and the contextual cue further comprises:
determining at least one machine-learning model from the plurality of machine-learning models for generating the response based on the user request (par. 0231: “In some embodiments, mixed reality computing architecture 8400 may include one or more modules, which may be components of mixed reality computing architecture 8400.”);
selecting the at least one machine-learning model as the machine-learning model (par. 0231: “In some embodiments, instructions executed by a module can be a thread running within mixed reality computing architecture 8400. In some embodiments, instructions executed by a module may run within the same process address space and/or memory space as other components of mixed reality computing architecture 8400. In some embodiments, instructions executed by a module may run in a different process address space and/or memory space as other components of mixed reality computing architecture 8400. In some embodiments, instructions executed by a module may run on different hardware than other components of mixed reality computing architecture 8400. For example, instructions executed by one or more modules of mixed reality computing architecture 8400 may run on mixed reality system 112 and/or 200, while other components of mixed reality computing architecture 8400 may run on a remote server.”); and
providing the user request and one or more of the portion of the contextual data and the contextual cue to the machine-learning model (par. 0231: “In some embodiments, instructions executed by and/or data structures stored in modules within mixed reality computing architecture 8400 may communicate with other components of mixed reality computing architecture 8400 (e.g., with instructions executed by and/or data structures stored in other modules).”).
Regarding claim 9, Browy teaches the non-transitory computer readable storage medium of claim 8, wherein the plurality of machine-learning models includes one or more of an on-device machine-learning model (par. 0231: “In some embodiments, instructions executed by and/or data structures stored in modules within mixed reality computing architecture 8400 may communicate with other components of mixed reality computing architecture 8400 (e.g., with instructions executed by and/or data structures stored in other modules).”) and a remote machine-learning model (par. 0221: “Other computationally intensive tasks that may not require low-latency communications may also be offloaded to a remote server, which may transmit results back to individual mixed reality systems. For example, machine learning algorithms may be offloaded to a remote server (e.g., remote operational server 5702, remote tactical server 5704, remote strategic server 5706, and/or a data lake 5708).”).
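For illustration of selecting a machine-learning model from a plurality of models, including an on-device model and a remote model as recited in claims 8 and 9, a minimal sketch follows; the selection heuristic and both model stand-ins are hypothetical.

```python
# Hypothetical sketch: selecting one model from a plurality of models.
# The selection heuristic and both model stand-ins are illustrative only.
from typing import Callable

MODELS: dict[str, Callable[[str], str]] = {
    "on_device": lambda req: f"on-device response to: {req}",  # low latency
    "remote": lambda req: f"remote response to: {req}",        # higher capacity
}


def select_model(request: str, network_ok: bool) -> Callable[[str], str]:
    # Route computationally heavy requests to the remote model when connected.
    heavy = "translate" in request or "summarize" in request
    return MODELS["remote"] if heavy and network_ok else MODELS["on_device"]


model = select_model("translate the signage in view", network_ok=True)
print(model("translate the signage in view"))
```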
Regarding claim 10, Browy teaches the non-transitory computer readable storage medium of claim 1, wherein the contextual data includes sensor data and gestures (par. 0189: “the user of the mixed reality system is able to use visual and audio cues associated with the real environment to experience and interact with the corresponding virtual environment. As an example, while a user of VR systems may struggle to perceive or interact with a virtual object displayed in a virtual environment—because, as noted above, a user cannot directly perceive or interact with a virtual environment—a user of a MR system may find it intuitive and natural to interact with a virtual object by seeing, hearing, and touching a corresponding real object in his or her own real environment.”).
Claim 11 is substantially similar to claim 1, and differs only in that it recites the head-wearable device of claim 1 itself, as opposed to a storage medium containing instructions executed by that device. Therefore, claim 11 is rejected on similar grounds to claim 1.
Claim 12 is substantially similar to claim 2, and differs only in that it depends from claim 11 as opposed to claim 1. Therefore, claim 12 is rejected on similar grounds to claim 2.
Claim 13 is substantially similar to claim 3, and differs only in that it depends from claim 11 as opposed to claim 1. Therefore, claim 13 is rejected on similar grounds to claim 3.
Claim 14 is substantially similar to claim 4, and differs only in that it depends from claim 11 as opposed to claim 1. Therefore, claim 14 is rejected on similar grounds to claim 4.
Claim 15 is substantially similar to claim 5, and differs only in that it depends from claim 11 as opposed to claim 1. Therefore, claim 15 is rejected on similar grounds to claim 5.
Claim 16 is substantially similar to claim 1, and differs only in that it recites a method, as opposed to a storage medium containing instructions for performing said method. Therefore, claim 16 is rejected on similar grounds to claim 1.
Claim 17 is substantially similar to claim 2, and differs only in that it depends from claim 16 as opposed to claim 1. Therefore, claim 17 is rejected on similar grounds to claim 2.
Claim 18 is substantially similar to claim 3, and differs only in that it depends from claim 16 as opposed to claim 1. Therefore, claim 18 is rejected on similar grounds to claim 3.
Claim 19 is substantially similar to claim 4, and differs only in that it depends from claim 16 as opposed to claim 1. Therefore, claim 19 is rejected on similar grounds to claim 4.
Claim 20 is substantially similar to claim 5, and differs only in that it depends from claim 16 as opposed to claim 1. Therefore, claim 20 is rejected on similar grounds to claim 5.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM whose telephone number is (571) 272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN ALLEN BARHAM/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613