Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In response to this Office action, the Office respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Office in prosecuting this application.
The Office has cited particular figures, elements, paragraphs, and/or columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider each of the cited references in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages cited by the Office.
Status of Claims
- Applicant’s Amendment filed March 22, 2024 is acknowledged.
- Claim(s) 1, 7, 15, and 20 is/are amended.
- Claim(s) 1-20 is/are pending in the application.
This action is FINAL.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on March 22, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lampen et al., “A Context-Aware Assistance Framework for Implicit Interaction with an Augmented Human”, 10 July 2020, Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings (Lecture Notes in Computer Science) [retrieved on 2020-07-10], ISBN: 978-3-030-58594-5, 20 pages (NPL #7 of IDS dated February 24, 2023), in view of Arora et al., U.S. Patent Publication No. 20210043005, and Gorur Sheshagiri et al., U.S. Patent Publication No. 20190026936.
Consider claim 1, Lampen teaches a method comprising: at an electronic device in communication with a display and one or more input devices (see Lampen page 99, lines 1-10, where two different types of sensors are connected, namely the Microsoft Kinect 2 and the Microsoft HoloLens. Therefore, on the one hand, area observations enabled by the depth sensor data of the different Kinects [23] and on the other hand, positional and rotational tracking of the user by the inertial measurement unit data of the HoloLens are available; page 94, paragraph 3 where a general encapsulated software architecture for the acquisition and management of context through the interpretation of sensor data is mandatory; page 98, paragraph 4 where Besides the proposal of the general concept of an encapsulated architecture for context-aware assistances, the aim of this work is to apply the aforementioned architecture and develop an assistance which enables implicit interaction with an augmented human during the completion of manual tasks. Therefore, based on the underlying related work, we want to enhance the assistance approach presented by [24] due to the incorporation of knowledge derived by various sensors as well as the consideration of social interaction guidelines for the development of additional context-aware features. Therefore, the gap between context-awareness and process-oriented assistances for manual assembly use-cases is addressed):
presenting, using the display, a computer-generated environment including a real-world environment and a virtual agent with an affinity for one or more characteristics of the real-world environment (see Lampen figure 4-5 and page 99, lines 31-37 where Furthermore, to display an augmented human on the HoloLens an Universal Windows Platform application was created. The scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones; page 100 paragraph 4.2 Visibility Control. A controllable parameter that can enhance the effect of the assistance, is the visibility of the task performed by the augmented human … If the avatar is in the personal space of the user (≤1.20 m), only the avatar’s arms are visualized, whereas within the intimate space (≤0.46 m), only the hands are displayed; page 98, paragraph 4 where based on the underlying related work, we want to enhance the assistance approach presented by [24] due to the incorporation of knowledge derived by various sensors as well as the consideration of social interaction guidelines for the development of additional context-aware features. Therefore, the gap between context-awareness and process-oriented assistances for manual assembly use-cases is addressed; page 95, lines 23-26 where The presented approach consists of basic classes, here: context entities, characteristics, here: context parameters, and relations between individuals, here: context relations (see Fig. 2). A typical context entity could be the user itself, a sensor or a component needed for a given task.; page 101 lines 5-7 Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists.; page 101 lines 16-19 where To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity; page 102 line 1-3 where Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters),
wherein the one or more characteristics include at least one of lighting of the real-world environment or temperature of the real-world environment (Lampen is silent on this limitation; see the discussion of Gorur Sheshagiri and Arora below);
while presenting the computer-generated environment, detecting, using the one or more input devices, a first characteristic of the one or more characteristics of the real-world environment (see Lampen Abstract ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features; paragraph 4.2 page 101, lines 5-7 where Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. And page 101 lines 16-19 where pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)).; page 99, lines 1-10 where Context Acquisition. Considering the stated architecture of the context acquisition, two different types of sensors are exemplary connected within the context-aware augmented human assistance, namely the Microsoft Kinect 2 and the Microsoft HoloLens. Therefore, on the one hand, area observations enabled by the depth sensor data of the different Kinects [23] and on the other hand, positional and rotational tracking of the user by the inertial measurement unit data of the HoloLens are available. More precise, the related C# and Unity CIUs supply integrated context of the global position and rotation of the user within the Unity scene of the context model and furthermore, integrated context of changes in specified point of interests (POIs) within the context-model; and page 99, lines 18-21 where Similarly, for the position and rotation of the user a context parameter was added to the HoloLens sensor entity, as well as a context parameter of the related view frustum with the frustum quantities of the HoloLens); and
causing the virtual agent to perform a first action in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including a first affinity for the first characteristic (see Lampen figures 4-5, page 100, lines 3-7 where As an output of the states the human simulation data, comprising the updated pose of the avatar and the manipulated digital components, are further incorporated into the scene model and send to the subscribed HoloLens. By utilizing the functionality of context invokers, context-aware features can be realized. Page 101, lines 16-19 pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). Page 102, lines 1-3 Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters.).
Lampen does not specifically use the term “affinity”. Applicant’s disclosure appears to use the term “affinity” to describe a desired characteristic as opposed to an undesired “aversion” characteristic (see paragraph 0033 of Applicant’s original disclosure). As best understood by Examiner, Lampen’s disclosure of the virtual agent assisting to achieve proper/timely progress of a real environment task being performed by a real user and characterizing the real environment is considered to correspond to “affinity” of the virtual agent (see Lampen figures 1-2 and page 95, line 19-page 96, line 4 where, for example, the presented approach consists of basic classes, here: context entities, characteristics, here: context parameters, and relations between individuals, here: context relations (see Fig. 2). A typical context entity could be the user itself, a sensor or a component needed for a given task. To further specify and describe a context entity, it can have an arbitrary amount of context parameters attached as attributes. According to the context atoms by [3], it incorporates a name, its value, the source, which detected its current value, and the time of the last detection.) Specifically, one of ordinary skill in the art would have readily recognized, without inventive inspiration, that context parameters and relationships programmed and attributed to a virtual agent to facilitate time savings and reduced errors would correspond to affinity of a virtual agent as required by the claim language.
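For illustration only, the context model cited above (entities carrying context parameters, i.e., context atoms with a name, value, source, and detection time, linked by context relations) might be sketched as follows; all class and field names are hypothetical and are not drawn from Lampen's implementation.

```python
# Minimal sketch of the cited context model (cf. Lampen Fig. 2): entities
# carry context parameters ("context atoms"), and relations link entities.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextParameter:
    name: str              # e.g., "user_position"
    value: object          # last detected value
    source: str            # sensor that detected the value, e.g., "HoloLens IMU"
    detected_at: datetime  # time of the last detection

@dataclass
class ContextEntity:
    name: str                                            # e.g., "user", "Kinect 2"
    parameters: dict[str, ContextParameter] = field(default_factory=dict)

    def update(self, p: ContextParameter) -> None:
        self.parameters[p.name] = p                      # overwrite the stale atom

@dataclass
class ContextRelation:
    subject: ContextEntity   # e.g., the user entity
    predicate: str           # e.g., "observes"
    obj: ContextEntity       # e.g., a component needed for a given task
```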
Lampen is silent regarding wherein the one or more characteristics include at least one of lighting or temperature. In the same field of endeavor, Gorur Sheshagiri teaches that a virtual assistant can provide warnings regarding real-world hazards such as obstacles, blind spots, mismatches between real and virtual objects, hot or cold objects, electrical equipment, etc. (see Gorur Sheshagiri paragraphs 0131, 0139). Further, Arora teaches AR/MR systems that allow virtual characters to toggle a light switch in a real environment, causing a real-world light bulb to turn on or off (see Arora paragraph 0006). One of ordinary skill in the art would have been motivated to modify Lampen to sense lighting or temperature characteristics of the real-world environment so as to determine whether lighting conditions are sufficient to safely perform a task, or to determine a hazardous temperature condition and provide warnings and alerts for avoiding it. Incorporating the teachings of Gorur Sheshagiri and Arora would result in a virtual assistant capable of moving to toggle a light switch in the real environment, or of providing alerts directing a user toward or away from a lighting or temperature condition that could help or hinder completion of a task, corresponding to an affinity for avoiding hazardous conditions that interfere with completing the task in a timely, error-free manner.
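For illustration of the combination rationale only, a minimal sketch, with assumed thresholds and hypothetical function and action names, of how a sensed lighting or temperature characteristic could select a virtual agent action:

```python
# Hedged sketch of the proposed Lampen/Gorur Sheshagiri/Arora combination:
# a sensed lighting or temperature characteristic of the real-world
# environment drives the virtual agent's action. Thresholds are illustrative.
def select_agent_action(ambient_lux: float, surface_temp_c: float) -> str:
    if ambient_lux < 100.0:            # insufficient light to safely perform the task
        return "toggle_light_switch"           # cf. Arora paragraph 0006
    if surface_temp_c > 50.0:          # hazardous hot object
        return "warn_and_guide_user_away"      # cf. Gorur Sheshagiri paragraph 0131
    return "continue_task_demonstration"
```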
Consider claim 2, Lampen as modified by Gorur Sheshagiri and Arora teaches all the features of claim 1 and further teaches wherein the first action includes one of performing an activity, orienting the virtual agent or a virtual object in the computer-generated environment, placing the virtual object in the computer-generated environment, movement of the virtual agent or modifying the movement of the virtual agent, or dwelling in the computer-generated environment (see Lampen page 99, lines 33-37 where scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones and page 101, line 1-23 where Progress Control. The conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. A basic chapter functionality is enabled by the evaluation of the distance between the avatar’s end position after locomotion tasks and the user’s position. For manual tasks with an environment interaction, the provided final context of the observations of the POIs triggers the subsequent movement of the augmented human. Therefore, a more sophisticated chapter functionality besides the basic implementation is integrated. Attention Control. An attention control feature is added, with regard to the limited field of view (FOV) of the AR device. To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked).
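For illustration of the cited attention-control trigger (pointing movements when the FOV of the HMD is not superimposed with the POI), a minimal geometric sketch; the field-of-view value and function names are assumptions, not taken from Lampen:

```python
# Sketch: trigger a pointing gesture when the point of interest lies outside
# the HMD field of view. Angles are in degrees; the FOV value is assumed.
def needs_attention_guidance(gaze_yaw: float, poi_bearing: float,
                             fov: float = 52.0) -> bool:
    # angular offset between gaze direction and POI bearing, wrapped to [0, 180]
    offset = abs((gaze_yaw - poi_bearing + 180.0) % 360.0 - 180.0)
    return offset > fov / 2.0  # POI outside the FOV: invoke attention control
```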
Consider claim 3, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 2, and further teaches wherein the movement of the virtual agent in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including the first affinity for the first characteristic comprises moving toward or in one or more first regions of the computer-generated environment that the virtual agent favors and moving from or avoiding one or more second regions of the computer-generated environment that the virtual agent disfavors (see Lampen page 101, lines 16-19, where the virtual agent performs a pointing task upon identifying that the user is looking in the wrong direction).
Consider claim 4, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 2 and further teaches wherein modifying the movement of the virtual agent in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including the first affinity for the first characteristic comprises moving toward or in one or more first regions of the computer-generated environment that the virtual agent favors and moving from or avoiding one or more second regions of the computer-generated environment that the virtual agent disfavors (see Lampen page 101, lines 16-19, where the virtual agent performs a pointing task upon identifying that the user is looking in the wrong direction).
Consider claim 5, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 2 and further teaches wherein the movement of the virtual agent in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including the first affinity for the first characteristic comprises creating a navigation plan for the virtual agent to increase time or distance of the navigation plan within one or more first regions of the computer-generated environment that the virtual agent favors (see Lampen page 101 where conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists) and
to decrease time or distance of the navigation plan within one or more second regions of the computer-generated environment that the virtual agent disfavors (see Lampen page 101 where To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked.).
Consider claim 6, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 2 and further teaches wherein modifying the movement of the virtual agent in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including the first affinity for the first characteristic comprises modifying a navigation plan for the virtual agent to increase time or distance of the navigation plan within one or more first regions of the computer-generated environment that the virtual agent favors (see Lampen page 101 where conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists) and
to decrease time or distance of the navigation plan within one or more second regions of the computer-generated environment that the virtual agent disfavors (see Lampen page 101 where To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked.).
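For illustration of the navigation-plan weighting recited in claims 5 and 6, a minimal sketch using a least-cost grid search; the grid representation, the weight values, and the use of Dijkstra's algorithm are assumptions for illustration, not Lampen's implementation:

```python
# Sketch: traversal cost is discounted in favored regions (<1.0) and inflated
# in disfavored regions (>1.0), so a least-cost plan increases time/distance
# within favored regions and decreases it within disfavored ones.
import heapq

def plan_path(weights, start, goal):
    """weights[r][c]: cost of entering cell (r, c); <1.0 favored, >1.0 disfavored."""
    rows, cols = len(weights), len(weights[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:          # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]
```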
Consider claim 7, Lampen as modified by Gorur Sheshagiri and Arora teaches an electronic device comprising: one or more processors (see Lampen page 95, lines 33-40 where suitable middleware technology and processing mechanisms are disclosed);
memory (Examiner takes Official Notice that software programming requires memory to store and execute); and
one or more programs stored in the memory (see Lampen page 94, line 10-12 where general encapsulated software architecture for the acquisition and management of context through the interpretation of sensor data is mandatory. Examiner takes Official Notice that software programming requires memory to store and execute.) and configured to be executed by the one or more processors, the one or more programs including instructions for performing operations including:
presenting, using a display, a computer-generated environment including a real-world environment and a virtual agent with an affinity for one or more characteristics of the real-world environment (see Lampen figure 4-5 and page 99, lines 31-37 where Furthermore, to display an augmented human on the HoloLens an Universal Windows Platform application was created. The scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones; page 100 paragraph 4.2 Visibility Control. A controllable parameter that can enhance the effect of the assistance, is the visibility of the task performed by the augmented human … If the avatar is in the personal space of the user (≤1.20 m), only the avatar’s arms are visualized, whereas within the intimate space (≤0.46 m), only the hands are displayed; page 98, paragraph 4 where based on the underlying related work, we want to enhance the assistance approach presented by [24] due to the incorporation of knowledge derived by various sensors as well as the consideration of social interaction guidelines for the development of additional context-aware features. Therefore, the gap between context-awareness and process-oriented assistances for manual assembly use-cases is addressed; page 95, lines 23-26 where The presented approach consists of basic classes, here: context entities, characteristics, here: context parameters, and relations between individuals, here: context relations (see Fig. 2). A typical context entity could be the user itself, a sensor or a component needed for a given task.; page 101 lines 5-7 Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists.; page 101 lines 16-19 where To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity; page 102 line 1-3 where Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters);
wherein the one or more characteristics include at least one of lighting of the real-world environment or temperature of the real-world environment (see Gorur Sheshagiri paragraph 0131, 0139 and Arora paragraph 0006);
while presenting the computer-generated environment, detecting, using one or more input devices, a first characteristic of the one or more characteristics of the real-world environment (see Lampen Abstract ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features; paragraph 4.2 page 101, lines 5-7 where Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. And page 101 lines 16-19 where pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)).; page 99, lines 1-10 where Context Acquisition. Considering the stated architecture of the context acquisition, two different types of sensors are exemplary connected within the context-aware augmented human assistance, namely the Microsoft Kinect 2 and the Microsoft HoloLens. Therefore, on the one hand, area observations enabled by the depth sensor data of the different Kinects [23] and on the other hand, positional and rotational tracking of the user by the inertial measurement unit data of the HoloLens are available. More precise, the related C# and Unity CIUs supply integrated context of the global position and rotation of the user within the Unity scene of the context model and furthermore, integrated context of changes in specified point of interests (POIs) within the context-model; and page 99, lines 18-21 where Similarly, for the position and rotation of the user a context parameter was added to the HoloLens sensor entity, as well as a context parameter of the related view frustum with the frustum quantities of the HoloLens); and
causing the virtual agent to perform a first action in accordance with the first characteristic of the real-world environment and in accordance with the affinity for the one or more characteristics including a first affinity for the first characteristic (see Lampen figures 4-5, page 100, lines 3-7 where As an output of the states the human simulation data, comprising the updated pose of the avatar and the manipulated digital components, are further incorporated into the scene model and send to the subscribed HoloLens. By utilizing the functionality of context invokers, context-aware features can be realized. Page 101, lines 16-19 pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). Page 102, lines 1-3 Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters.).
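For illustration of the cited visibility-control parameter (Lampen, Sect. 4.2: within the personal space of ≤1.20 m only the avatar's arms are visualized, and within the intimate space of ≤0.46 m only the hands are displayed), a minimal sketch; the function name is hypothetical:

```python
# Sketch of the cited proxemics-based visibility rule for the augmented human.
def visible_avatar_parts(distance_m: float) -> str:
    if distance_m <= 0.46:   # intimate space: only the hands are displayed
        return "hands"
    if distance_m <= 1.20:   # personal space: only the arms are visualized
        return "arms"
    return "full_body"       # otherwise the whole avatar remains visible
```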
Consider claim 8, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the operations further include: modifying an appearance of a representation of the real-world environment in response to the first action in accordance with the first characteristic (see Lampen figures 5a-5c, page 99, lines 33-37 where scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones and page 101, line 1-23 where Progress Control. The conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. A basic chapter functionality is enabled by the evaluation of the distance between the avatar’s end position after locomotion tasks and the user’s position. For manual tasks with an environment interaction, the provided final context of the observations of the POIs triggers the subsequent movement of the augmented human. Therefore, a more sophisticated chapter functionality besides the basic implementation is integrated. Attention Control. An attention control feature is added, with regard to the limited field of view (FOV) of the AR device. To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked, where, for example, a virtual agent may point to minimize searching movements when the FOV of the HMD is not superimposed with a POI).
Consider claim 9, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the operations further include: detecting a change in the first characteristic; and in response to detecting the change in the first characteristic, causing the virtual agent to perform a second action different from the first action in accordance with the first characteristic (see Lampen figures 5a-5c, page 99, lines 33-37 where scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones and page 101, line 1-23 where Progress Control. The conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. A basic chapter functionality is enabled by the evaluation of the distance between the avatar’s end position after locomotion tasks and the user’s position. For manual tasks with an environment interaction, the provided final context of the observations of the POIs triggers the subsequent movement of the augmented human. Therefore, a more sophisticated chapter functionality besides the basic implementation is integrated. Attention Control. An attention control feature is added, with regard to the limited field of view (FOV) of the AR device. To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked, where, for example, if a user successfully completes a task, a thumbs-up gesture may provide the user with feedback).
Consider claim 10, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 9 and further teaches wherein the operations further include: in response to detecting the change in the first characteristic: in accordance with a determination that the change in the first characteristic is temporary, continuing performing the first action in accordance with the first characteristic without performing the second action (see Lampen figures 5a-5c, page 99, lines 33-37 where scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones and page 101, line 1-23 where Progress Control. The conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. A basic chapter functionality is enabled by the evaluation of the distance between the avatar’s end position after locomotion tasks and the user’s position. For manual tasks with an environment interaction, the provided final context of the observations of the POIs triggers the subsequent movement of the augmented human. Therefore, a more sophisticated chapter functionality besides the basic implementation is integrated. Attention Control. An attention control feature is added, with regard to the limited field of view (FOV) of the AR device. To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked, where, for example, if a user looks in the wrong direction, the virtual agent provides pointing movements).
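For illustration of the behavior recited in claims 9-10 (a second action upon a detected change, unless the change is determined to be temporary), a minimal sketch; the persistence threshold is an assumed parameter for illustration:

```python
# Sketch: a detected change in the first characteristic triggers the second
# action only if the change persists; a change deemed temporary leaves the
# first action running. The 2-second threshold is illustrative only.
PERSISTENCE_THRESHOLD_S = 2.0

def action_on_change(change_duration_s: float,
                     first_action: str, second_action: str) -> str:
    if change_duration_s < PERSISTENCE_THRESHOLD_S:  # change deemed temporary
        return first_action                          # continue the first action
    return second_action                             # persistent change: switch
```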
Consider claim 11, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the first action includes using a virtual object or taking a virtual action in accordance with the first affinity for the first characteristic to remedy an aversion to the first characteristic (see Lampen page 99, lines 33-37 where scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones and page 101, line 1-23 where Progress Control. The conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. A basic chapter functionality is enabled by the evaluation of the distance between the avatar’s end position after locomotion tasks and the user’s position. For manual tasks with an environment interaction, the provided final context of the observations of the POIs triggers the subsequent movement of the augmented human. Therefore, a more sophisticated chapter functionality besides the basic implementation is integrated. Attention Control. An attention control feature is added, with regard to the limited field of view (FOV) of the AR device. To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)). During manual tasks with an environment interaction, attention guidance is enabled, if a manipulation of the environment at a wrong POI is detected by the Kinects (see Fig. 5(b)). Therefore, if a lack of orientation is recognized based on the aforementioned conditions, the attention control feature is invoked. Where for example virtual agent waits so as to avoid loss of information or missing important actions).
Consider claim 12, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the operations further include: presenting, using the display, the computer-generated environment including a second virtual agent; and causing the second virtual agent to perform a second action in accordance with the first characteristic, different from the first action (see Lampen figure 4 and page 100 where, during virtual agent instructions, If the avatar is in the personal space of the user (≤1.20 m), only the avatar’s arms are visualized, whereas within the intimate space (≤0.46 m), only the hands are displayed, where a second virtual agent may correspond to just the arms or hands; and page 101, line 27-page 102, line 3 where manual tasks with its subdivided chapters is important. As a result of the environment manipulation information derived by Kinects and the knowledge of the predefined task sequence the task completion condition can be evaluated. Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters, where a second action may correspond to a subsequent task in the sequence).
Consider claim 13, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the first action comprises movement in computer-generated environment, and wherein an amount of the movement, speed of the movement, or trajectory of the movement is in accordance with the first characteristic and of the first affinity for the first characteristic (see Lampen page 101 where conformance between the speed of the assistance and the individual character of the user performing a manual task, considering mainly the speed, has an important impact on the assistance’s outcome [13]. If the user is spatially or temporally far behind the augmented human, the possibility exists that the user misses important actions. Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists).
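For illustration of claim 13 as read on the cited progress control (the augmented human waits if the eventuality of information loss exists), a minimal sketch of lag-dependent speed modulation; the lag thresholds and function name are assumptions:

```python
# Sketch: the agent's movement speed is modulated by how far the user lags
# behind, so the augmented human waits when information loss is possible.
def agent_speed(base_speed_m_s: float, user_lag_m: float) -> float:
    if user_lag_m > 3.0:                 # user far behind: wait (speed zero)
        return 0.0
    if user_lag_m > 1.5:                 # moderately behind: slow down
        return base_speed_m_s * 0.5
    return base_speed_m_s                # user keeping pace: nominal speed
```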
Consider claim 14, Lampen as modified by Gorur Sheshagiri and Arora teaches all the limitations of claim 7 and further teaches wherein the operations further include: capturing portions of the real-world environment using the one or more input devices, and wherein presenting, using the display, the computer-generated environment including the real-world environment and the virtual agent with the affinity for one or more characteristics of the real-world environment includes presenting the captured portions of the real-world environment (see Lampen page 93, line 27-33 where more complex approaches, integrating algorithms and sensors for the utilization of additional context information and the associated adaption of the presented information. Within different work [5,23] system designs are presented, integrating motion recognition techniques to track body movements and thus, enabling implicit interaction with the systems by utilizing data from camera sensors. Thereby, context information with regard to the current task are concluded and an adaptive information presentation is ensured.).
Consider claim 15, Lampen as modified by Gorur Sheshagiri and Arora teaches a non-transitory computer readable storage medium storing one or more programs (Examiner takes Official Notice that software programming/execution requires a computer readable storage medium to store and execute), the one or more programs comprising instructions, which when executed by one or more processors of an electronic device (see Lampen page 94, line 10-12 where general encapsulated software architecture for the acquisition and management of context through the interpretation of sensor data is mandatory. Examiner takes Official Notice that software programming/execution requires a computer readable storage medium to store and execute.), cause the electronic device to perform operations including:
presenting, using a display, a computer-generated environment including a real-world environment and a virtual agent with an affinity for one or more characteristics of the real-world environment (see Lampen figure 4-5 and page 99, lines 31-37 where Furthermore, to display an augmented human on the HoloLens an Universal Windows Platform application was created. The scene contains the same digital human and digital component objects as the assistance model. Using initially Vuforia 8.3.8 the position and rotation of an image target, are placed correctly in the real world by updating the AR mapping entity of the context model and therefore, the digital components overlap the real ones; page 100 paragraph 4.2 Visibility Control. A controllable parameter that can enhance the effect of the assistance, is the visibility of the task performed by the augmented human … If the avatar is in the personal space of the user (≤1.20 m), only the avatar’s arms are visualized, whereas within the intimate space (≤0.46 m), only the hands are displayed; page 98, paragraph 4 where based on the underlying related work, we want to enhance the assistance approach presented by [24] due to the incorporation of knowledge derived by various sensors as well as the consideration of social interaction guidelines for the development of additional context-aware features. Therefore, the gap between context-awareness and process-oriented assistances for manual assembly use-cases is addressed; page 95, lines 23-26 where The presented approach consists of basic classes, here: context entities, characteristics, here: context parameters, and relations between individuals, here: context relations (see Fig. 2). A typical context entity could be the user itself, a sensor or a component needed for a given task.; page 101 lines 5-7 Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists.; page 101 lines 16-19 where To minimize searching movements, occurring because the FOV of the HMD might not be superimposed with the POI, pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity; page 102 line 1-3 where Via a thumbs-up gesture of the augmented human (see Fig. 5(c)) the user is aware of the completion of particular chapters);
while presenting the computer-generated environment, detecting, using one or more input devices, a first characteristic of the one or more characteristics of the real-world environment (see Lampen Abstract ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features; paragraph 4.2 page 101, lines 5-7 where Therefore, the context-aware feature of progress control is implemented, whereby the augmented human waits if the eventuality of information loss exists. And page 101 lines 16-19 where pointing movements of the augmented human are implemented. For one thing, the attention guidance is triggered during walking tasks, when the user searches the augmented human and thus looks at a wrong direction, inferred by the final context of the HoloLens sensor entity (see Fig. 5(a)).; page 99, lines 1-10 where Context Acquisition. Considering the stated architecture of the context acquisition, two different types of sensors are exemplary connected within the context-aware augmented human assistance, namely the Microsoft Kinect 2 and the Microsoft HoloLens. Therefore, on the one hand, area observations enabled by the depth sensor data of the different Kinects [23] and on the other hand, positional and rotational tracking of the user by the inertial measurement unit data of the HoloLens are available. More precise, the related C# and Unity CIUs supply integrated context of the global position and rotation of the user within the Unity scene of the context model and furthermore, integrated context of changes in specified point of interests (POIs) within the context-model; and page 99, lines 18-21 where Similarly, for the position and rotation of the user a context parameter was added to the HoloLens sensor entity, as well as a context parameter of the related view frustum with the frustum quantities of the HoloLens); and