DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in REPUBLIC OF KOREA on 03/30/2023. It is noted, however, that applicant has not filed a certified copy of the KR10-2023-0041614 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 4, 6-9, 12, and 14-16 is/are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Nel et al. (US 11545046 B2), hereinafter Nel.
Regarding claim 1, Nel teaches an apparatus for analyzing efficiency of virtual task performance of a user interacting with eXtended Reality (XR) (“the system will evaluate the user success of the routine and log that performance data” – Col 15, Lines 18-19. [NOTE: Nel further discloses that the virtual training tasks are provided by VR/AR devices, which are subsets of XR: “A computer-implemented method of providing virtual reality (VR) or Augmented Reality (AR) training” – Abstract.]), comprising: memory in which at least one program is recorded (“a memory, and computer software program code, stored on the memory” – Col 4, Lines 37-38); and a processor for executing the program (“executed by the processor” – Col 4, Line 38), wherein the program performs generating user interaction feature information from sensor information of a virtual reality (VR) device (“Some of the sensors may, for example, be built into the AR/VR headset, as eye-tracking sensors, microphones, etc.” – Col 3, Lines 52-57. [NOTE: the biometric data is gathered from the user's interaction with the virtual training.]), calculating quality of experience of the user as values of multiple experience indices based on the feature information by applying a machine-learning model (“a machine learning model is trained to classify biometric signals into signals indicative of cognitive mental state metrics relevant to learning” – Col 9, Lines 15-17. [NOTE: cognitive mental state metrics refers to attention, concentration, and anxiety, which are examples of experience indices]), and evaluating effectiveness of a VR experience of the user based on a result of mapping the values of the multiple experience indices to previously generated metrics (“In some embodiments, the database also stores, or aggregates, information on an individual user's previous use of the system” – Col 5, Lines 6-8. [NOTE: a user's previous use of the system is implied to store information such as the values of the cognitive mental state metrics.
When the user performs the training again, the new cognitive mental state metric values calculated from the obtained biometric information update the previous set of metrics in order to determine whether improvements to the virtual training are needed, Col 21, Lines 52-57]).
Regarding claim 9, the claim describes a method that performs the same function as claim 1. Therefore, method claim 9 corresponds to the apparatus disclosed in claim 1 and is rejected for the same reasons of anticipation as set forth above.
Regarding claim 4, Nel teaches the apparatus of claim 1. Nel further teaches wherein the experience indices include at least one of a degree of concentration, a degree of fatigue, a degree of interest, or a degree of arousal, or a combination thereof (“FIG. 3 illustrates an example in which the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level” – Col 6, Lines 13-16. [NOTE: Nel describes that the biological sensor readings collect user input data, which is processed by a machine learning algorithm that calculates a quantitative value indicating an individual's level of attention/concentration, Col 9]).
Regarding claim 12, the claim describes a method that performs the same function as claim 4. Therefore, method claim 12 corresponds to the apparatus disclosed in claim 4 and is rejected for the same reasons of anticipation as set forth above.
Regarding claim 6, Nel teaches the apparatus of claim 1. Nel further teaches wherein, when evaluating the effectiveness, the program generates the metrics based on an interrelationship between the experience indices and learning cognition attributes of the user (“Associations are determined between the biometric data and psychological/neurological factors related to learning, such as a cognitive load, attention, anxiety, and motivation” – Abstract. [NOTE: From the relationship between the experience indices and learning cognition attributes, the machine learning model determines what part of the training needs to be adjusted (such as difficulty). As disclosed by Nel, if the machine learning model evaluates, given the user interaction information, that the user's performance suffers due to a high level of cognitive load, an adjustment is applied to the training to lower the complexity of the training, Col 5, Lines 38-44]).
Regarding claim 14, the claim describes a method that performs the same function as claim 6. Therefore, method claim 14 corresponds to the apparatus disclosed in claim 6 and is rejected for the same reasons of anticipation as set forth above.
Regarding claim 7, Nel teaches the apparatus of claim 1. Nel further teaches wherein the program further performs deriving at least one treatment based on a result of evaluation of the effectiveness of the VR experience (“a user may currently be in a peak-learning mode but a rise in one or more of the cognitive mental state metrics may have trends that suggest that the user's performance will degrade in the near future. In this situation, reducing the complexity of the education session at a point in time before the peak-learning mode ends may be a useful strategy.” – Col 5, Lines 38-44. [NOTE: the machine learning algorithm evaluates that the user is struggling with the training and will adjust the complexity as a treatment.]).
Regarding claim 15, the claim describes a method that performs the same function as claim 7. Therefore, method claim 15 corresponds to the apparatus disclosed in claim 7 and is rejected for the same reasons of anticipation as set forth above.
Regarding claim 8, Nel teaches the apparatus of claim 1. Nel further teaches wherein the VR device includes at least one of XR glasses, an eye-tracking device, or a haptic glove, or a combination thereof (“Some of the sensors may, for example, be built into the AR/VR headset, as eye-tracking sensors, microphones, etc.” – Col 3, Lines 55-57. [NOTE: Nel specifically discloses a VR/AR headset with eye-tracking capabilities.]), provides virtual education and training simulation services based on virtual reality (“A computer-implemented method of providing virtual reality (VR) or Augmented Reality (AR) training” – Abstract. [NOTE: Nel further discloses that the training could be an “educational training session” – Col 1, Line 41.]), and includes a sensor for acquiring multimodal interaction information of at least one of a motion of the user, eye gaze of the user, or a sense of touch of the user, or a combination thereof (“Other examples of biometric data include eye tracking data, heart rate measurements, respiration, motion tracking, voice analysis, posture analysis, facial analysis, and galvanic skin response.” – Col 3, Lines 49-52).
Regarding claim 16, the claim describes a method that performs the same function as claim 8. Therefore, method claim 16 corresponds to the apparatus disclosed in claim 8 and is rejected for the same reasons of anticipation as set forth above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 10, 17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nel and Orr et al. (US 11523773 B2), hereinafter Orr.
Regarding claim 2, Nel teaches the apparatus of claim 1. Nel does not teach wherein, when generating the user interaction feature information, the program constructs a database by generating the interaction feature information based on spatial and time-series data. However, Orr teaches wherein, when generating the user interaction feature information, the program constructs a database (“healthcare record server comprises a database for storing electronic health records” – Claim 4. [NOTE: Examiner has interpreted that a database can refer to any organized data stored in some form of memory. Healthcare records, as disclosed by Orr, include the biometric data gathered from the user performing the virtual task and are stored within the system memory for further processing.]) by generating the interaction feature information based on spatial data (“In various embodiments, motion data is collected for a user while the user performs a training protocol in a virtual environment” – Abstract. [NOTE: Examiner has interpreted that “spatial data” refers to interaction feature information obtained through the user's motion.]). [NOTE: After the combination, the collection of spatial data as taught by Orr can be combined with the collection of “time-series” data as taught by Nel: “This model is iterated over time in a continuous or non-continuous fashion” – Col 12, Lines 21-24 (the model must continuously take input of user interaction information for it to continue evaluating as described). Therefore, this combination teaches wherein, when generating the user interaction feature information, the program constructs a database by generating the interaction feature information based on spatial and time-series data.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Nel to incorporate the teachings of Orr to collect spatial and time-series data and construct a database containing the user's interaction feature information. Continuously collecting interaction information of the user during the virtual task would better reflect the user's experience, which would provide a more accurate calculation of effectiveness. Obtaining spatial interaction information can give insight into physiological behaviors that can help with determining the experience indices. Storing the interaction data in a database will then allow the user information to be referenced again for calculating effectiveness.
Regarding claim 10, the claim describes a method that performs the same function as claim 2. Therefore, method claim 10 corresponds to the apparatus disclosed in claim 2 and is rejected for the same reasons of obviousness as set forth above.
Regarding claim 17, Nel teaches an apparatus for analyzing efficiency of virtual task performance of a user interacting with eXtended Reality (XR) (“A computer-implemented method of providing virtual reality (VR) or Augmented Reality (AR) training” – Abstract), comprising: memory in which at least one program is recorded (“a memory, and computer software program code, stored on the memory” – Col 4, Lines 37-38); and a processor for executing the program (“executed by the processor” – Col 4, Line 38), wherein the program performs generating user interaction feature information from sensor information of a virtual reality (VR) device (“Some of the sensors may, for example, be built into the AR/VR headset, as eye-tracking sensors, microphones, etc.” – Col 3, Lines 52-57), calculating quality of experience of the user as values of multiple experience indices based on the feature information by applying a machine-learning model (“a machine learning model is trained to classify biometric signals into signals indicative of cognitive mental state metrics relevant to learning” – Col 9, Lines 15-17), generating metrics based on an interrelationship between the experience indices and learning cognition attributes of the user (“Associations are determined between the biometric data and psychological/neurological factors related to learning, such as a cognitive load, attention, anxiety, and motivation” – Abstract), mapping the values of the multiple experience indices to the metrics (Nel discloses a trained machine learning model to examine a current set of cognitive mental state metrics.
[NOTE: the biometric sensors that collect user interaction information are used to determine the user's state during the experience, such as focus, attention, and interest.]), evaluating an experience based on the metrics (“determined adjustments to the training session to maintain learning efficacy.” – Col 5, Lines 20-21. [NOTE: the user's experience is evaluated based on calculations using the user's interaction information as input to determine efficiency]), and deriving at least one treatment based on a result of evaluating effectiveness of virtual reality (“a user may currently be in a peak-learning mode but a rise in one or more of the cognitive mental state metrics may have trends that suggest that the user's performance will degrade in the near future. In this situation, reducing the complexity of the education session at a point in time before the peak-learning mode ends may be a useful strategy.” – Col 5, Lines 38-44). [NOTE: due to similar functional language, please refer to the notes written for the rejection of claim 1.] Nel does not teach constructing a feature information database by generating the interaction feature information based on spatial and time-series data. However, Orr teaches constructing a feature information database by generating the interaction feature information based on spatial data (“In various embodiments, motion data is collected for a user while the user performs a training protocol in a virtual environment” – Abstract). [NOTE: After the combination, the collection of spatial data as taught by Orr can be combined with the collection of “time-series” data as taught by Nel: “This model is iterated over time in a continuous or non-continuous fashion” – Col 12, Lines 21-24. Therefore, this combination teaches wherein, when generating the user interaction feature information, the program constructs a database by generating the interaction feature information based on spatial and time-series data.
Due to similar functional language, please refer to the notes written for the rejection of claim 2.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Nel to incorporate the teachings of Orr to collect spatial and time-series data and construct a database containing the user's interaction feature information. Continuously collecting interaction information of the user during the virtual task would better reflect the user's experience, which would provide a more accurate calculation of effectiveness. Obtaining spatial interaction information can give insight into physiological behaviors that can help with determining the experience indices. Storing the interaction data in a database will then allow the user information to be referenced again for calculating effectiveness.
Regarding claim 19, Nel further teaches wherein the experience indices include at least one of a degree of concentration, a degree of fatigue, a degree of interest, or a degree of arousal, or a combination thereof (“FIG. 3 illustrates an example in which the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level” – Col 6, Lines 13-16. [NOTE: Nel describes that the biological sensor readings collect user input data, which is processed by a machine learning algorithm that calculates a quantitative value indicating an individual's level of attention/concentration, Col 9]).
Claim(s) 3, 11, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nel, Orr, and Sundstrom et al. (US 12524977 B2), hereinafter Sundstrom.
Regarding claim 3, Nel in view of Orr teaches the apparatus of claim 2. Nel does not teach wherein multiple interaction modalities include a motion, eye gaze, and a sense of touch. However, Sundstrom teaches wherein multiple interaction modalities include a motion (“the hand tracking unit 244 is configured to track the position/location of one or more portions of the user's hands, and/or motions” – Col 12, Lines 31-33), eye gaze (“the eye tracking unit 243 is configured to track the position and movement of the user's gaze” – Col 12, Lines 8-40), and a sense of touch (“A person may sense and/or interact with a XR object using any one of their senses, including sight, sound, touch, taste, and smell” – Col 6, Lines 11-13). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Nel to incorporate the teachings of Sundstrom to include multiple modes of collecting user interaction data, such as motion, eye gaze, and a sense of touch. Collecting user interaction information via these methods will give the machine learning model a stronger representation of the physiological/behavioral elements that contribute to the experience indices.
Regarding claim 11, the claim describes a method that performs the same function as claim 3. Therefore, method claim 11 corresponds to the apparatus disclosed in claim 3 and is rejected for the same reasons of obviousness as set forth above.
Regarding claim 18, the claim describes an apparatus that performs the same function as claim 3. Therefore, apparatus claim 18 corresponds to the apparatus disclosed in claim 3 and is rejected for the same reasons of obviousness as set forth above.
Claim(s) 5 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nel and Sun et al. (US 20190244427 A1), hereinafter Sun.
Regarding claim 5, Nel teaches the apparatus of claim 1. Nel does not teach wherein the experience indices and the metrics are generated based on domain knowledge of a given task. However, Sun teaches wherein the experience indices and the metrics are generated based on domain knowledge of a given task (“monitoring aspects of the user's executing or performing a task in the particular domain and detect a user task efficiency.” – Par 43, Lines 3-5). [NOTE: After the combination, the experience indices and metrics calculated from the interaction information as taught by Nel could be generated with consideration of the criteria highlighted by the domain knowledge of a given virtual task. This combination would then teach wherein the experience indices and the metrics are generated based on domain knowledge of a given task.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Nel to incorporate the teachings of Sun to generate the experience indices and metrics based on domain knowledge of a given task. It is common practice to consider the domain knowledge of a given task as the criteria for determining the user's performance. Doing so would allow the calculation of the experience indices and effectiveness to accurately depict the user's experience.
Regarding claim 13, the claim describes a method that performs the same function as claim 5. Therefore, method claim 13 corresponds to the apparatus disclosed in claim 5 and is rejected for the same reasons of obviousness as set forth above.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nel, Orr, and Sun et al. (US 20190244427 A1), hereinafter Sun.
Regarding claim 20, Nel does not teach wherein the experience indices and the metrics are generated based on domain knowledge of a given task. However, Sun teaches wherein the experience indices and the metrics are generated based on domain knowledge of a given task (“monitoring aspects of the user's executing or performing a task in the particular domain and detect a user task efficiency.” – Par 43, Lines 3-5). [NOTE: After the combination, the experience indices and metrics calculated from the interaction information as taught by Nel could be generated with consideration of the criteria highlighted by the domain knowledge of a given virtual task. This combination would then teach wherein the experience indices and the metrics are generated based on domain knowledge of a given task.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Nel to incorporate the teachings of Sun to generate the experience indices and metrics based on domain knowledge of a given task. It is common practice to consider the domain knowledge of a given task as the criteria for determining the user's performance. Doing so would allow the calculation of the experience indices and effectiveness to accurately depict the user's experience.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Aimone et al. (US 20160077547 A1) teaches a wearable device with a bio-signal sensor and display to provide an interactive VR environment for a user. The user's interactions are evaluated and scored, followed by feedback to the user for improvement or adjustments to the VR environment. Pike et al. (US 20190392728 A1) teaches a VR training system that monitors a user's performance during a task. An evaluation criterion is defined which provides the basis for the evaluation of the user.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID V. NGUYEN, whose telephone number is (571) 272-6111. The examiner can normally be reached M-F, 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y. Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID VAN NGUYEN/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617