Prosecution Insights
Last updated: April 19, 2026
Application No. 17/821,554

INJECTING EMOTIONAL MODIFIER CONTENT IN MIXED REALITY SCENARIOS TO MIMIC REAL LIFE CONDITIONS

Non-Final OA §102 §103 §112

Filed: Aug 23, 2022
Examiner: PEREN, VINCENT ROBERT
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)
Grant Probability: 70% (Favorable)
Projected OA Rounds: 5-6
Projected Time to Grant: 2y 11m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% — above average (266 granted / 382 resolved; +7.6% vs TC avg)
Interview Lift: +20.2% — strong (allowance rate with vs. without interview, among resolved cases with interview)
Typical Timeline: 2y 11m avg prosecution; 15 applications currently pending
Career History: 397 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 26.0% (-14.0% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 382 resolved cases.
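As a sanity check, the headline figures above reduce to simple ratios over the examiner's 382 resolved cases. The sketch below reproduces that arithmetic; the with/without-interview split is hypothetical, since the page reports only the aggregate +20.2% lift, so treat it as an illustration of how such a lift is typically computed rather than as the dashboard's actual methodology.

```python
# Sanity check of the dashboard arithmetic. The with/without-interview
# split below is hypothetical; the page reports only the aggregate lift.

granted, resolved = 266, 382
career_allow_rate = granted / resolved        # 0.6963 -> displayed as 70%

# "+7.6% vs TC avg" implies a Tech Center average near 62%.
tc_avg_estimate = career_allow_rate - 0.076

# Interview lift: allow rate among resolved cases with an interview minus
# the rate among those without. Counts are hypothetical but consistent
# with the 266/382 totals and a roughly +20% lift.
with_granted, with_resolved = 100, 120        # hypothetical
without_granted, without_resolved = 166, 262  # hypothetical
interview_lift = with_granted / with_resolved - without_granted / without_resolved

print(f"career allow rate:  {career_allow_rate:.1%}")  # 69.6%
print(f"implied TC average: {tc_avg_estimate:.1%}")    # 62.0%
print(f"interview lift:     {interview_lift:+.1%}")    # +20.0%
```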

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Obligation Under 37 CFR 1.56 – Joint Inventors

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 23, 2025 has been entered.

Response to Amendment

Applicant's amendment filed on October 23, 2025 has been entered. Claims 1, 8 and 15 have been amended. Claims 5, 12 and 19 were canceled previously. No claims have been added. Thus, claims 1-4, 6-11, 13-18 and 20 are still pending in this application, with claims 1, 8 and 15 being independent.

Claim Objections

Claim 6 is objected to because of the following informalities: claim 6 (depends on claim 1) recites, "wherein the determining occurs prior to instantiating the mixed-reality experience" (emphasis added). However, claim 1 now recites "determining that the mental state" (line 10 of claim 1) and "determining that one of the emotions" (line 12 of claim 1), and, as such, it is unclear which instance of "determining" in claim 1 is being referenced by "the determining" in claim 6. Appropriate correction is required.

Claim 13 is objected to because of the following informalities: claim 13 (depends on claim 8) recites, "wherein the determining occurs prior to instantiating the mixed-reality experience" (emphasis added). However, claim 8 now recites "determining that the mental state" (line 15 of claim 8) and "determining that one of the emotions" (line 17 of claim 8), and, as such, it is unclear which instance of "determining" in claim 8 is being referenced by "the determining" in claim 13. Appropriate correction is required.

Claim 20 is objected to because of the following informalities: claim 20 (depends on claim 15) recites, "wherein the determining occurs prior to instantiating the mixed-reality experience" (emphasis added).
However, claim 15 now recites "determining that the mental state" (line 13 of claim 15) and "determining that one of the emotions" (line 15 of claim 15), and, as such, it is unclear which instance of "determining" in claim 15 is being referenced by "the determining" in claim 20. Appropriate correction is required.

Claim Rejections - 35 USC § 112(d)

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 7 is rejected under 35 U.S.C. 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Independent claim 1, upon which claim 7 indirectly depends, has been amended to recite the same limitations as dependent claim 7, verbatim. Thus, claim 7 fails to further limit independent claim 1 and/or intervening dependent claim 6. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4, 7, 8-9, 11, 15-16 and 18 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by NEL et al. (US 2020/0082735, hereinafter "NEL").

Regarding claim 8, NEL discloses a computer system (¶ [0279]: "A computing system") for mixed-reality emotional modification (¶ [0004]: "systems and methods for using biometric data as feedback to adapt a training session conducted by virtual reality or augmented reality." ¶ [0040]: "FIG. 1 is a diagram illustrating an example of a system to provide neuroadaptive virtual reality (VR) or augmented reality (AR) training."), the computer system comprising: one or more processors, one or more computer readable memories (¶ [0047]: "server 115 may include computer processors, a memory, internal communication buses, and external communication interfaces. In some embodiments, the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training.
As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules. Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation." ¶ [0279]: "A computing system or data processing system suitable for storing and/or executing program code will include at least one processor (e.g., a hardware processor) coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution." ¶ [0052]: "the controller 114 may include a processor, memory, and computer program instructions."), one or more mixed-reality devices (¶ [0040]: "a VR or AR head display 105, such as a VR headset or an AR headset."), one or more sensors (¶ [0042]: "the biometric data includes a measurement of the user's brainwaves using an electroencephalogram (EEG) headset. Other examples of biometric data include eye tracking data, heart rate measurements, respiration, motion tracking, voice analysis, posture analysis, facial analysis, and galvanic skin response. As illustrated in FIG. 1, the environment about the user may include one or more sensors to collect biometric data, such as one or more camera, microphones, heart monitors, etc. Some of the sensors may, for example, be built into the AR/VR headset, as eye-tracking sensors, microphones, etc. Other biometric sensors may be worn on a user. Still other sensor may be disposed in the general environment about the user, such as additional cameras or microphones. Individual sensors may, for example, transmit sensor data via a wired channel, wireless channel, network interface, etc."), one or more computer readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories (¶ [0047]: "server 115 may include computer processors, a memory, internal communication buses, and external communication interfaces. In some embodiments, the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules.
Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation.” ¶ [0052]: “the controller 114 may include a processor, memory, and computer program instructions.” ¶ [0279]: “A computing system or data processing system suitable for storing and/or executing program code will include at least one processor (e.g., a hardware processor) coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.”), wherein the computer system is capable of performing a method comprising: collecting biometric information on a user (e.g., ¶ [0041]: “A set of biometric data is collected from the user to determine cognitive state metrics for the user related to learning efficacy.” ¶ [0042]: “the biometric data includes a measurement of the user's brainwaves using an electroencephalogram (EEG) headset. Other examples of biometric data include eye tracking data, heart rate measurements, respiration, motion tracking, voice analysis, posture analysis, facial analysis, and galvanic skin response.” ) during a mixed reality session (e.g., ¶ [0047]: “a server 115 serves the content for the AR/VR training session. The server 115 may include computer processors, a memory, internal communication buses, and external communication interfaces. In some embodiments, the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules. Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation. In some embodiments, the database also stores, or aggregates, information on an individual user's previous use of the system and for uses by other participants. This historical data may, for example, be used to generate training data, as described below in more detail.”); identifying a mental state of the user (e.g., ¶ [0041]: “A set of biometric data is collected from the user to determine cognitive state metrics for the user related to learning efficacy.” ¶ [0041]: “Some examples of internal factors include the mood of the individual, state of mind, energy level, or health of the individual.”) during the mixed-reality session (e.g., ¶ [0047]: “the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. 
As one example a database 120 may be provided to support variations in the training given based on biometric data.” ¶ [0047]: “during a learning session, the VR learning program could access different pieces of content as a form of adaptation.” ¶ [0059]: “FIG. 4 is a flowchart of a general method in accordance with an embodiment. In block 405, the user signs in. In block 410 the biometric data of the user is monitored.” ¶ [0060]: “In block 420, an adaptive educational simulation is generated that is response to biometric data feedback. Thus, the cognitive mental state metrics are maintained within a range conducive to learning.”) by comparing (e.g., ¶ [0043]: “analyzed with respect to”) the biometric information (e.g., ¶ [0043]: “the raw biometric data”) against a default mental state (e.g., ¶ [0043]: “baseline data for the user.”) (¶ [0043]: “the raw biometric data typically doesn't directly provide feedback on learning efficacy. Further processing of each source of biometric data is desirable to convert the biometric data into a signal having attributes that are correlated or associated with a learning metric. As one example, the raw biometric data is ideally analyzed with respect to baseline data for the user.”) (¶ [0047]: “the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules. Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation. In some embodiments, the database also stores, or aggregates, information on an individual user's previous use of the system and for uses by other participants. This historical data may, for example, be used to generate training data, as described below in more detail.”¶ [0059]: “FIG. 4 is a flowchart of a general method in accordance with an embodiment. In block 405, the user signs in. In block 410 the biometric data of the user is monitored. In block 415, a training phase is initiated. One aspect of the training phase is that baseline data can be obtained to understand the user's response. This may include, for example, performing one or more surveys or tests to assess a user's biometric responses. For example, the tests may provide general psychometric data regarding how the user responds to different situations. Additionally, the tests may be selected to be similar to aspects of the training. Individual testing is useful because of the variations in individual behavior. For example, one individual may have high levels of background anxiety and stress in their life. Another individual may have low levels of background anxiety and stress in their life. 
There are a variety of other reasons why individuals may respond differently to a learning environment, including demographic factors like age, level of education, previous experience with VR training, etc.”), wherein the mental state comprises a plurality of emotions (e.g., ¶ [0051]: “an arbitrary number of different cognitive mental state metrics”; ¶ [0051]: “FIG. 3 illustrates an example in which the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level.” ¶ [0044]: “As an example, raw EEG data may be processed to generate a metric corresponding to a cognitive load, indicative of how “hard” the user is thinking based on the EEG data. However, a completer and more accurate picture of the overall cognitive mental state of the user is generated by including two or more different cognitive mental state metrics. One or more of these many be generated from other sources of non-EEG biometric data. For example, heart rate, respiration, voice analysis, and galvanic skin response may be useful to generate cognitive mental state metrics associated with anxiety.”); during the mixed-reality session (e.g., ¶ [0047]: “during a learning session,” ¶ [0063]: “during the educational session” ¶ [0114]: “while they execute the learning routine”), dynamically comparing the plurality of emotions comprising the mental state of the user (e.g., ¶ [0060]: “response to biometric data feedback.” ¶ [0062]: “the biometric data of the user is monitored during the education phase to generate feedback in the form of the cognitive state metrics.” ¶ [0114]: “Data is collected from the user while they execute the learning routine, and analyzed to determine attributes such as cognitive load or emotional engagement. Depending on how the user data performs relative to preset boundary conditions,”) against a plurality of emotions comprising an expected baseline (e.g., ¶ [0060]: “within a range conducive to learning.” ¶ [0063]: “decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds.” ¶ [0114]: “Data is collected from the user while they execute the learning routine, and analyzed to determine attributes such as cognitive load or emotional engagement. Depending on how the user data performs relative to preset boundary conditions, the lesson material may be modified in some way to improve performance,”) associated with a mixed-reality experience (e.g., ¶ [0060]: “educational simulation” ¶ [0114]: “the learning routine,” ¶ [0114]: “the lesson material”) (¶ [0060]: “In block 420, an adaptive educational simulation is generated that is response to biometric data feedback. Thus, the cognitive mental state metrics are maintained within a range conducive to learning.” ¶ [0061]: “FIG. 5 is a flow chart of a general method of using thresholds in the cognitive state metric(s) to make decisions to adapt the training. In block 505, a training phase is initiated to calibrate thresholds of cognitive mental state metrics. The training phase may, for example, identify a threshold anxiety level predictive of a desirability to either reduce or increase the complexity level or the pacing of the training. 
For example, if the anxiety level is below a first threshold, it may be predictive that the complexity of the training can be increased, Conversely, if the anxiety level is above a second threshold, it may be predictive that the complexity of the training should be reduced to maintain an effective learning experience for the user.” ¶ [0062]: “In block 510, the virtual reality education phase is initiated. (It would be understood that in an alternate embodiment, the education phase may use augmented reality). In block 515, the biometric data of the user is monitored during the education phase to generate feedback in the form of the cognitive state metrics. This monitoring may be performed on a periodic basis at a rate that is fast compared with human behavioral/cognitive responses to educational training, e.g., once per second.” ¶ [0063]: “The biometric data is used to provide feedback during the educational session to adapt the training. As previously discussed, this may include adaptation such as selecting less complex training session modules, implementing relaxation breaks, changing in pacing or sequence, etc. In decision block 520, the process ends when the training is completed. Otherwise, decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds. If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase. Otherwise in block 530, the education phase is continued without adapting it.”); responsive to determining that the mental state (¶ [0041]: “determine cognitive state metrics for the user related to learning efficacy.” ¶ [0048]: “analyze the cognitive mental state metrics and determine when the training should be adapted.”) falls within a threshold value of the expected baseline (¶ [0048]: “maintain the education session within an educational training zone having metrics compatible with effective learning.” ¶ [0048]: “maintain the cognitive mental state metrics within a desired range associated with healthy learning. There are many different ways this can be done. As one example a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds.” ¶ [0060]: “within a range conducive to learning.” ¶ [0063]: “decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds.”), initializing the mixed-reality experience (e.g., ¶ [0060]: “In block 420, an adaptive educational simulation is generated that is response to biometric data feedback.” ¶ [0062]: “In block 510, the virtual reality education phase is initiated.”) (¶ [0059]: “FIG. 4 is a flowchart of a general method in accordance with an embodiment. In block 405, the user signs in. In block 410 the biometric data of the user is monitored. In block 415, a training phase is initiated. One aspect of the training phase is that baseline data can be obtained to understand the user's response. This may include, for example, performing one or more surveys or tests to assess a user's biometric responses. For example, the tests may provide general psychometric data regarding how the user responds to different situations. Additionally, the tests may be selected to be similar to aspects of the training. Individual testing is useful because of the variations in individual behavior. For example, one individual may have high levels of background anxiety and stress in their life. Another individual may have low levels of background anxiety and stress in their life. 
There are a variety of other reasons why individuals may respond differently to a learning environment, including demographic factors like age, level of education, previous experience with VR training, etc.” ¶ [0060]: “In block 420, an adaptive educational simulation is generated that is response to biometric data feedback. Thus, the cognitive mental state metrics are maintained within a range conducive to learning.” NOTE: In other words, not only is the adaptive educational simulation continuously adapted during training based on the determined mental state of the user so as to maintain the expected baseline of a mental state that is conducive to learning, a version of the educational simulation is generated initially in response to the initial mental state of the user in order to obtain an initial educational simulation corresponding to the expected baseline of a mental state for effective learning. ¶ [0061]: “FIG. 5 is a flow chart of a general method of using thresholds in the cognitive state metric(s) to make decisions to adapt the training. In block 505, a training phase is initiated to calibrate thresholds of cognitive mental state metrics. The training phase may, for example, identify a threshold anxiety level predictive of a desirability to either reduce or increase the complexity level or the pacing of the training. For example, if the anxiety level is below a first threshold, it may be predictive that the complexity of the training can be increased, Conversely, if the anxiety level is above a second threshold, it may be predictive that the complexity of the training should be reduced to maintain an effective learning experience for the user.” ¶ [0062]: “In block 510, the virtual reality education phase is initiated. (It would be understood that in an alternate embodiment, the education phase may use augmented reality). In block 515, the biometric data of the user is monitored during the education phase to generate feedback in the form of the cognitive state metrics. This monitoring may be performed on a periodic basis at a rate that is fast compared with human behavioral/cognitive responses to educational training, e.g., once per second.” ¶ [0063]: “The biometric data is used to provide feedback during the educational session to adapt the training. As previously discussed, this may include adaptation such as selecting less complex training session modules, implementing relaxation breaks, changing in pacing or sequence, etc. In decision block 520, the process ends when the training is completed. Otherwise, decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds. If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase. Otherwise in block 530, the education phase is continued without adapting it.” NOTE: Again, based on an initially determined anxiety level of the user before the VR training experience is initiated, the initiated VR education phase will have an expected baseline (i.e., a mental state corresponding to effective learning) corresponding to the initially determined anxiety level of the user before the VR training experience is initiated.); responsive to determining that one of the emotions comprising the mental state deviates from a corresponding emotion comprising the expected baseline by a magnitude exceeding the threshold amount (¶ [0063]: “decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds. 
If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase.” ¶ [0049]: “a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds.”), selecting one or more virtual content elements associated with the deviating emotion (e.g., ¶ [0049]: “proactively adapts the training session to maintain the cognitive mental state metrics within a desired range”; ¶ [0050]: “a stress reduction technique could be inserted into the session, such as taking a break, playing a game, or doing a breathing exercise.” ¶ [0063]: “If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase.” ¶ [0063]: “this may include adaptation such as selecting less complex training session modules, implementing relaxation breaks, changing in pacing or sequence, etc.”) based on the magnitude (e.g., ¶ [0049]: “proactively adapts the training session to maintain the cognitive mental state metrics within a desired range” ) (¶ [0049]: “As an illustrative example, suppose a cognitive load metric is rising and that an anxiety level metric is also rising. To prevent the user from become over-stressed, the predictive engine 112 proactively adapts the training session to maintain the cognitive mental state metrics within a desired range associated with healthy learning. There are many different ways this can be done. As one example a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds. However, more generally, dynamic aspects, such a rate of change, can be considered. The overall pattern of changes to a set of cognitive mental state metrics can be considered. For example, a user may currently be in a peak-learning mode but a rise in one or more of the cognitive mental state metrics may have trends that suggest that the user's performance will degrade in the near future. In this situation, reducing the complexity of the education session at a point in time before the peak-learning mode ends may be a useful strategy.” ¶ [0050]: “Preventing a “spike” in anxiety/stress may be useful to maintain the overall learning efficiency over the entire learning session. The predictive engine 112 could, for example, have upper and lower threshold values selected with a margin below the absolute minimum and maximum values to provide a cushion for dealing with human response time. For example, suppose an “ideal” learning situation would be to maintain a cognitive load below 90% and an anxiety level below 70%. However, in practice, lower maximum thresholds might be chosen, such as maintaining a cognitive load below 80% and an anxiety level below 60% to reduce the possibility of spiking behavior. In any case, the predictive engine 112 may be implemented in different ways to have an algorithm that monitors the different cognitive mental state metrics and that proactively adapts the education session in response. This may be called “learning in the loop,” in the sense that the complexity of the educational content can be dynamically adapted, based on the biometric data, before the learning experience is substantially degraded. While a complexity level is one example of an adaptation, more generally other types of adaptation could also be performed. For example, a stress reduction technique could be inserted into the session, such as taking a break, playing a game, or doing a breathing exercise.” ¶ [0062]: “In block 510, the virtual reality education phase is initiated. 
(It would be understood that in an alternate embodiment, the education phase may use augmented reality). In block 515, the biometric data of the user is monitored during the education phase to generate feedback in the form of the cognitive state metrics. This monitoring may be performed on a periodic basis at a rate that is fast compared with human behavioral/cognitive responses to educational training, e.g., once per second." ¶ [0063]: "The biometric data is used to provide feedback during the educational session to adapt the training. As previously discussed, this may include adaptation such as selecting less complex training session modules, implementing relaxation breaks, changing in pacing or sequence, etc. In decision block 520, the process ends when the training is completed. Otherwise, decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds. If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase. Otherwise in block 530, the education phase is continued without adapting it."); and modifying the mixed-reality experience with the one or more selected virtual content elements (e.g., ¶ [0043]: "The set of cognitive mental state metrics can thus be used to determine how to adapt a VR/AR training session." ¶ [0048]: "a predictive engine 112 is included either in the biometric analyzer 110 or server 115 to analyze the cognitive mental state metrics and determine when the training should be adapted. In that sense, it functions as part of a training mode adapter to generating training mode adaption commands. As examples, the predictive engine may, for example, include rules, tables, or a trained machine learning model to examine a current set of cognitive mental state metrics and determined adjustments to the training session to maintain learning efficacy. In one embodiment, the predictive engine 112 is making predictions about the user's response to an educational session and determining training mode adaptations that may be required to maintain the education session within an educational training zone having metrics compatible with effective learning." ¶ [0047]: "server 115 may include computer processors, a memory, internal communication buses, and external communication interfaces. In some embodiments, the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules. Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation." ¶ [0049]: "As an illustrative example, suppose a cognitive load metric is rising and that an anxiety level metric is also rising. To prevent the user from become over-stressed, the predictive engine 112 proactively adapts the training session to maintain the cognitive mental state metrics within a desired range associated with healthy learning.
There are many different ways this can be done. As one example a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds. However, more generally, dynamic aspects, such a rate of change, can be considered. The overall pattern of changes to a set of cognitive mental state metrics can be considered. For example, a user may currently be in a peak-learning mode but a rise in one or more of the cognitive mental state metrics may have trends that suggest that the user's performance will degrade in the near future. In this situation, reducing the complexity of the education session at a point in time before the peak-learning mode ends may be a useful strategy.” ¶ [0050]: “Preventing a “spike” in anxiety/stress may be useful to maintain the overall learning efficiency over the entire learning session. The predictive engine 112 could, for example, have upper and lower threshold values selected with a margin below the absolute minimum and maximum values to provide a cushion for dealing with human response time. For example, suppose an “ideal” learning situation would be to maintain a cognitive load below 90% and an anxiety level below 70%. However, in practice, lower maximum thresholds might be chosen, such as maintaining a cognitive load below 80% and an anxiety level below 60% to reduce the possibility of spiking behavior. In any case, the predictive engine 112 may be implemented in different ways to have an algorithm that monitors the different cognitive mental state metrics and that proactively adapts the education session in response. This may be called “learning in the loop,” in the sense that the complexity of the educational content can be dynamically adapted, based on the biometric data, before the learning experience is substantially degraded. While a complexity level is one example of an adaptation, more generally other types of adaptation could also be performed. For example, a stress reduction technique could be inserted into the session, such as taking a break, playing a game, or doing a breathing exercise.” ¶ [0051]: “FIG. 2 illustrates an example of a general predictive engine 112 in an embodiment in which the predictive engine 112 is part of a module that generates training adaptation commands 111 based on an arbitrary number of different cognitive mental state metrics and FIG. 3 illustrates an example in which the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level. It is an implementation detail regarding the form of the training adaptation commands output by the predictive engine 112. For example, the training adaptation commands 111 could be a command in the form one or more numbers or codes indicating a desired mode change, such a complexity level (e.g., an integer 1, 2, or 3 for high, medium, or low complexity as one example) to indicate changing to a different complexity mode. More generally, the output could be a set of normalized numbers, which are then used by other entities in the system to select training modules that are executed. For example, the predictive engine 112 could be implemented to output a complexity command code to signal an increase or decrease in the complexity of the training. A rest break or relaxation command code or number could be output to single [sic, “signal”] the desirability of break in the training as another example. 
Other commands codes could also be generated to account for common training scenarios. Alternatively, the predictive engine could issue training adaptation commands in the form of direct decisions on particular training modules that are to be used in a training session.” ). Regarding claim 9 (depends on claim 8), NEL discloses: wherein the virtual content elements (e.g., ¶ [0047]: “the content for the AR/VR training session.” ¶ [0058]: “training variations”) are selected from a profile (e.g., ¶ [0047]: “database 120” and/or ¶ [0058]: “a matrix of possible training variations.”) (¶ [0047]: “a server 115 serves the content for the AR/VR training session. The server 115 may include computer processors, a memory, internal communication buses, and external communication interfaces. In some embodiments, the server 115 executes a neuroadaptive VR learning program in which variations of an educational training session (or set of sessions) are supported for a VR or AR educational training session. The variations may, for example, including varying the educational content type, content style, content complexity, pacing, or other factors of the AR/VR training. As one example a database 120 may be provided to support variations in the training given based on biometric data. For example, the database 120 could store two or more different levels of complexity for one or more training modules of a training session. More generally, the database 120 could store a matrix of different variations in training modules. Thus, in one embodiment, during a learning session, the VR learning program could access different pieces of content as a form of adaptation. In some embodiments, the database also stores, or aggregates, information on an individual user's previous use of the system and for uses by other participants. This historical data may, for example, be used to generate training data, as described below in more detail.” ¶ [0052]: “In one embodiment, the predictive model is a trained machine learning model, although more generally in may comprises table, matrices, or other features selected to aid in making predictions about how to adapt an educational training session.” ¶ [0058]: “As one example, a neuroadaptive VR learning program 117 supports a matrix of possible training variations. As one possibility, the complexity or pacing of the training could have two or more variations. However, more elaborate variations could be included to support a matrix of possibilities. For example, if a user becomes stressed in an empathy portion of a training session, a less challenging form of the training could be performed. Alternatively, the sequence of training could be altered to return to the remaining empathy training in a later portion of the training.”) based on a predicted emotional effect the virtual content element will have on the emotional state (¶ [0053]: “As previously discussed, the predictive engine 112 may be implemented in different ways. And as described below in more detail, data from a training session with the user (e.g., from a previous training session) may be used alone or in combination with data from other users as an aid to determine rules for making predictions. 
For example, in an enterprise training environment, a data set of a large number of participants may be used to identify relationships between the monitored cognitive mental state metrics and learning efficacy.” ¶ [0054]: “Additionally, an individual user may be given an initial training test session to obtain test data regarding their individual responses to different levels of test situations. For example, the training data (from a group of previous users) and the current user may be highly specific in terms of workforce demographics (e.g., blue collar technicians), training objective (e.g., training of empathy in a job interfacing with the public under different situations). Having test data and other data for the user and for a set of previous users provides a data set that can be used in different was to aid in making predictions.” ¶ [0055]: “In one embodiment, other non-biometric data may also optionally be utilized to aid in forming predictions, such as an aid in determining training data. For example, some enterprises perform psychological assessment studies of employees using common tests such as the Myers Briggs test. For example, introverts may suffer more from anxiety in a training for empathy than extroverts. Conversely, some extroverts may suffer more anxiety doing multitasking in a technical environment. Regardless of whether other forms of data are used, the training/test data generated for previous users and the current user can be selected to provide data from which a prediction engine 112 adapts a training session.”). Regarding claim 11 (depends on claim 8), NEL discloses: wherein a plurality of expected baselines (e.g., ¶ [0049]: “a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds.” ¶ [0051]: “the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level.”) are associated with the mixed-reality experience (e.g., ¶ [0049]: “the training session”; ¶ [0049]: “the complexity of the education session” ¶ [0050]: “the complexity of the educational content”) (¶ [0049]: “As an illustrative example, suppose a cognitive load metric is rising and that an anxiety level metric is also rising. To prevent the user from become over-stressed, the predictive engine 112 proactively adapts the training session to maintain the cognitive mental state metrics within a desired range associated with healthy learning. There are many different ways this can be done. As one example a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds. However, more generally, dynamic aspects, such a rate of change, can be considered. The overall pattern of changes to a set of cognitive mental state metrics can be considered. For example, a user may currently be in a peak-learning mode but a rise in one or more of the cognitive mental state metrics may have trends that suggest that the user's performance will degrade in the near future. In this situation, reducing the complexity of the education session at a point in time before the peak-learning mode ends may be a useful strategy.” ¶ [0050]: “Preventing a “spike” in anxiety/stress may be useful to maintain the overall learning efficiency over the entire learning session. 
The predictive engine 112 could, for example, have upper and lower threshold values selected with a margin below the absolute minimum and maximum values to provide a cushion for dealing with human response time. For example, suppose an "ideal" learning situation would be to maintain a cognitive load below 90% and an anxiety level below 70%. However, in practice, lower maximum thresholds might be chosen, such as maintaining a cognitive load below 80% and an anxiety level below 60% to reduce the possibility of spiking behavior. In any case, the predictive engine 112 may be implemented in different ways to have an algorithm that monitors the different cognitive mental state metrics and that proactively adapts the education session in response. This may be called "learning in the loop," in the sense that the complexity of the educational content can be dynamically adapted, based on the biometric data, before the learning experience is substantially degraded. While a complexity level is one example of an adaptation, more generally other types of adaptation could also be performed. For example, a stress reduction technique could be inserted into the session, such as taking a break, playing a game, or doing a breathing exercise." ¶ [0051]: "FIG. 2 illustrates an example of a general predictive engine 112 in an embodiment in which the predictive engine 112 is part of a module that generates training adaptation commands 111 based on an arbitrary number of different cognitive mental state metrics and FIG. 3 illustrates an example in which the cognitive mental state metrics include metrics generated by classifiers for a cognitive mental load, a motivation level, an anxiety level, and a focus level.").

Regarding claims 1-2 and 4, claims 1-2 and 4 are directed, respectively, to the method implemented by the system of claims 8-9 and 11, and, as such, claims 1-2 and 4 are rejected for the same reasons applied above in the rejection of claims 8-9 and 11, respectively.

Regarding claim 7 (depends on claim 6), NEL discloses: responsive to determining that the mental state (¶ [0041]: "determine cognitive state metrics for the user related to learning efficacy." ¶ [0048]: "analyze the cognitive mental state metrics and determine when the training should be adapted.") falls within a threshold value of the expected baseline (¶ [0048]: "maintain the education session within an educational training zone having metrics compatible with effective learning." ¶ [0048]: "maintain the cognitive mental state metrics within a desired range associated with healthy learning. There are many different ways this can be done. As one example a set of upper and lower thresholds can be determined for individual cognitive mental state thresholds." ¶ [0060]: "within a range conducive to learning."), initializing the mixed-reality experience (e.g., ¶ [0060]: "In block 420, an adaptive educational simulation is generated that is response to biometric data feedback." ¶ [0062]: "In block 510, the virtual reality education phase is initiated. (It would be understood that in an alternate embodiment, the education phase may use augmented reality).") (¶ [0059]: "FIG. 4 is a flowchart of a general method in accordance with an embodiment. In block 405, the user signs in. In block 410 the biometric data of the user is monitored. In block 415, a training phase is initiated. One aspect of the training phase is that baseline data can be obtained to understand the user's response.
This may include, for example, performing one or more surveys or tests to assess a user's biometric responses. For example, the tests may provide general psychometric data regarding how the user responds to different situations. Additionally, the tests may be selected to be similar to aspects of the training. Individual testing is useful because of the variations in individual behavior. For example, one individual may have high levels of background anxiety and stress in their life. Another individual may have low levels of background anxiety and stress in their life. There are a variety of other reasons why individuals may respond differently to a learning environment, including demographic factors like age, level of education, previous experience with VR training, etc." ¶ [0060]: "In block 420, an adaptive educational simulation is generated that is response to biometric data feedback. Thus, the cognitive mental state metrics are maintained within a range conducive to learning." NOTE: In other words, not only is the adaptive educational simulation continuously adapted during training based on the determined mental state of the user so as to maintain the expected baseline of a mental state that is conducive to learning, a version of the educational simulation is generated initially in response to the initial mental state of the user in order to obtain an initial educational simulation corresponding to the expected baseline of a mental state for effective learning. ¶ [0061]: "FIG. 5 is a flow chart of a general method of using thresholds in the cognitive state metric(s) to make decisions to adapt the training. In block 505, a training phase is initiated to calibrate thresholds of cognitive mental state metrics. The training phase may, for example, identify a threshold anxiety level predictive of a desirability to either reduce or increase the complexity level or the pacing of the training. For example, if the anxiety level is below a first threshold, it may be predictive that the complexity of the training can be increased, Conversely, if the anxiety level is above a second threshold, it may be predictive that the complexity of the training should be reduced to maintain an effective learning experience for the user." ¶ [0062]: "In block 510, the virtual reality education phase is initiated. (It would be understood that in an alternate embodiment, the education phase may use augmented reality). In block 515, the biometric data of the user is monitored during the education phase to generate feedback in the form of the cognitive state metrics. This monitoring may be performed on a periodic basis at a rate that is fast compared with human behavioral/cognitive responses to educational training, e.g., once per second." ¶ [0063]: "The biometric data is used to provide feedback during the educational session to adapt the training. As previously discussed, this may include adaptation such as selecting less complex training session modules, implementing relaxation breaks, changing in pacing or sequence, etc. In decision block 520, the process ends when the training is completed. Otherwise, decisions are made whether the cognitive mental state thresholds are outside of upper or lower thresholds. If they are outside of the thresholds, a decision is made in block 527 to adapt the education phase.
Otherwise in block 530, the education phase is continued without adapting it." NOTE: Again, based on an initially determined anxiety level of the user before the VR training experience is initiated, the initiated VR education phase will have an expected baseline (i.e., a mental state corresponding to effective learning) corresponding to the initially determined anxiety level of the user before the VR training experience is initiated.).

Regarding claims 15-16 and 18, claims 15-16 and 18 are directed, respectively, to a computer program product comprising one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor to cause the processor to perform the method of claims 1-2 and 4, and, as such, claims 15-16 and 18 are rejected for the same reasons applied above in the rejection of claims 1-2 and 4, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art;
2. Ascertaining the differences between the prior art and the claims at issue;
3. Resolving the level of ordinary skill in the pertinent art; and
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3, 6, 10, 13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over NEL et al. (US 2020/0082735) in view of CHEAZ et al. (US 2018/0321743, hereinafter "CHEAZ").

Regarding claim 10 (depends on claim 9), whereas NEL may not be entirely explicit as to, CHEAZ teaches: wherein the predicted emotional effect (¶ [0016]: "engagement of the user in the VR experience") is based on one or more personal interests of the user (¶ [0016]: "enhance the VR experience by de-emphasizing aspects of the experience that are of low interest to the user,") (¶ [0013]: "Furthermore, real-time feedback can be used in VR experiences to de-emphasize aspects of the experience that the user finds to be less engaging, and can optimize the VR experience to keep the user maximally engaged." ¶ [0016]: "From the pupil measurements, a software program may extrapolate factors, such as the mental strain level of the user, elements of the user's decision making process (including when the user has made a decision or given up on solving the problem), engagement of the user in the VR experience, and other information about the user.
The extrapolated factors may be used to enhance the VR experience by de-emphasizing aspects of the experience that are of low interest to the user, customize the environment and difficulty in response to the user's stress levels, and modify elements, such as dialogue and gameplay, in response to the user's decisions before the user has acted upon those decisions.” ¶ [0027]: “The modification of the virtual environment may also be tied to the user's engagement in the VR experience; for instance, elements of the VR experience that produce low engagement for the user may be de-emphasized in favor of elements that produce high engagement.”). Thus, in order to obtain a more versatile system having the cumulative features and/or functionalities taught by NEL and CHEAZ, it would have been obvious to one of ordinary skill in the art to have modified the system taught by NEL so as to also incorporate basing the predicted emotional effect on one or more personal interests of the user, as taught by CHEAZ. Regarding claim 13 (depends on claim 8), whereas NEL may not be entirely explicit as to, CHEAZ teaches: wherein the determining occurs prior to instantiating the mixed-reality experience (¶ [0024]: “Next, at 208, the ocular biometrics program 116 conducts training to calibrate the pupillometry measurements for a particular user profile in user profile repository 110. This training step may entail showing the user a series of images where the light levels are pre-set, and measuring the effect these light levels have on the user's pupils, in order to establish a baseline value for pupil width and latency against which future measurements can be compared, and to better distinguish the effects of light from the effects of VR-induced brain activity. The images may be shown as part of a discrete training activity that is performed either automatically or at the option of the user, or may be integrated into the boot sequence of client computing device 102 or the opening sequence of VR software program 108 or ocular biometrics program 116. The training step may also take pupil scans and compare the pupil scans against stored pupil scans in the user's profile or in the profiles of other users in order to better distinguish the effects of VR-induced brain activity from external factors, such as age, sleepiness, and anxiety originating from outside the VR environment. This training may be conducted only once, after the creation of the user profile; alternatively, the training may also be conducted whenever a user begins using the headset, or the training may be conducted at regular or continuous intervals throughout the VR experience.”). Thus, in order to obtain a more versatile system having the cumulative features and/or functionalities taught by NEL and CHEAZ, it would have been obvious to one of ordinary skill in the art to have modified the system taught by NEL so as to also incorporate determining that one of the emotions comprising the mental state deviates from a corresponding emotion comprising the expected baseline by a magnitude exceeding the threshold amount prior to instantiating the mixed-reality experience, as taught by CHEAZ. Regarding claim 3 (depends on claim 2), claim 3 is directed to the method implemented by the system of claim 10, and, as such, claim 3 is rejected for the same reasons applied above in the rejection of claim 10. 
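Stepping back from the citation mapping, the NEL passages quoted above (¶¶ [0048]-[0063]) and the claim language at issue describe the same control loop: sample biometrics roughly once per second, score per-emotion metrics, compare each against a baseline band, and inject modifier content when a deviation exceeds a threshold. The sketch below is a minimal illustration of that loop; every identifier, threshold value, and the content table are hypothetical, since neither NEL nor the claims specify an implementation.

```python
# Illustrative sketch of the biometric feedback loop described in NEL
# (paras [0048]-[0063]) and recited in claim 8. All names, numbers, and
# the content table are hypothetical; this is not NEL's or IBM's code.

import time

BASELINE = {"anxiety": 0.40, "cognitive_load": 0.55, "focus": 0.70}
THRESHOLD = 0.20  # permitted deviation per emotion metric (hypothetical)

# Hypothetical mapping from a deviating emotion to modifier content,
# echoing NEL's examples (breaks, breathing exercises, simpler modules).
MODIFIER_CONTENT = {
    "anxiety": ["breathing_exercise", "relaxation_break"],
    "cognitive_load": ["lower_complexity_module"],
    "focus": ["re_engagement_prompt"],
}

def classify(raw_biometrics):
    """Stand-in for NEL's per-metric classifiers (EEG, heart rate, etc.)."""
    return {k: raw_biometrics.get(k, default) for k, default in BASELINE.items()}

def run_session(read_sensors, apply_content, training_done):
    """Monitor the user, compare against baseline, adapt until training ends."""
    while not training_done():
        metrics = classify(read_sensors())
        for emotion, value in metrics.items():
            deviation = abs(value - BASELINE[emotion])
            if deviation > THRESHOLD:
                # Larger deviations pull in more modifier content.
                count = 2 if deviation > 2 * THRESHOLD else 1
                apply_content(emotion, MODIFIER_CONTENT[emotion][:count])
        time.sleep(1.0)  # NEL monitors "once per second" (para [0062])
```

A caller would bind read_sensors, apply_content, and training_done to the headset's sensor stack and the content server; the per-emotion loop structure is what the claims recite and what the rejection reads onto NEL's per-metric thresholds.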
Regarding claim 6 (depends on claim 1), claim 6 is directed to the method implemented by the system of claim 13, and, as such, claim 6 is rejected for the same reasons applied above in the rejection of claim 13.

Regarding claim 17 (depends on claim 16), claim 17 is directed to a computer program product comprising one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor to cause the processor to perform the method of claim 3, and, as such, claim 17 is rejected for the same reasons applied above in the rejection of claim 3.

Regarding claim 20 (depends on claim 15), claim 20 is directed to a computer program product comprising one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor to cause the processor to perform the method of claim 6, and, as such, claim 20 is rejected for the same reasons applied above in the rejection of claim 6.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over NEL et al. (US 2020/0082735) in view of CHEAZ et al. (US 2018/0321743), further in view of OSOTIO et al. (US 2019/0096105, hereinafter “OSOTIO”), which incorporates OSOTIO et al. (US 2018/0101776, hereinafter “OSOTIO ‘776”) in its entirety by reference.

Regarding claim 14 (depends on claim 13), whereas NEL and CHEAZ may not be entirely explicit on these limitations, OSOTIO teaches: responsive to determining that the mental state does not match the expected baseline (e.g., ¶ [0060]: “identify which bots may have relevant augmented content that would be of interest to the user.” ¶ [0060]: “identify” … “the emotional state of the user,” ¶ [0060]: “Based on this information,” … “decides whether to present content,” ¶ [0061]: “In order to decide whether augmented content should be presented, the bot service 522 can evaluate the emotional intelligence graph 520 to ascertain whether the user would be receptive to augmented content. This is performed by ascertaining the emotional state of the user and, using the emotional insights for the user, identifying whether that emotional state translates into a receptiveness or non-receptiveness to have the augmented content placed in the AR/VR environment.” NOTE: In other words, the “expected baseline” for content of interest to the user is that the user is “receptive” to having augmented “content of interest” presented, or placed, in the AR/VR environment. Thus, by determining that the user is “non-receptive” to being presented “relevant augmented content that would be of interest to the user”, the system is determining that the emotional state of the user (“non-receptive”) does not match the expected baseline (“receptive”) to AR content that is of interest.), disabling the user from accessing the mixed-reality experience (e.g., ¶ [0061]: “This is performed by ascertaining the emotional state of the user and, using the emotional insights for the user, identifying whether that emotional state translates into a receptiveness or non-receptiveness to have the augmented content placed in the AR/VR environment. As one representative example, when the user is experiencing negative emotions such as anger or frustration, some embodiments of the system will not present augmented content”) (¶ [0055]: “The emotional state of the user can be derived in a variety of ways. For example, U.S. application Ser. No.
15/291,305 entitled “Extracting an Emotional State from Device Data” (incorporated by reference herein in its entirety) describes a suitable platform (i.e., emotional intelligence platform 518) for extracting the emotional state of the user.” ¶ [0059]: “As described in application Ser. No. 15/291,305 much and/or all of the sensory data produced by sensors 514 is also used by the emotional intelligence platform 518 to derive the user's emotional state and/or emotional insights. If this is the case, it may be that emotional intelligence platform 518 can also provide the sensory data to the sensory data graph 516 and/or provide the information in the sensory data graph 516 as part of the emotional intelligence data graph 520 without the need for a separate data graph. In either case, the system has access to the sensory data being experienced by the user and can make decisions based on the sensory data.” ¶ [0060]: “A bot service 522 can monitor user interactions either directly or through analysis of the information in one or more data graphs (e.g., 504, 506, 508, 510, 516, 520) and identify which bots may have relevant augmented content that would be of interest to the user. The monitoring of the data graphs allows the bot service 522 to identify what actions the user is engaged in, the emotional state of the user, what the user is experiencing, the user's past history and so forth as described herein. Based on this information, the bot service 522 decides whether to present content, which bots have content of interest, which content should be presented, and if content should be presented, when and how the content should be presented.” ¶ [0061]: “In order to decide whether augmented content should be presented, the bot service 522 can evaluate the emotional intelligence graph 520 to ascertain whether the user would be receptive to augmented content. This is performed by ascertaining the emotional state of the user and, using the emotional insights for the user, identifying whether that emotional state translates into a receptiveness or non-receptiveness to have the augmented content placed in the AR/VR environment. As one representative example, when the user is experiencing negative emotions such as anger or frustration, some embodiments of the system will not present augmented content unless augmented content can be identified that will help alleviate the source of the negative emotion (e.g., inability to perform a task, access desired information, and so forth). As another representative example, in some embodiments, when the user's emotional state indicates the user could use help, augmented content that is likely to help the user will be presented. As another example, in some embodiments, when the user's emotional state is anything other than negative, augmented content will be presented. As a further example, in some embodiments the emotional state is not used in determining whether to send augmented content to the user.” ¶ [0078]: “If the user emotional state and/or insights indicates the user would not be receptive to augmented content, in some embodiments, the flow diagram ends and waits until the user's emotional state and/or insights indicate that the user would be receptive to augmented content. If the user would be receptive (or not non-receptive) to augmented content, the method proceeds to operation 608 and 610 where relevant bots are identified and appropriate bots are selected. One or more bots can be selected by operations 608 and 610. FIG. 
8 explains operations 608 and 610 in greater detail.” ¶ [0015] of OSOTIO ‘776: “Devices collect a wide variety of data with the permission of the user, many of which contain cues as to the emotional state of the user. For example, biometrics are often related to emotional state and include such data as heart rate, skin temperature, respiration, and so forth, which can be collected by wearables such as a watch, fitness tracker, band or other such device.” ¶ [0017] of OSOTIO ‘776: “Once an emotional state has been extracted from data, the emotional state is used to select at least one action to be performed.”).

Thus, in order to obtain a system having the cumulative features and/or functionalities taught by NEL, CHEAZ and OSOTIO, it would have been obvious to one of ordinary skill in the art to have modified the AR/VR system for modifying AR/VR content via biometric emotional/mental state feedback taught by the combination of NEL and CHEAZ so as to incorporate disabling the user from accessing the mixed-reality experience in response to determining that the mental state does not match the expected baseline, as taught by OSOTIO.

Response to Arguments

Applicant's arguments filed October 23, 2025 have been fully considered but are not persuasive; with respect to claims 1-4, 6-11, 13-18 and 20, the arguments are moot in view of the new ground(s) of rejection.

Conclusion

At present, it is not apparent to the examiner which part of the application could serve as a basis for new and allowable claims. However, should the applicant nevertheless regard some particular matter as patentable, the examiner encourages applicant to appropriately amend the claims to include such matter and to indicate in the REMARKS the difference(s) between the prior art and the claimed invention, as well as the significance thereof. Furthermore, should applicant decide to amend the claims, the examiner respectfully requests that applicant indicate in the REMARKS the page(s), line(s) or claim(s) of the originally filed application from which any amendments are derived. See MPEP § 2163(II)(A) (There is a strong presumption that an adequate written description of the claimed invention is present in the specification as filed, Wertheim, 541 F.2d at 262, 191 USPQ at 96; however, with respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims.).

A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action. Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT PEREN, who can be reached by telephone at (571) 270-7781 or via email at vincent.peren@uspto.gov. The examiner can normally be reached Monday-Friday from 10:00 A.M. to 6:00 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KING POON, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/VINCENT PEREN/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617
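The claim-14 gating step mapped to OSOTIO above can be pictured the same way as the claim-13 sketch. Again, this is only a hedged illustration: Receptiveness, EXPECTED_BASELINE, and access_enabled are invented names, and OSOTIO's actual bot-service logic is far richer than this two-state model.

    from enum import Enum

    class Receptiveness(Enum):
        # Two-state framing borrowed from OSOTIO's "receptive" /
        # "non-receptive" language; a real system would derive this from
        # biometric sensor data and an emotional-insights graph.
        RECEPTIVE = "receptive"
        NON_RECEPTIVE = "non-receptive"

    EXPECTED_BASELINE = Receptiveness.RECEPTIVE  # assumed baseline state

    def access_enabled(mental_state: Receptiveness) -> bool:
        """Enable access only while the mental state matches the expected
        baseline; a mismatch disables the mixed-reality experience."""
        return mental_state == EXPECTED_BASELINE

    assert access_enabled(Receptiveness.RECEPTIVE)          # access allowed
    assert not access_enabled(Receptiveness.NON_RECEPTIVE)  # access disabled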

Prosecution Timeline

Aug 23, 2022
Application Filed
Feb 29, 2024
Non-Final Rejection — §102, §103, §112
Jun 04, 2024
Examiner Interview Summary
Jun 04, 2024
Applicant Interview (Telephonic)
Jun 06, 2024
Response Filed
Aug 30, 2024
Final Rejection — §102, §103, §112
Nov 05, 2024
Response after Non-Final Action
Nov 13, 2024
Response after Non-Final Action
Nov 27, 2024
Request for Continued Examination
Dec 05, 2024
Response after Non-Final Action
Feb 07, 2025
Non-Final Rejection — §102, §103, §112
May 09, 2025
Response Filed
Aug 23, 2025
Final Rejection — §102, §103, §112
Oct 13, 2025
Interview Requested
Oct 21, 2025
Applicant Interview (Telephonic)
Oct 21, 2025
Examiner Interview Summary
Oct 23, 2025
Response after Non-Final Action
Nov 24, 2025
Request for Continued Examination
Dec 01, 2025
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §102, §103, §112
Mar 23, 2026
Interview Requested
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592017
Rendering XR Avatars Based on Acoustical Features
2y 5m to grant Granted Mar 31, 2026
Patent 12586282
AVATAR COMMUNICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12555314
THREE-DIMENSIONAL SHADING METHOD, APPARATUS, AND COMPUTING DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12555296
ADAPTING SIMULATED CHARACTER INTERACTIONS TO DIFFERENT MORPHOLOGIES AND INTERACTION SCENARIOS
2y 5m to grant Granted Feb 17, 2026
Patent 12541913
METHOD AND APPARATUS FOR REBUILDING RELIGHTABLE IMPLICIT HUMAN BODY MODEL
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.2%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 382 resolved cases by this examiner. Grant probability derived from career allow rate.
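
The with-interview figure above composes from the two numbers beside it: the 70% career allow rate plus the +20.2 percentage-point interview lift rounds to the displayed 90%. The short sketch below makes that arithmetic explicit; the additive, capped composition is an assumption for readability, not the tool's actual model.

    def with_interview(base_pct: float, lift_pct: float) -> float:
        # Assumed additive composition of the displayed figures, capped at 100.
        return min(base_pct + lift_pct, 100.0)

    print(with_interview(70.0, 20.2))  # 90.2, displayed as ~90%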
