Prosecution Insights
Last updated: April 19, 2026
Application No. 19/058,412

SYSTEMS AND METHODS FOR COLLABORATIVE TRAINING IN A GRAPHICALLY SIMULATED VIRTUAL REALITY (VR) ENVIRONMENT

Non-Final OA: §101, §102, §103, §112, §DP

Filed: Feb 20, 2025
Examiner: SITTNER, MATTHEW T
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Truist Bank
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 58% — grants 58% of resolved cases (512 granted / 890 resolved); +5.5% vs TC avg
Interview Lift: +56.2% — strong lift in allowance for resolved cases with an interview versus without
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 922 total applications across all art units
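
For readers who want to trace the arithmetic behind these figures, here is a minimal sketch in Python. The allow-rate computation follows directly from the counts above; the definition of interview lift as a percentage-point difference, and the two interview-specific rates, are assumptions for illustration only (the tool's actual formulas are not published here).

```python
# Hypothetical reconstruction of the dashboard arithmetic. The granted and
# resolved counts come from the panel above; the interview-lift definition
# (a percentage-point difference) and both rates below are assumptions.
granted, resolved = 512, 890

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # 57.5%, displayed as 58%

rate_with_interview = 0.99       # assumed; matches "With Interview: 99%"
rate_without_interview = 0.428   # assumed placeholder
lift = rate_with_interview - rate_without_interview
print(f"Interview lift: {lift:+.1%}")                  # +56.2%
```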

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 890 resolved cases.
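
The deltas imply the Tech Center baselines the tool is estimating against. A small sketch, assuming each "vs TC avg" figure is simply the examiner's rate minus the Tech Center average in percentage points:

```python
# Quick consistency check under the assumed delta definition; the implied
# Tech Center baselines all come out to roughly 40%.
examiner_rate = {"§101": 33.2, "§103": 33.0, "§102": 13.1, "§112": 16.0}
delta_vs_tc   = {"§101": -6.8, "§103": -7.0, "§102": -26.9, "§112": -24.0}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # e.g., 33.2 - (-6.8) = 40.0
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}%")
```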

Office Action

Rejections: §101, §102, §103, §112, §DP (double patenting)
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on XXXXXXXXXXXXXX has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims X are canceled. Claims X are new. Claims 1-20 are pending and have been examined. This action is in reply to the papers filed on 02/20/2025 (effective filing date 10/24/2022).

Information Disclosure Statement

The information disclosure statement submitted on 02/20/2025 has been considered by the Examiner and made of record in the application file.

Amendment

The present Office Action is based upon the original patent application filed on xxx as modified by the amendment filed on xxx.

Terminal Disclaimer

The terminal disclaimer filed on xxx, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of US Pat. No. xxxx, has been reviewed and has been placed in the file. Examiner acknowledges Applicant's filed Terminal Disclaimer to prior art patent McCauley et al., US Pat. No. 5,930,775. A terminal disclaimer may be filed to overcome or obviate a nonstatutory double patenting rejection (37 CFR 1.321; MPEP 706.02; 1490).

Double Patenting - Withdrawn

The double patenting rejection is withdrawn per the filed terminal disclaimer noted above.

Reasons For Allowance - Prior-Art Rejection Withdrawn

Claims xxx are allowed. The closest prior art (see PTO-892, Notice of References Cited) does not teach the claimed: The invention teaches… and the prior art teaches…, however, the prior art does not teach… The closest prior art (xxx) teaches the features as disclosed in the Non-final Rejection (xxxx); however, these cited references do not teach at least the following combination of features and/or elements:

determining, at a second time after associating the information corresponding to the first loyalty card with the logged location, that a second user computing device is located within a specified distance of the logged location using a second positioning system of the second user computing device;
in response to determining that the second user computing device is located within the specified distance of the logged location of the first user computing device at the first time of detecting: retrieving information corresponding to a second loyalty card, the second loyalty card being associated with the merchant and the second user computing device; and
displaying, by the second user computing device, data describing the second loyalty card.

Claim Rejections - 35 USC § 101 - Withdrawn

Per Applicant's amendments and arguments, and considering new guidance in the MPEP, the rejections are withdrawn. Specifically, in Applicant's Remarks (dated 03/14/2017, pgs. 8-11), Applicant traverses the 35 USC § 101 rejections, arguing that the amended claims recite new limitations that are not abstract, amount to significantly more, are directed to a practical application, etc. For example, Applicant argues….
In support of these arguments, Applicant cites the following court cases (i.e., Alice Corp. v. CLS Bank Int'l, SRI Int'l, Inc. v. Cisco Systems, Inc., Ultramercial, Inc. v. Hulu, LLC, Berkheimer, Core Wireless, McRO, Enfish, Bascom, DDR, etc.).

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter because the claimed invention is directed to an abstract idea without significantly more. These claims recite a method, system/apparatus, and computer readable medium for collaborative training in a graphically simulated virtual reality (VR) environment.

Claim 1 recites:

[A] method, comprising:
causing, by a processor, display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices, each participant device associated with a respective participant of a plurality of participants;
causing, by the processor, display of a first avatar on a first participant device of the plurality of participant devices, wherein the first avatar is associated with a first participant of the plurality of participants;
causing, by the processor, display of a second avatar on a second participant device of the plurality of participant devices, wherein the second avatar is associated with a second participant of the plurality of participants;
receiving, by the processor from the first and second participant devices, respective indications of input selecting a first training session of a plurality of training sessions;
causing display, in the 3D VR environment on the first and second participant devices, a training room for the first training session;
deploying, by the processor in the training room, an artificial intelligence (AI)-based virtual agent that conducts conversational communication with training room participants;
determining, by the processor, completion of the first training session via virtual interaction of the virtual agent, the first avatar, and the second avatar in the training room in the 3D VR environment, wherein the virtual agent, the first avatar, and the second avatar are displayed in the training room in the 3D VR environment; and
sending, by the processor to the first and second participant devices, a respective digital certificate for completion of the first training session.

The claims are being rejected according to the 2019 Revised Patent Subject Matter Eligibility Guidance (Federal Register, Vol. 84, No. 5, p. 50-57 (Jan. 7, 2019)).

Step 1: Does the Claim Fall within a Statutory Category?

Yes. Claims 1-18 recite a method and, therefore, are directed to the statutory class of a process. Claim 20 recites a system/apparatus and, therefore, is directed to the statutory class of a machine. Claim 19 recites a non-transitory computer readable medium/computer product and, therefore, is directed to the statutory class of a manufacture.

Step 2A, Prong One: Is a Judicial Exception Recited?

Yes. The following table identifies the specific limitations that recite an abstract idea. The column that identifies the additional elements will be relevant to the analysis in Step 2A, Prong Two, and Step 2B.
Claim 1: Identification of Abstract Idea and Additional Elements, using Broadest Reasonable Interpretation

Claim Limitation: 1. A method, comprising:
Additional Element: No additional elements are positively claimed.

For each of the limitations below, the Office applies the same Prong One analysis: but for the processor and/or participant devices, the limitation is directed to processing and/or communicating known information (e.g., displaying and transmitting known information) to facilitate collaborative training in a graphically simulated virtual reality (VR) environment, which may be categorized as any of the following: certain methods of organizing human activity – commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The identified additional element for each limitation is reproduced after it.

Claim Limitation: causing, by a processor, display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices, each participant device associated with a respective participant of a plurality of participants;
Additional Element: causing, by a processor, display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices…

Claim Limitation: causing, by the processor, display of a first avatar on a first participant device of the plurality of participant devices, wherein the first avatar is associated with a first participant of the plurality of participants;
Additional Element: causing, by the processor, display of a first avatar on a first participant device of the plurality of participant devices…

Claim Limitation: causing, by the processor, display of a second avatar on a second participant device of the plurality of participant devices, wherein the second avatar is associated with a second participant of the plurality of participants;
Additional Element: causing, by the processor, display of a second avatar on a second participant device of the plurality of participant devices…

Claim Limitation: receiving, by the processor from the first and second participant devices, respective indications of input selecting a first training session of a plurality of training sessions;
Additional Element: receiving, by the processor from the first and second participant devices, respective indications of input…

Claim Limitation: causing display, in the 3D VR environment on the first and second participant devices, a training room for the first training session;
Additional Element: causing display, in the 3D VR environment on the first and second participant devices, a training room…

Claim Limitation: deploying, by the processor in the training room, an artificial intelligence (AI)-based virtual agent that conducts conversational communication with training room participants;
Additional Element: deploying, by the processor in the training room, an artificial intelligence (AI)-based virtual agent…

Claim Limitation: determining, by the processor, completion of the first training session via virtual interaction of the virtual agent, the first avatar, and the second avatar in the training room in the 3D VR environment, wherein the virtual agent, the first avatar, and the second avatar are displayed in the training room in the 3D VR environment; and
Additional Element: determining, by the processor, completion of the first training session…

Claim Limitation: sending, by the processor to the first and second participant devices, a respective digital certificate for completion of the first training session.
Additional Element: sending, by the processor to the first and second participant devices, a respective digital certificate…

As shown above, under Step 2A, Prong One, the claims recite a judicial exception (an abstract idea). The claims are directed to the abstract idea of implementing collaborative training in a graphically simulated virtual reality (VR) environment, which, pursuant to MPEP 2106.04, is aptly categorized as a method of organizing human activity. Therefore, under Step 2A, Prong One, the claims recite a judicial exception.

Next, the aforementioned claims recite additional functional elements that are associated with the judicial exception, including a device for displaying a 3D VR environment. Examiner understands these limitations to be insignificant extra-solution activity. See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Cf. Diamond v. Diehr, 450 U.S. 175, 191-192 (1981) ("[I]nsignificant post-solution activity will not transform an unpatentable principle into a patentable process.").

The aforementioned claims also recite additional technical elements, including a "processor" and "device" to execute the method and apparatus, and a "non-transitory computer-readable storage medium" for storing executable instructions. These limitations are recited at a high level of generality and appear to be nothing more than generic computer components. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983. See also 134 S. Ct. at 2359, 110 USPQ2d at 1984.

Step 2A, Prong Two: Is the Abstract Idea Integrated into a Practical Application?

No. The judicial exception is not integrated into a practical application. The additional elements listed above that relate to computing components are recited at a high level of generality (i.e., as generic components performing generic computer functions such as communicating, receiving, processing, analyzing, and outputting/displaying data) such that they amount to no more than mere instructions to apply the exception using generic computing components. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Additionally, the claims do not purport to improve the functioning of the computer itself. There is no technological problem that the claimed invention solves. Rather, the computer system is invoked merely as a tool. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, these claims are directed to an abstract idea.
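
As a plain-language aid for readers outside patent practice, the sketch below illustrates the kind of generic, server-side session flow the claim elements recite. It is entirely hypothetical: every class, method, and identifier is invented here for illustration, and none of it is drawn from Applicant's specification or any actual implementation.

```python
from dataclasses import dataclass

# Schematic sketch of a server coordinating the recited steps of Claim 1.
# All names are invented; this is not Applicant's system.

@dataclass
class Participant:
    device_id: str
    avatar_name: str

class CollaborativeTrainingServer:
    def __init__(self, participants: list[Participant]) -> None:
        self.participants = participants
        self.selections: dict[str, str] = {}

    def display_environment(self) -> None:
        # "causing ... display of a three-dimensional (3D) VR environment"
        for p in self.participants:
            print(f"[{p.device_id}] render 3D VR environment")

    def display_avatars(self) -> None:
        # "causing ... display of a first avatar ... [and] a second avatar"
        for p in self.participants:
            print(f"[{p.device_id}] render avatar {p.avatar_name}")

    def receive_selection(self, device_id: str, session_id: str) -> None:
        # "receiving ... respective indications of input selecting a first training session"
        self.selections[device_id] = session_id

    def run_training_room(self, session_id: str) -> bool:
        # "causing display ... [of] a training room" and "deploying ... an
        # AI-based virtual agent"; a real agent would converse with users here.
        print(f"render training room for {session_id}; virtual agent joins")
        return True  # "determining ... completion of the first training session"

    def send_certificates(self, session_id: str) -> None:
        # "sending ... a respective digital certificate for completion"
        for p in self.participants:
            print(f"[{p.device_id}] issue certificate for {session_id}")

server = CollaborativeTrainingServer(
    [Participant("device-1", "first-avatar"), Participant("device-2", "second-avatar")]
)
server.display_environment()
server.display_avatars()
server.receive_selection("device-1", "session-001")
server.receive_selection("device-2", "session-001")
if server.run_training_room("session-001"):
    server.send_certificates("session-001")
```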
Furthermore, looking at the elements individually and in combination, under Step 2A, Prong Two, the claims as a whole do not integrate the judicial exception into a practical application because they fail to: improve the functioning of a computer or a technical field; apply the judicial exception in the treatment or prophylaxis of a disease; apply the judicial exception with a particular machine; effect a transformation or reduction of a particular article to a different state or thing; or apply the judicial exception beyond generally linking the use of the judicial exception to a particular technological environment. Rather, the claims merely use a computer as a tool to perform the abstract idea(s), and/or add insignificant extra-solution activity to the judicial exception, and/or generally link the use of the judicial exception to a particular technological environment.

Step 2B: Does the Claim Provide an Inventive Concept?

Next, under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. Simply put, as noted above, there is no indication that the combination of elements improves the functioning of a computer (or any other technology), and their collective functions merely provide conventional computer implementation. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements relating to computing components amount to no more than applying the exception using generic computing components. Mere instructions to apply an exception using a generic computing component cannot provide an inventive concept. Furthermore, the broadest reasonable interpretation of the claimed computer components (i.e., additional elements) includes any generic computing components that are capable of being programmed to communicate, receive, send, process, analyze, output, or display data. Furthermore, Applicant's Specification (PGPub. 2025/0191097 [0034]) refers to a general computer system but does not include any technically-specific computer algorithm or code.

Additionally, pursuant to the requirement under Berkheimer, the following citations are provided to demonstrate that the additional elements, identified as extra-solution activity, amount to activities that are well-understood, routine, and conventional. See MPEP 2106.05(d).

Capturing an image (code) with an RFID reader: Ritter, US Patent No. 7734507 (Col. 3, Lines 56-67); "RFID: Riding on the Chip" by Pat Russo, Frozen Food Age, New York: Dec. 2003, Vol. 52, Issue 5, page S22.
Receiving or transmitting data over a network: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014).
Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Outputting/presenting data to a user: Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); MPEP 2106.05(g)(3).
Using a machine learning model to determine user segment characteristics for an ad campaign: https://whites.agency/blog/how-to-use-machine-learning-for-customer-segmentation/.

Thus, taken alone and in combination, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea), and the claims are ineligible under 35 USC 101. Independent system/apparatus claim 20 and CRM claim 19 also contain the identified abstract ideas, with the additional elements of a processor and storage medium, which are generic computer components, and thus are not significantly more for the same reasons and rationale above. Dependent claims 2-18 further describe the abstract idea. The additional elements of the dependent claims fail to integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea. Thus, as the dependent claims remain directed to a judicial exception, and as the additional elements of the claims do not amount to significantly more, the dependent claims are not patent eligible. As such, the claims are not patent eligible. Therefore, the Office finds no improvements to another technology or field, no improvements to the function of the computer itself, and no meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. Therefore, based on the two-part Alice Corp. analysis, there are no limitations in any of the claims that transform the exception (i.e., the abstract idea) into a patent-eligible application.

Claim Rejections - Not an Ordered Combination

None of the limitations, considered as an ordered combination, provide eligibility because, taken as a whole, the claims simply instruct the practitioner to implement the abstract idea with routine, conventional activity.

Claim Rejections - Preemption

Allowing the claims, as presently claimed, would preempt others from implementing collaborative training in a graphically simulated virtual reality (VR) environment. Furthermore, the claim language only recites the abstract idea of performing this method; there are no concrete steps articulating a particular way in which this idea is being implemented or describing how it is being performed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 19, 20 are rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739. 19/058,412 – Claim 1. Kevan 2012/0123758 teaches A method, comprising: causing, by a processor, display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices, each participant device associated with a respective participant of a plurality of participants (Kevan 2012/0123758 [0008 - displaying the user avatar that represents the user on a display device … user avatar is positioned in the three-dimensional virtual environment] In one embodiment, a method of prompting a user of a computing device to choose how a user avatar should interact with a three-dimensional virtual environment in response to a critical incident includes displaying the three-dimensional virtual environment on a display device. The three-dimensional virtual environment graphically represents a physical environment. The method further includes displaying the user avatar that represents the user on a display device. The user avatar is positioned in the three-dimensional virtual environment. The method further includes simulating the critical incident in the three-dimensional virtual environment, and prompting the user to choose how the user avatar should interact with the three-dimensional virtual environment in response to the simulated critical incident.); causing, by the processor, display of a first avatar on a first participant device of the plurality of participant devices, wherein the first avatar is associated with a first participant of the plurality of participants (Kevan 2012/0123758 [0033 - each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees] As illustrated by the training system of FIG. 2, multiple users remotely located from one another may participate in a training scenario at the same time. For example, each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees. Further, the training system of FIG. 
2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees.); causing, by the processor, display of a second avatar on a second participant device of the plurality of participant devices, wherein the second avatar is associated with a second participant of the plurality of participants (Kevan 2012/0123758 [0033 - each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees] As illustrated by the training system of FIG. 2, multiple users remotely located from one another may participate in a training scenario at the same time. For example, each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees. Further, the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees.); receiving, by the processor from the first and second participant devices, respective indications of input selecting a first training session of a plurality of training sessions (Kevan 2012/0123758 [0017 - trainee computing device 102 may include a processor 230, input/output hardware 232, network interface hardware 234, a data storage component 236 (which may store training modules for use in critical incident response training)] As also illustrated in FIG. 1, the trainee computing device 102 may include a processor 230, input/output hardware 232, network interface hardware 234, a data storage component 236 (which may store training modules for use in critical incident response training), and a memory component 240. The memory component 240 may be configured as volatile and/or nonvolatile memory and, as such, may include random access memory (including SRAM, DRAM, and/or other types of random access memory), flash memory, registers, compact discs (CD), digital versatile discs (DVD), and/or other types of storage components. Additionally, the memory component 240 may be configured to store operating logic 242 and a training module 244 (each of which may be embodied as a computer program, firmware, or hardware, as an example). A local interface 246 is also included in FIG. 1 and may be implemented as a bus or other interface to facilitate communication among the components of the trainee computing device 102. [0029 - virtual environment training may be initiated by the user clicking on a start button] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. 
An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.); causing display, in the 3D VR environment on the first and second participant devices, a training room for the first training session (Kevan 2012/0123758 [0021 - physical environment represented by the three-dimensional virtual environment may include physical environments such as, for example, a business, an airport, a train station, a subway station, a bus station, a university, a college, a school, a portion of or an entire city, or any other physical environment in which critical incident response may be required … building designs may provide locations within the virtual environment such as offices, classrooms, storerooms, hallways, stairwells, elevators, bathrooms, meeting rooms, food preparation areas] The training module 244 may be configured so that a three-dimensional virtual environment that is displayed on a display device, such as a monitor, of the trainee computing device 102. The displayed three-dimensional virtual environment may be a graphical representation of a physical environment in which the user of the trainee computing device 102 is to be trained to respond to critical incidents. The physical environment represented by the three-dimensional virtual environment may include physical environments such as, for example, a business, an airport, a train station, a subway station, a bus station, a university, a college, a school, a portion of or an entire city, or any other physical environment in which critical incident response may be required. The training module 244 may include one or more buildings or building designs (e.g., Computer-aided drafting drawings) customized to the physical environment in which the user may be exposed to and/or required to respond to a critical incident at such location. The user or an administrator may input customized information pertaining to the actual physical environment, including the number of buildings, the type of buildings, the floor plans of the buildings, the layout of rooms and hallways in the buildings, etc. The input information may be substantially replicated in the virtual environment. The buildings may form a campus setting, an airport, a train station or depot, a subway station or depot, a bus station or depot, at least a portion of or an entire city, a restaurant, or a military facility, among other settings. Customizing the virtual environment to the physical environment likely to be encountered by a trainee may enhance the effectiveness of the training. 
The building designs may provide locations within the virtual environment such as offices, classrooms, storerooms, hallways, stairwells, elevators, bathrooms, meeting rooms, food preparation areas, and any other space allocations where a critical incident may occur.); deploying, by the processor in the training room, an artificial intelligence (AI)-based virtual agent that conducts conversational communication with training room participants (Kevan 2012/0123758 [0018 - input/output hardware 232 may include a monitor, keyboard, mouse, printer, camera, microphone, speaker, and/or other device for receiving, sending, and/or presenting data] The processor 230 may include any processing component configured to receive and execute instructions (such as from the data storage component 236 and/or memory component 240). The input/output hardware 232 may include a monitor, keyboard, mouse, printer, camera, microphone, speaker, and/or other device for receiving, sending, and/or presenting data. The network interface hardware 234 may include any wired or wireless networking hardware, such as a modem, LAN port, wireless fidelity (Wi-Fi) card, WiMax card, mobile communications hardware, and/or other hardware for communicating with other networks and/or devices.); determining, by the processor, completion of the first training session via virtual interaction of the virtual agent, the first avatar, and the second avatar in the training room in the 3D VR environment, wherein the virtual agent, the first avatar, and the second avatar are displayed in the training room in the 3D VR environment (Kevan 2012/0123758 [0029 - user may proceed through a training sequence until all stages of the training sequence have been completed] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice. [0033 - multiple users remotely located from one another may participate in a training scenario at the same time … each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees … the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees] As illustrated by the training system of FIG.
2, multiple users remotely located from one another may participate in a training scenario at the same time. For example, each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees. Further, the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees.); and sending, by the processor to the first and second participant devices, a respective digital certificate for completion of the first training session (Kevan 2012/0123758 [0029 - user may proceed through a training sequence until all stages of the training sequence have been completed] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.).

Kevan 2012/0123758 may not expressly disclose the "artificial intelligence (AI)-based virtual agent that conducts conversational communication" features; however, Yerli 2021/0201588 teaches these features as follows (Yerli 2021/0201588 [0019 - communications are enabled between human users, artificial reality users, or combinations thereof, through communication channels enabling communications through sharing of audio, video, text, and hand or facial gestures or movements … for example, the persistent virtual world system may comprise a plurality of artificial intelligence virtual assistants that may be represented by virtual avatars with which human users may communicate in augmented or virtual reality] In some embodiments, communications are enabled between human users, artificial reality users, or combinations thereof, through communication channels enabling communications through sharing of audio, video, text, and hand or facial gestures or movements. Thus, for example, the persistent virtual world system may comprise a plurality of artificial intelligence virtual assistants that may be represented by virtual avatars with which human users may communicate in augmented or virtual reality.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Yerli 2021/0201588.
One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment, which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience).

Kevan 2012/0123758 may not expressly disclose the "digital certificate" features; however, Vestemean 2022/0094739 teaches these features as follows (Vestemean 2022/0094739 [0272 - digital certificate] In one embodiment, the digital certificate template 35 includes, but is not limited to, certificates of attendance and/or completion and/or achievement and/or licensing, comprising: Continuing Education Certificates, Certificates of Accomplishment, Certificates of Achievement, Certificates of Attendance, Certificates of Completion, Diplomas (e.g., grade school, middle school, high-school, university, etc.), Leadership Awards, Membership Certificates, Sports Awards, Sales Awards, Training Program Certification, Sponsor Acknowledgements, Licenses, Professional Licenses (e.g., accounting, aviation, building contractor, engineering, legal, medical, real-estate, etc.), Military (e.g., rank, award ribbons (e.g., purple heart, combat infantry badge, etc.), etc.). However, the present invention is not limited to these embodiments, and other embodiments and other digital certificate templates can be used to practice the invention.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Vestemean 2022/0094739. One of ordinary skill in the art would have been motivated to do so to utilize well known digital certificate features, which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience).

19/058,412 – Claim 19.

A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a processor, cause the processor to: cause display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices, each participant device associated with a respective participant of a plurality of participants; cause display of a first avatar on a first participant device of the plurality of participant devices, wherein the first avatar is associated with a first participant of the plurality of participants; cause display of a second avatar on a second participant device of the plurality of participant devices, wherein the second avatar is associated with a second participant of the plurality of participants; receive, from the first and second participant devices, respective indications of input selecting a first training session of a plurality of training sessions; cause display, in the 3D VR environment on the first and second participant devices, a training room for the first training session; deploy, in the training room, an artificial intelligence (AI)-based virtual agent that conducts conversational communication with training room participants; determine completion of the first training session via virtual interaction of the virtual agent, the first avatar, and the second avatar in the training room in the 3D VR environment, wherein the virtual agent, the first avatar, and the second avatar are displayed in the training room in the 3D VR environment; and send, to the first and second participant devices, a respective digital certificate for completion of the first training session.

19/058,412 – Claim 20.

An apparatus, comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: cause display of a three-dimensional (3D) virtual reality (VR) environment on a plurality of participant devices, each participant device associated with a respective participant of a plurality of participants; cause display of a first avatar on a first participant device of the plurality of participant devices, wherein the first avatar is associated with a first participant of the plurality of participants; cause display of a second avatar on a second participant device of the plurality of participant devices, wherein the second avatar is associated with a second participant of the plurality of participants; receive, from the first and second participant devices, respective indications of input selecting a first training session of a plurality of training sessions; cause display, in the 3D VR environment on the first and second participant devices, a training room for the first training session; deploy, in the training room, an artificial intelligence (AI)-based virtual agent that conducts conversational communication with training room participants; determine completion of the first training session via virtual interaction of the virtual agent, the first avatar, and the second avatar in the training room in the 3D VR environment, wherein the virtual agent, the first avatar, and the second avatar are displayed in the training room in the 3D VR environment; and send, to the first and second participant devices, a respective digital certificate for completion of the first training session.

Claims 19 and 20 have limitations similar to those of Claim 1 and are therefore REJECTED under the same rationale as Claim 1.
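
To make the "respective digital certificate" element concrete, here is a minimal hypothetical sketch of issuing and verifying a signed completion record. The HMAC-based scheme, the key, and all field names are illustrative assumptions; Vestemean's certificate templates and any production credential system would differ.

```python
import hashlib
import hmac
import json

# Placeholder signing key, for illustration only; a real deployment would
# use managed key material (or an actual PKI certificate scheme).
SERVER_KEY = b"training-server-secret"

def issue_certificate(participant_id: str, session_id: str) -> dict:
    # "sending ... a respective digital certificate for completion"
    payload = {"participant": participant_id, "session": session_id,
               "type": "Certificate of Completion"}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_certificate(cert: dict) -> bool:
    body = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("first-participant-device", "session-001")
assert verify_certificate(cert)
```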
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in further view of King et al. 2023/0290263.

19/058,412 – Claim 2.

Kevan 2012/0123758 further teaches The method of claim 1, further comprising prior to receiving the input selecting the first training session: determining, by the processor for the first and second participants, a respective subset of the plurality of training sessions (Kevan 2012/0123758 [0015 - Training modules may be displayed on the computer of a trainee, such as an employee, in a three-dimensional virtual environment and may be accessed through an administrator computing device, such as an employer mainframe, personal computer, or other computing device] Embodiments are directed to a training module system comprising a training module for critical incident response in a virtual environment. Emergency lockdown procedures may be incorporated into the training modules, creating virtual environment scenarios for testing a trainee, such as an employee. The training module system may comprise a computer and a display device that allows a user to view a virtual environment including graphical representations of the user, potential emergency responders, other employees or trainees, and virtual site maps related to the workplace in a virtual environment. Training modules may be displayed on the computer of a trainee, such as an employee, in a three-dimensional virtual environment and may be accessed through an administrator computing device, such as an employer mainframe, personal computer, or other computing device. The user's computer may be linked to an Emergency Response Group (e.g., fire and police department, designated health and safety officers, etc.) through the training module system.).

Kevan 2012/0123758 may not expressly disclose the "subset of the plurality of training sessions" features; however, King et al. 2023/0290263 teaches these features as follows (King et al. 2023/0290263 [0082 - listing of training sessions may include all available training sessions or may include a subset of the available training sessions] Returning to the example process 800 of FIG. 8, at subprocess 806, a training option selection is received. For example, the customer may submit their selection to the customer application 132 via the customer system 130. In a first example, the customer may select the first option 914 of FIG. 9 to allocate training sessions to users. For example, at subprocess 808, training session and user allocations may be collected. For example, the customer application 132 may output a listing of training sessions that the customer may select. The listing of training sessions may include all available training sessions or may include a subset of the available training sessions, for example, based on a selected machine. FIG. 10 illustrates a GUI 1000 including a training session allocation window 1002, as presented herein. The training session allocation window 1002 includes a first portion 1010 ("Asphalt Compactor Scenarios") and a second portion 1020 ("Available Users"). The first portion 1010 indicates the one or more training sessions available for a machine. In the example of FIG. 10, the first portion 1010 includes four example training sessions associated with an asphalt compactor (e.g., the compacting machine 906 of FIG. 9). As shown in FIG. 10, a first scenario 1012 of the available training sessions is selected (e.g., as indicated by the filled-in circle next to the first scenario 1012).).

Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by King et al. 2023/0290263. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment, which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience).
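
The claim 2 element (a respective per-participant subset of the plurality of training sessions) can be pictured as a simple filtering step over a session catalog, loosely analogous to King's subset of available training sessions. The role-based rule, the catalog, and all names below are invented for illustration:

```python
# Hypothetical catalog mapping each session to the roles allowed to take it.
ALL_SESSIONS = {
    "vr-teller-basics":   {"roles": {"customer_service"}},
    "vr-escalations":     {"roles": {"customer_service", "supervisor"}},
    "vr-coaching-skills": {"roles": {"trainer", "manager"}},
}

def sessions_for(role: str) -> list[str]:
    """Return the subset of training sessions available to one participant."""
    return sorted(sid for sid, meta in ALL_SESSIONS.items() if role in meta["roles"])

print(sessions_for("customer_service"))  # ['vr-escalations', 'vr-teller-basics']
print(sessions_for("trainer"))           # ['vr-coaching-skills']
```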
The user may adjust one or more features of the timetable 38 using a special GUI that allows him to change days and/or hours of each of the sessions.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Breznitz 2014/0234826. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). 19/058,412 – Claim 4. Kevan 2012/0123758 further teaches The method of claim 3, wherein the timetable displays at least one attribute of each training session in the respective subset of the plurality of training sessions for the first and second participants (Kevan 2012/0123758 [0023] The user or an administrator may input data to personalize the attributes of the user avatar. For example, the user may be able to input attributes, such as, a name, a hair color, a skin color, a height, a weight, a body type, an eye color, etc., that will be reflected in the graphical representation of the user avatar displayed to the user. The use of an avatar that is personalized to represent a user of the trainee computing device 102 in conjunction with the three-dimensional virtual environment may enhance the training of the user to respond to a critical incident. The critical incident response training of a user that involves a virtual environment and an avatar representing the user may be enhanced because of the Proteus effect, which may cause the user to believe he or she is part of a real situation, thereby causing the user to enhance his or her reaction to the situation and retention of the training. [0026] A question and answer component of a critical incident response test may be incorporated into the training module 244. The question and answer component may display a question to the user on a display device, prompt the user to input an answer to the question, and provide feedback to the user in response to the user's answer. [0029] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.).
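To make the timetable mechanics at issue in claims 2 through 4 concrete: determining a per-participant subset of the plurality of training sessions (claim 2) and laying that subset out as a timetable annotated with each session's attributes (claims 3 and 4) amounts to straightforward filtering and formatting. The following sketch is illustrative only; every name, field, and value in it is invented for this illustration and is drawn from neither the claims nor the cited references.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class TrainingSession:
        name: str
        role: str          # role the session targets (e.g., "trainee", "trainer")
        duration_min: int  # attribute shown on the timetable (claim 4)
        level: str         # attribute shown on the timetable (claim 4)

    # Hypothetical catalog; stands in for the "plurality of training sessions".
    SESSIONS = [
        TrainingSession("Branch lobby basics", "trainee", 30, "intro"),
        TrainingSession("Escalation handling", "trainee", 45, "advanced"),
        TrainingSession("Coaching tools", "trainer", 20, "intro"),
    ]

    def subset_for(role: str) -> list[TrainingSession]:
        # Claim 2: each participant is matched to a respective subset.
        return [s for s in SESSIONS if s.role == role]

    def timetable(role: str, start: datetime) -> list[str]:
        # Claim 3: schedule the subset; claim 4: show attributes per entry.
        rows, t = [], start
        for s in subset_for(role):
            rows.append(f"{t:%H:%M}  {s.name}  [{s.level}, {s.duration_min} min]")
            t += timedelta(minutes=s.duration_min)
        return rows

    if __name__ == "__main__":
        for row in timetable("trainee", datetime(2025, 2, 20, 9, 0)):
            print(row)

Run as-is, the sketch prints one timetable row per session with its attributes inline.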
Kevan 2012/0123758 may not expressly disclose the “attribute” features, however, Breznitz 2014/0234826 teaches these features as follows (Breznitz 2014/0234826 [0104 - receives the evaluated personal reading pace and other required personal details of the user 35, such as the user's age, language level etc] Training module 120 receives the evaluated personal reading pace and other required personal details of the user 35, such as the user's age, language level etc. Training module 120 then either builds or retrieves a training program including multiple training sessions each session including one or more pairs of training exercises 36 and may build and present a recommended timetable 38 for carrying out the training sessions. The user may adjust one or more features of the timetable 38 using a special GUI that allows him to change days and/or hours of each of the sessions.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Breznitz 2014/0234826. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Caron et al. 2021/0127004. 19/058,412 – Claim 5. Kevan 2012/0123758 further teaches The method of claim 1, wherein the first participant is a customer service trainee, wherein the second participant is at least one of a trainer, a supervisor, and a manager (Kevan 2012/0123758 [0015 - training module system may comprise a computer and a display device that allows a user to view a virtual environment including graphical representations of the user, potential emergency responders, other employees or trainees, and virtual site maps related to the workplace in a virtual environment] Embodiments are directed to a training module system comprising a training module for critical incident response in a virtual environment. Emergency lockdown procedures may be incorporated into the training modules, creating virtual environment scenarios for testing a trainee, such as an employee. The training module system may comprise a computer and a display device that allows a user to view a virtual environment including graphical representations of the user, potential emergency responders, other employees or trainees, and virtual site maps related to the workplace in a virtual environment. Training modules may be displayed on the computer of a trainee, such as an employee, in a three-dimensional virtual environment and may be accessed through an administrator computing device, such as an employer mainframe, personal computer, or other computing device. The user's computer may be linked to an Emergency Response Group (e.g., fire and police department, designated health and safety officers, etc.) though the training module system. [0028 - avatars representing emergency responders, health and safety officers, and teachers, for example, may also be pre-selected and assigned before training begins] As an example and not a limitation, a trainee may log onto a computer for entry into a particular training module. 
The trainee may choose the avatar that she wants to represent her. A selection of avatars from which to choose from may be provided to the user. For example, each avatar may be presented in a box and when the trainee clicks on the box to select one of the available avatars, the non-selected avatars will no longer be available for use during the training session. According to one embodiment, avatars representing emergency responders, health and safety officers, and teachers, for example, may also be pre-selected and assigned before training begins.). Kevan 2012/0123758 may not expressly disclose the “customer service trainee…” features, however, Caron et al. 2021/0127004 teaches these features as follows (Caron et al. 2021/0127004 [0001 - simulated training of agents or customer service representatives (CSR), and more particularly to objectively presenting scenarios and measuring various aspects of CSR interaction in those scenarios] The present disclosure generally relates to simulated training of agents or customer service representatives (CSR), and more particularly to objectively presenting scenarios and measuring various aspects of CSR interaction in those scenarios. [0002 - human trainer may apply the same subjective characteristics for evaluating the various CSRs] Training of agents or customer service representatives (CSR) may be time-consuming and subjective. Different human trainers may impose personal styles, preferences, and biases during training resulting a varied training experience. Further, each human trainer may apply the same subjective characteristics for evaluating the various CSRs. Further, evaluation of other ‘soft-skills’ such as empathy and understanding, when evaluated by a human trainer, may also be evaluated based on subjective interpretations. Current human-based training and evaluation of CSRs has resulted in inconsistent training and inconsistent evaluation. [0013 - generating a simulated caller dialog for a scenario for testing a customer service representative (CSR), the simulated caller dialog including a caller intended issue specific to the scenario; a means for presenting at least a portion of the simulated caller dialog to the CSR, the portion including the caller intended issue; a means for receiving a CSR response to the at least the portion of the simulated caller dialog] Another general aspect includes a system including a means for generating a simulated caller dialog for a scenario for testing a customer service representative (CSR), the simulated caller dialog including a caller intended issue specific to the scenario; a means for presenting at least a portion of the simulated caller dialog to the CSR, the portion including the caller intended issue; a means for receiving a CSR response to the at least the portion of the simulated caller dialog, the CSR response including a CSR interpretation of the caller intended issue in the at least the portion of the simulated caller dialog; a means for generating an understanding determination result based on an intent determination recognition score generated by an intent determination recognition model, the understanding determination result indicating whether the CSR in the CSR response correctly or incorrectly identified the caller intended issue; a means for generating a CSR score for the scenario based on the understanding determination result; and a means for recording the CSR score in a database. 
[0023 - CSR trainee may interact with a device, such as a headset, earpiece, computer screen, or smartphone, to listen to the words used in a simulated caller conversation] The various objectively measured parameters include intent determination, emotional state based on facial recognition, empathy and sentiment keyword recognition/usage based on keyword analysis, and expected interaction-keyword usage analysis. The objective measurements occur in response to presenting a CSR trainee with audio of a simulated caller using one or more scenarios, and then capturing the CSR trainee's words and facial expressions for analysis. Specifically, the CSR trainee may interact with a device, such as a headset, earpiece, computer screen, or smartphone, to listen to the words used in a simulated caller conversation. The CSR trainee's response is then analyzed and the CSR trainee may be provided with suggestions for improving engagement with the simulated caller.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Caron et al. 2021/0127004. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). 19/058,412 – Claim 6. Kevan 2012/0123758 further teaches The method of claim 5, wherein the virtual agent operates as a simulated customer receiving simulated customer service by the first participant (Kevan 2012/0123758 [0024 - incident may be simulated in the three-dimensional virtual environment…] A critical incident may be simulated in the three-dimensional virtual environment and the user may be prompted to choose how the user avatar, which represents the user, should interact with the three-dimensional virtual environment to respond to the simulated critical incident.). Kevan 2012/0123758 may not expressly disclose the “virtual agent operates as a simulated customer receiving simulated customer service” features, however, Caron et al. 2021/0127004 teaches these features as follows (Caron et al. 2021/0127004 [0001 - simulated training of agents or customer service representatives (CSR), and more particularly to objectively presenting scenarios and measuring various aspects of CSR interaction in those scenarios] The present disclosure generally relates to simulated training of agents or customer service representatives (CSR), and more particularly to objectively presenting scenarios and measuring various aspects of CSR interaction in those scenarios. [0002 - human trainer may apply the same subjective characteristics for evaluating the various CSRs] Training of agents or customer service representatives (CSR) may be time-consuming and subjective. Different human trainers may impose personal styles, preferences, and biases during training resulting a varied training experience. Further, each human trainer may apply the same subjective characteristics for evaluating the various CSRs. Further, evaluation of other ‘soft-skills’ such as empathy and understanding, when evaluated by a human trainer, may also be evaluated based on subjective interpretations. Current human-based training and evaluation of CSRs has resulted in inconsistent training and inconsistent evaluation. 
[0013 - generating a simulated caller dialog for a scenario for testing a customer service representative (CSR), the simulated caller dialog including a caller intended issue specific to the scenario; a means for presenting at least a portion of the simulated caller dialog to the CSR, the portion including the caller intended issue; a means for receiving a CSR response to the at least the portion of the simulated caller dialog] Another general aspect includes a system including a means for generating a simulated caller dialog for a scenario for testing a customer service representative (CSR), the simulated caller dialog including a caller intended issue specific to the scenario; a means for presenting at least a portion of the simulated caller dialog to the CSR, the portion including the caller intended issue; a means for receiving a CSR response to the at least the portion of the simulated caller dialog, the CSR response including a CSR interpretation of the caller intended issue in the at least the portion of the simulated caller dialog; a means for generating an understanding determination result based on an intent determination recognition score generated by an intent determination recognition model, the understanding determination result indicating whether the CSR in the CSR response correctly or incorrectly identified the caller intended issue; a means for generating a CSR score for the scenario based on the understanding determination result; and a means for recording the CSR score in a database. [0023 - CSR trainee may interact with a device, such as a headset, earpiece, computer screen, or smartphone, to listen to the words used in a simulated caller conversation] The various objectively measured parameters include intent determination, emotional state based on facial recognition, empathy and sentiment keyword recognition/usage based on keyword analysis, and expected interaction-keyword usage analysis. The objective measurements occur in response to presenting a CSR trainee with audio of a simulated caller using one or more scenarios, and then capturing the CSR trainee's words and facial expressions for analysis. Specifically, the CSR trainee may interact with a device, such as a headset, earpiece, computer screen, or smartphone, to listen to the words used in a simulated caller conversation. The CSR trainee's response is then analyzed and the CSR trainee may be provided with suggestions for improving engagement with the simulated caller.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Caron et al. 2021/0127004. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Minert et al. 2009/0089153. 19/058,412 – Claim 7. Kevan 2012/0123758 further teaches The method of claim 1, wherein the completion of the first training session is defined by at least one criterion (Kevan 2012/0123758 [0029] The virtual environment training may be initiated by the user clicking on a start button.
When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.). Kevan 2012/0123758 may not expressly disclose the "completion of the first training session" features, however, Minert et al. 2009/0089153 teaches these features as follows (Minert et al. 2009/0089153 [0080 - defining a time parameter for completion of the training] The method further includes defining a time parameter for completion of the training (505). The time parameter may be defined by a user, or by software. The time parameter can include a target date for completion of a training session, a target number of days for completion of the training session, a target number of training sessions to be delivered during a time frame (e.g. a number per day), or may indicate that no time requirement is set at this time. For example, the time parameter can define a time parameter for a single company representative, a time parameter for a team of company representatives, a number of training session completions to be completed in a time period, or a time parameter for a group of selected company representatives.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Minert et al. 2009/0089153. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Snyder et al. 2004/0115596. 19/058,412 – Claim 8. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: scheduling, by the processor, a second training session of the plurality of training sessions for which the first training session is a prerequisite (Kevan 2012/0123758 [0028 - training session] As an example and not a limitation, a trainee may log onto a computer for entry into a particular training module. The trainee may choose the avatar that she wants to represent her. A selection of avatars from which to choose from may be provided to the user. For example, each avatar may be presented in a box and when the trainee clicks on the box to select one of the available avatars, the non-selected avatars will no longer be available for use during the training session.
According to one embodiment, avatars representing emergency responders, health and safety officers, and teachers, for example, may also be pre-selected and assigned before training begins.). Kevan 2012/0123758 may not expressly disclose the "prerequisite" features, however, Snyder et al. 2004/0115596 teaches these features as follows (Snyder et al. 2004/0115596 [0076 - prerequisite modules associated with the subsequent module, to receive the student information, and to automatically produce a schedule of classes for teaching at least some of the modules…] a processor (12), adapted to receive the designations of the curriculum modules, to receive the curriculum information, to receive the designation for each subsequent module of the one or more prerequisite modules associated with the subsequent module, to receive the student information, and to automatically produce a schedule of classes for teaching at least some of the modules, such that each class includes at least one teacher and at least one student.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Snyder et al. 2004/0115596. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Geotgeluk et al. 2021/0366193. 19/058,412 – Claim 9. Kevan 2012/0123758 further teaches The method of claim 1, wherein the 3D VR environment is generated using a three-dimensional model based at least in part on photogrammetry (Kevan 2012/0123758 [0008] In one embodiment, a method of prompting a user of a computing device to choose how a user avatar should interact with a three-dimensional virtual environment in response to a critical incident includes displaying the three-dimensional virtual environment on a display device. The three-dimensional virtual environment graphically represents a physical environment. The method further includes displaying the user avatar that represents the user on a display device. The user avatar is positioned in the three-dimensional virtual environment. The method further includes simulating the critical incident in the three-dimensional virtual environment, and prompting the user to choose how the user avatar should interact with the three-dimensional virtual environment in response to the simulated critical incident.). Kevan 2012/0123758 may not expressly disclose the "photogrammetry" features, however, Geotgeluk et al. 2021/0366193 teaches these features as follows (Geotgeluk et al. 2021/0366193 [0015 - photogrammetry and similar techniques … make 3D-modeling and VR environment construction more expedient] Some recent approaches, such as photogrammetry, seek to improve upon this problem by capturing 3D scan data of real-world environments and then using the scan data as a basis for building a 3D model or representation the real-world environment. However, such approaches only alleviate a portion of the burden described above and remain fundamentally constrained by the same difficulties associated with the 3D-modeling process.
In other words, photogrammetry and similar techniques may make 3D-modeling and VR environment construction more expedient, but they nevertheless still adhere to the traditional approach of using a plurality of textured and/or animated 3D objects.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Geotgeluk et al. 2021/0366193. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Armstrong 2022/0398652. 19/058,412 – Claim 10. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: receiving, by the virtual agent during the first training session, a natural language question from the first participant (Kevan 2012/0123758 [0026 - question and answer component may display a question to the user on a display device, prompt the user to input an answer to the question, and provide feedback to the user in response to the user's answer] A question and answer component of a critical incident response test may be incorporated into the training module 244. The question and answer component may display a question to the user on a display device, prompt the user to input an answer to the question, and provide feedback to the user in response to the user's answer.). Kevan 2012/0123758 may not expressly disclose the “natural language question and response” features, however, Armstrong 2022/0398652 teaches these features as follows (Armstrong 2022/0398652 [0024 - an artificial intelligence engine may generate one or more machine learning models trained to answer questions asked by a customer using the virtual marketplace platform … machine learning models may be trained to use natural language processing and training data to determine how to respond to a question or statement made by a customer] Further, the voice of the real person may be recorded and the virtual avatar may speak similarly as the real person. In some embodiments, an artificial intelligence engine may generate one or more machine learning models trained to answer questions asked by a customer using the virtual marketplace platform. The machine learning models may be trained to use natural language processing and training data to determine how to respond to a question or statement made by a customer. The machine learning models may continuously learn over time based on feedback provided by the customer whether the answer was satisfactory or not. Further, the machine learning models may learn over time based on the action performed by the customer over time. For example, if the customer asked for a certain product and the virtual avatar presents a product but the customer does not add the product to a virtual shopping cart, the machine learning model may be updated to provide a different product in a subsequent question. [0056 - machine learning models 132 may be trained to answer questions asked by customer virtual avatars and/or users of the virtual marketplace platform. 
The machine learning models 132 may be trained with training data including a corpus of labeled questions and a corpus of labeled answers. In some embodiments, the machine learning models 132 may perform natural language processing and/or sentiment analysis and/or tone analysis. The answers selected and/or the response selected by the machine learning models 132 may be determined based on the question, sentiment, and/or tone] The computing system 116 may include a training engine 130 capable of generating one or more machine learning models 132. Although depicted separately from the AI engine 140, the training engine 130 may, in some embodiments, be included in the AI engine 140 executing on the server 128. In some embodiments, the AI engine 140 may use the training engine 130 to generate the machine learning models 132 trained to perform inferencing operations, predicting operations, determining operations, controlling operations, or the like. The machine learning models 132 may be trained to answer questions asked by customer virtual avatars and/or users of the virtual marketplace platform. The machine learning models 132 may be trained with training data including a corpus of labeled questions and a corpus of labeled answers. In some embodiments, the machine learning models 132 may perform natural language processing and/or sentiment analysis and/or tone analysis. The answers selected and/or the response selected by the machine learning models 132 may be determined based on the question, sentiment, and/or tone. In some embodiments, the machine learning models 132 may be trained to generate statements to say to the customer virtual avatar based on the user's behavior, other users' behavior, the user's preferences, or the like. The one or more machine learning models 132 may be generated by the training engine 130 and may be implemented in computer instructions executable by one or more processing devices of the training engine 130 or the servers 128. To generate the one or more machine learning models 132, the training engine 130 may train the one or more machine learning models 132.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Armstrong 2022/0398652. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). 19/058,412 – Claim 11. Kevan 2012/0123758 further teaches The method of claim 10, further comprising: outputting, in the training room by the virtual agent during the first training session, a natural language response to the natural language question (Kevan 2012/0123758 [0026 - question and answer component may display a question to the user on a display device, prompt the user to input an answer to the question, and provide feedback to the user in response to the user's answer] A question and answer component of a critical incident response test may be incorporated into the training module 244. The question and answer component may display a question to the user on a display device, prompt the user to input an answer to the question, and provide feedback to the user in response to the user's answer.). 
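Claims 10 and 11 turn on the virtual agent fielding a natural language question and returning a natural language response. That loop can be pictured with a minimal retrieval-style sketch, using bag-of-words similarity as a stand-in for the trained models Armstrong describes; the corpus entries, threshold, and fallback line below are all invented for illustration and appear in none of the cited references.

    # Toy virtual-agent Q&A: match the participant's question against a small
    # labeled corpus and return the answer paired with the closest question.
    CORPUS = {
        "how do i verify a customer's identity":
            "Ask for two forms of ID before discussing the account.",
        "what do i say when a customer is upset":
            "Acknowledge the frustration first, then restate the issue.",
    }

    def similarity(a: str, b: str) -> float:
        # Jaccard overlap of word sets; a real system would use a trained model.
        wa, wb = set(a.lower().strip("?.!").split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def respond(question: str) -> str:
        best = max(CORPUS, key=lambda q: similarity(question, q))
        if similarity(question, best) < 0.2:  # invented confidence floor
            return "Let me bring a trainer into the room for that one."
        return CORPUS[best]

    if __name__ == "__main__":
        print(respond("What should I say to an upset customer?"))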
Kevan 2012/0123758 may not expressly disclose the “natural language question and response” features, however, Armstrong 2022/0398652 teaches these features as follows (Armstrong 2022/0398652 [0024 - an artificial intelligence engine may generate one or more machine learning models trained to answer questions asked by a customer using the virtual marketplace platform … machine learning models may be trained to use natural language processing and training data to determine how to respond to a question or statement made by a customer] Further, the voice of the real person may be recorded and the virtual avatar may speak similarly as the real person. In some embodiments, an artificial intelligence engine may generate one or more machine learning models trained to answer questions asked by a customer using the virtual marketplace platform. The machine learning models may be trained to use natural language processing and training data to determine how to respond to a question or statement made by a customer. The machine learning models may continuously learn over time based on feedback provided by the customer whether the answer was satisfactory or not. Further, the machine learning models may learn over time based on the action performed by the customer over time. For example, if the customer asked for a certain product and the virtual avatar presents a product but the customer does not add the product to a virtual shopping cart, the machine learning model may be updated to provide a different product in a subsequent question. [0056 - machine learning models 132 may be trained to answer questions asked by customer virtual avatars and/or users of the virtual marketplace platform. The machine learning models 132 may be trained with training data including a corpus of labeled questions and a corpus of labeled answers. In some embodiments, the machine learning models 132 may perform natural language processing and/or sentiment analysis and/or tone analysis. The answers selected and/or the response selected by the machine learning models 132 may be determined based on the question, sentiment, and/or tone] The computing system 116 may include a training engine 130 capable of generating one or more machine learning models 132. Although depicted separately from the AI engine 140, the training engine 130 may, in some embodiments, be included in the AI engine 140 executing on the server 128. In some embodiments, the AI engine 140 may use the training engine 130 to generate the machine learning models 132 trained to perform inferencing operations, predicting operations, determining operations, controlling operations, or the like. The machine learning models 132 may be trained to answer questions asked by customer virtual avatars and/or users of the virtual marketplace platform. The machine learning models 132 may be trained with training data including a corpus of labeled questions and a corpus of labeled answers. In some embodiments, the machine learning models 132 may perform natural language processing and/or sentiment analysis and/or tone analysis. The answers selected and/or the response selected by the machine learning models 132 may be determined based on the question, sentiment, and/or tone. In some embodiments, the machine learning models 132 may be trained to generate statements to say to the customer virtual avatar based on the user's behavior, other users' behavior, the user's preferences, or the like. 
The one or more machine learning models 132 may be generated by the training engine 130 and may be implemented in computer instructions executable by one or more processing devices of the training engine 130 or the servers 128. To generate the one or more machine learning models 132, the training engine 130 may train the one or more machine learning models 132.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Armstrong 2022/0398652. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Snyder et al. 2004/0115596. 19/058,412 – Claim 12. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: determining, by the processor, a second training session of the plurality of training sessions for which the first training session is a prerequisite (Kevan 2012/0123758 [0028 - training session] As an example and not a limitation, a trainee may log onto a computer for entry into a particular training module. The trainee may choose the avatar that she wants to represent her. A selection of avatars from which to choose from may be provided to the user. For example, each avatar may be presented in a box and when the trainee clicks on the box to select one of the available avatars, the non-selected avatars will no longer be available for use during the training session. According to one embodiment, avatars representing emergency responders, health and safety officers, and teachers, for example, may also be pre-selected and assigned before training begins.); generating, by the processor based on the completion of the first training session, a training room for the second training session (Kevan 2012/0123758 [0028 - training session] As an example and not a limitation, a trainee may log onto a computer for entry into a particular training module. The trainee may choose the avatar that she wants to represent her. A selection of avatars from which to choose from may be provided to the user. For example, each avatar may be presented in a box and when the trainee clicks on the box to select one of the available avatars, the non-selected avatars will no longer be available for use during the training session. According to one embodiment, avatars representing emergency responders, health and safety officers, and teachers, for example, may also be pre-selected and assigned before training begins.); and causing display (Kevan 2012/0123758 [0029 - user may proceed through a training sequence until all stages of the training sequence have been completed] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. 
An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice. [0033 - multiple users remotely located from one another may participate in a training scenario at the same time … each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees … the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees] As illustrated by the training system of FIG. 2, multiple users remotely located from one another may participate in a training scenario at the same time. For example, each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees. Further, the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees.), by the processor in the three dimensional virtual reality environment on the first and second participant devices, the training room for the second training session, the first avatar, the second avatar, and the virtual agent (Kevan 2012/0123758 [0021 - physical environment represented by the three-dimensional virtual environment may include physical environments such as, for example, a business, an airport, a train station, a subway station, a bus station, a university, a college, a school, a portion of or an entire city, or any other physical environment in which critical incident response may be required … building designs may provide locations within the virtual environment such as offices, classrooms, storerooms, hallways, stairwells, elevators, bathrooms, meeting rooms, food preparation areas] The training module 244 may be configured so that a three-dimensional virtual environment that is displayed on a display device, such as a monitor, of the trainee computing device 102. The displayed three-dimensional virtual environment may be a graphical representation of a physical environment in which the user of the trainee computing device 102 is to be trained to respond to critical incidents. 
The physical environment represented by the three-dimensional virtual environment may include physical environments such as, for example, a business, an airport, a train station, a subway station, a bus station, a university, a college, a school, a portion of or an entire city, or any other physical environment in which critical incident response may be required. The training module 244 may include one or more buildings or building designs (e.g., Computer-aided drafting drawings) customized to the physical environment in which the user may be exposed to and/or required to respond to a critical incident at such location. The user or an administrator may input customized information pertaining to the actual physical environment, including the number of buildings, the type of buildings, the floor plans of the buildings, the layout of rooms and hallways in the buildings, etc. The input information may be substantially replicated in the virtual environment. The buildings may form a campus setting, an airport, a train station or depot, a subway station or depot, a bus station or depot, at least a portion of or an entire city, a restaurant, or a military facility, among other settings. Customizing the virtual environment to the physical environment likely to be encountered by a trainee may enhance the effectiveness of the training. The building designs may provide locations within the virtual environment such as offices, classrooms, storerooms, hallways, stairwells, elevators, bathrooms, meeting rooms, food preparation areas, and any other space allocations where a critical incident may occur.). Kevan 2012/0123758 may not expressly disclose the “prerequisite” features, however, Snyder et al. 2004/0115596 teaches these features as follows (Snyder et al. 2004/0115596 [0076 - prerequisite modules associated with the subsequent module, to receive the student information, and to automatically produce a schedule of classes for teaching at least some of the modules…] a processor (12), adapted to receive the designations of the curriculum modules, to receive the curriculum information, to receive the designation for each subsequent module of the one or more prerequisite modules associated with the subsequent module, to receive the student information, and to automatically produce a schedule of classes for teaching at least some of the modules, such that each class includes at least one teacher and at least one student.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Snyder et al. 2004/0115596. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). 19/058,412 – Claim 13. 
Kevan 2012/0123758 further teaches The method of claim 12, further comprising: determining, by the processor, completion of the second training session; and sending, by the processor to the first and second participant devices, a respective digital certificate for completion of the second training session (Kevan 2012/0123758 [0029 - user may proceed through a training sequence until all stages of the training sequence have been completed] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice. [0033 - multiple users remotely located from one another may participate in a training scenario at the same time … each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees … the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees] As illustrated by the training system of FIG. 2, multiple users remotely located from one another may participate in a training scenario at the same time. For example, each user of a participating trainee computing device may have an associated user avatar that is present in and interacts with a three-dimensional virtual environment common to all of the participating trainees. Further, the training system of FIG. 2 may allow actual local emergency support personnel, such as firemen and police, to participate in training because the local emergency support personnel could simply utilize a trainee computing device that would generate an associated avatar that would be present in the three-dimensional virtual environment experienced by all trainees.). 
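The remaining element of claim 13, sending each participant device a digital certificate of completion, is mechanically simple: at its plainest it is a signed record of who completed what. A standard-library-only sketch follows; the key, payload fields, and format are invented here, and a production system would presumably use asymmetric signatures or full PKI rather than a shared HMAC secret.

    import hashlib, hmac, json

    SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical shared secret

    def issue_certificate(participant: str, session: str) -> dict:
        # Signed completion record to be sent to a participant device.
        payload = {"participant": participant, "session": session,
                   "status": "completed"}
        body = json.dumps(payload, sort_keys=True).encode()
        sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        return {"payload": payload, "signature": sig}

    def verify_certificate(cert: dict) -> bool:
        # Recompute the signature over the payload and compare in constant time.
        body = json.dumps(cert["payload"], sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert["signature"])

    if __name__ == "__main__":
        cert = issue_certificate("first participant", "second training session")
        print(verify_certificate(cert))  # True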
Kevan 2012/0123758 may not expressly disclose the "digital certificate" features, however, Vestemean 2022/0094739 teaches (Vestemean 2022/0094739 [0272 - digital certificate] In one embodiment, the digital certificate template 35 includes, but is not limited to, certificates of attendance and/or completion and/or achievement and/or licensing, comprising: Continuing Education Certificates, Certificates of Accomplishment, Certificates of Achievement, Certificates of Attendance, Certificates of Completion, Diplomas (e.g., grade school, middle school, high-school, university, etc.), Leadership Awards, Membership Certificates, Sports Awards, Sales Awards, Training Program Certification, Sponsor Acknowledgements, Licenses, Professional Licenses (e.g., accounting, aviation, building contractor, engineering, legal, medical, real-estate, etc.), Military (e.g., rank, award ribbons (e.g., purple heart, combat infantry badge, etc.), etc. However the present invention is not limited to these embodiments and other embodiments and other digital certificate templates can be used to practice the invention.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Vestemean 2022/0094739. One of ordinary skill in the art would have been motivated to do so to utilize well known digital certificate features which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Stone et al. 2020/0097081. 19/058,412 – Claim 14. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: outputting, by the processor in the 3D VR environment, a navigation menu (Kevan 2012/0123758 [0008 - displaying the user avatar that represents the user on a display device … user avatar is positioned in the three-dimensional virtual environment] In one embodiment, a method of prompting a user of a computing device to choose how a user avatar should interact with a three-dimensional virtual environment in response to a critical incident includes displaying the three-dimensional virtual environment on a display device. The three-dimensional virtual environment graphically represents a physical environment. The method further includes displaying the user avatar that represents the user on a display device. The user avatar is positioned in the three-dimensional virtual environment. The method further includes simulating the critical incident in the three-dimensional virtual environment, and prompting the user to choose how the user avatar should interact with the three-dimensional virtual environment in response to the simulated critical incident.). Kevan 2012/0123758 may not expressly disclose the "navigation menu" features, however, Stone et al. 2020/0097081 teaches these features as follows (Stone et al. 2020/0097081 [0190] In some embodiments, a control signal may include a signal for controlling a display of content provided by the AR system, such as by controlling the display of navigation menus and/or other content presented in a user interface displayed in an AR environment provided by the AR system.).
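Likewise, the navigation menu of claim 14 reduces to a small state machine: labeled entries, a cursor, and selection input, rendered inside the environment. The sketch below is illustrative only, with invented names and no particular VR runtime assumed.

    class NavigationMenu:
        # Minimal in-environment menu: labeled entries plus a cursor, moved by
        # controller input and drawn as an overlay each frame.
        def __init__(self, entries):
            self.entries, self.cursor = list(entries), 0

        def handle_input(self, action: str):
            if action == "down":
                self.cursor = (self.cursor + 1) % len(self.entries)
            elif action == "up":
                self.cursor = (self.cursor - 1) % len(self.entries)
            elif action == "select":
                return self.entries[self.cursor]  # chosen destination
            return None

        def render(self) -> str:
            return "\n".join(("> " if i == self.cursor else "  ") + e
                             for i, e in enumerate(self.entries))

    if __name__ == "__main__":
        menu = NavigationMenu(["Training room", "Timetable", "Leave session"])
        menu.handle_input("down")
        print(menu.render())
        print("selected:", menu.handle_input("select"))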
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Stone et al. 2020/0097081. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Yamamoto et al. 2020/0050256. 19/058,412 – Claim 15. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: outputting, by the processor in the 3D VR environment, a selectable element associated with a video (Kevan 2012/0123758 [0029 - virtual environment training may be initiated by the user clicking on a start button] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.). Kevan 2012/0123758 may not expressly disclose the “selectable element associated with a video” features, however, Yamamoto et al. 2020/0050256 teaches these features as follows (Yamamoto et al. 
2020/0050256 [0022] Some example mode selection buttons (or icons) 13 are shown, such as, for example: a relax mode button 15 to cause VR system 11 to provide a relaxing VR environment; a flight simulator mode button 16 to cause a flight simulator VR game or experience to be provided to the user; a beach mode button 17 to cause VR system 11 to provide a beach VR environment to the user, e.g., including a 3D video of a beach, with ocean waves, sea gulls, etc., and/or sounds of the ocean; a focus mode button 18 that may cause VR system 11 to provide a user with a VR environment that may be useful in allowing a user to focus, e.g., a specific type of video and/or music; a work mode button 19 that may cause VR system 11 to provide a VR environment that may assist with a user in working, e.g., a specific audio and/or video for this purpose; a camping mode button 20 that may cause the VR system 11 to provide a camping VR environment, e.g., with a video of a camping scene with tents, a campfire, trees, crickets chirping, etc.; a rafting (e.g., white water rafting) mode button 21 to cause the VR system to provide a rafting VR environment (e.g., a whitewater rafting VR game), e.g., including video of rafting down a river, and showing other rafts ahead of the user's raft; a beach volleyball mode button 22 to cause the VR system 11 to allow the user to play a beach volleyball VR game or experience or view a beach volleyball game; a heat mode button 23 which a user may press (e.g., if the user is cold) to cause a heating (or a warm or at least warmer) VR environment to be provided by VR system 11 to the user, e.g. clouds part, and the sun comes out as part of the 3D video presented by VR system 11 to the user, e.g., to provide the impression of heating or a warming environment; and, a cool mode button 24 which a user may press or select (e.g., if the user is hot, and wants to cool down) to cause a cooling (or a cool/cooler) VR environment to be provided by VR system 11 to the user, e.g., clouds cover the sun, a snow or mountain scene with wind and snow blowing, is displayed by VR system 11 as part of the 3D video, to provide the impression of cooling or a cooler environment, for example. These are just some example modes and mode selection buttons 13 (or icons or GUI elements) that may be selected for the VR system 11, and other modes or mode selection buttons may be provided. In this manner, VR system 11 may perform changes in a virtual environment in response to the selection of one of a plurality of modes (e.g., user modes). [0026] According to a first illustrative example implementation, VR system 11 may detect that one of the mode selection buttons 13 have been pressed or selected (to select a specific user mode). In response to detecting the selection of a mode selection button 13 (or in response to receiving a selection of a mode), the VR system 11 may perform one or more changes in the virtual environment based on the selected mode, e.g., by displaying a 3D video and/or outputting audio information to provide a VR environment to the user based on or in accordance with the selected mode. 
At the same time, or shortly after receiving the mode selection or after detecting the selection or pressing of a mode selection button 13, VR system 11 may also send a control signal (e.g., a mode indication signal, sent via wireless interface 12 and wireless interface 30) to the processor 29 (or other processor or controller) to indicate the selected mode to cause processor 29 to perform a change within the physical environment based on or in accordance with the selected mode.). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features as taught by Yamamoto et al. 2020/0050256. One of ordinary skill in the art would have been motivated to do so to utilize well known features and tools useful to implementing collaborative training in a graphically simulated virtual reality (VR) environment which should prove to improve user experience, maximize profits, and optimize revenue (i.e., improve user experience). 19/058,412 – Claim 16. Kevan 2012/0123758 further teaches The method of claim 15, further comprising: receiving, by the processor, input selecting the selectable element associated with the video (Kevan 2012/0123758 [0029 - virtual environment training may be initiated by the user clicking on a start button] The virtual environment training may be initiated by the user clicking on a start button. When the training begins, the trainee's avatar may be put in a simulated critical incident response scenario and given choices of how to proceed. The training module 244 may evaluate the user's choice to determine if the choice is correct in the simulated critical incident. A correct choice may lead the user to another scenario. An incorrect choice may result in the trainee computing device 102 displaying the consequences of the choice in the three-dimensional virtual environment. The user may proceed through a training sequence until all stages of the training sequence have been completed. In addition, in one illustrative embodiment, the training module 244 may time how long it takes the user to make his/her one or more choices in response to the simulated critical incident. The training module 244 may also provide feedback and correlated consequences to both the choice and the time it took for the user to make that choice.). Kevan 2012/0123758 may not expressly disclose the “selectable element associated with a video” features, however, Yamamoto et al. 2020/0050256 teaches these features as follows (Yamamoto et al. 
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Yamamoto et al. 2020/0050256; in view of Stauber et al. 2023/0093979.

19/058,412 – Claim 17. Kevan 2012/0123758 further teaches The method of claim 16, further comprising: initiating, by the processor, playback of the video in the 3D VR environment based on the input selecting the selectable element associated with the video (Kevan 2012/0123758 [0029], reproduced above with respect to Claim 16). Kevan 2012/0123758 may not expressly disclose the “playback” features; however, Stauber et al. 2023/0093979 teaches these features as follows (Stauber et al. 2023/0093979 [0146] In some embodiments, such as in FIG. 7A, while presenting (e.g., visual presentation, audio presentation, etc.) a content item in a first mode of presentation, the electronic device (e.g., 101) displays (802a), via the display generation component (e.g., 120), a user interface (e.g., 704) of an application associated with the content item in a three-dimensional environment (e.g., 702) (e.g., such as the user interface of the application described with reference to method 1000).
In some embodiments, displaying the user interface of the application in the three-dimensional environment while presenting the content item in the first mode of presentation includes displaying the three-dimensional environment without a respective virtual lighting effect based on the content item (e.g., that will be displayed in response to receiving an input corresponding to a request to transition from a first mode of presentation to a second mode of presentation). In some embodiments, the content item includes audio content, such as music, a podcast, or audiobook. In some embodiments, the content item includes video content, such as a movie, video clip, or episode in a series of episodic content. In some embodiments, the user interface of the application associated with the content item includes an image associated with the content item (e.g., album artwork), one or more selectable options for modifying playback of the content item (e.g., play/pause, skip ahead, skip back), and one or more options for modifying the presentation mode of the content item. In some embodiments, the application associated with the content item is a content (e.g., browsing, streaming, playback, library, sharing) application. In some embodiments, the three-dimensional environment includes virtual objects, such as application windows, operating system elements, representations of other users, and/or content items and/or representations of physical objects or regions in the physical environment of the electronic device. In some embodiments, the representations of physical objects or regions are displayed in the three-dimensional environment via the display generation component (e.g., virtual or video passthrough). In some embodiments, the representations of physical objects or regions are views of the physical objects or regions in the physical environment of the electronic device visible through a transparent portion of the display generation component (e.g., true or real passthrough). In some embodiments, the electronic device displays the three-dimensional environment from the viewpoint of the user at a location in the three-dimensional environment corresponding to the physical location of the electronic device and/or the user in the physical environment of the electronic device. In some embodiments, the three-dimensional environment is generated, displayed, or otherwise caused to be viewable by the device (e.g., a computer-generated reality (XR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc.). In some embodiments, presenting the content item in the first mode includes presenting the content item without applying a lighting effect based on the content item, as will be described in more detail below.

[0213] In some embodiments, such as in FIG. 9A, the electronic device (e.g., 101) displays (1002a), via the display generation component (e.g., 120), a user interface (e.g., 902, 904, 906) of an application (e.g., such as the user interface of the application described with reference to method 800 or a different user interface of the same application associated with the user interface described with reference to method 800) in a three-dimensional environment (e.g., 901) that includes a plurality of interactive elements (e.g., 918a, 910a, 914a), wherein the plurality of interactive elements (e.g., 918a, 910a, 914a) are responsive to (e.g., are configured to perform a respective operation in response to detecting) inputs including a transition of a respective portion of a user (e.g., 903a) of the electronic device (e.g., 101) from a first pose to a second pose. In some embodiments, the three-dimensional environment includes virtual objects, such as application windows, operating system elements, representations of other users, and/or content items and/or representations of physical objects or regions in the physical environment of the electronic device. In some embodiments, the representations of physical objects or regions are displayed in the three-dimensional environment via the display generation component (e.g., virtual or video passthrough). In some embodiments, the representations of physical objects or regions are views of the physical objects or regions in the physical environment of the electronic device visible through a transparent portion of the display generation component (e.g., true or real passthrough). In some embodiments, the electronic device displays the three-dimensional environment from the viewpoint of the user at a location in the three-dimensional environment corresponding to the physical location of the electronic device and/or the user in the physical environment of the electronic device. In some embodiments, the three-dimensional environment is generated, displayed, or otherwise caused to be viewable by the device (e.g., a computer-generated reality (XR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc.). In some embodiments, the user interface includes a plurality of interactive user interface elements that, when selected, cause the electronic device to perform respective functions, such as navigating to different pages of the user interface, initiating or modifying playback of content items associated with the application of the user interface, performing other actions with respect to content items associated with the application of the user interface, initiating communication with one or more other electronic devices, and/or changing a setting of the application or electronic device. In some embodiments, the application is a content (e.g., streaming, playback, library, sharing) application. In some embodiments, the user interface of the application is an extended user interface of the content application and the user interface described above with reference to method 800 is a scaled down and/or miniature user interface associated with the content application (e.g., a miniplayer user interface of the content application).).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features taught by Stauber et al. 2023/0093979. One of ordinary skill in the art would have been motivated to do so in order to use well-known features and tools for implementing collaborative training in a graphically simulated virtual reality (VR) environment, thereby improving the user experience.
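Illustration (hypothetical, not code from Stauber): the claim 17 mapping pairs a selectable element that initiates playback with Stauber's two presentation modes, where the second mode applies a virtual lighting effect derived from the content item. A minimal Python sketch under those assumptions, with invented names such as ContentPlayer:

```python
# Minimal sketch (hypothetical names) of Stauber-style presentation modes:
# a selectable element starts playback, and switching from the first to the
# second presentation mode applies a lighting effect based on the content.

from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    media_uri: str

class ContentPlayer:
    def __init__(self, environment):
        self.environment = environment  # the rendered 3D environment
        self.mode = "first"             # first mode: no content-based lighting

    def on_select_play(self, item: ContentItem) -> None:
        # Claim 17 mapping: input on a selectable element initiates playback.
        self.environment.play_video(item.media_uri)

    def set_presentation_mode(self, mode: str, item: ContentItem) -> None:
        self.mode = mode
        if mode == "second":
            # Second mode: apply a virtual lighting effect derived from
            # the content item (e.g., tint the environment to the video).
            self.environment.apply_lighting(source=item.media_uri)
        else:
            self.environment.clear_lighting()
```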
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over: Kevan 2012/0123758; in view of Yerli 2021/0201588; in further view of Vestemean 2022/0094739; in view of Toyoda et al. 2018/0322783.

19/058,412 – Claim 18. Kevan 2012/0123758 further teaches The method of claim 1, further comprising: outputting, by the processor in the 3D VR environment, a graphical element associated with a predetermined location (Kevan 2012/0123758 [0029], reproduced above with respect to Claim 16). Kevan 2012/0123758 may not expressly disclose the “graphical element associated with a predetermined location” features; however, Toyoda et al. 2018/0322783 teaches these features as follows (Toyoda et al. 2018/0322783 [0006]-[0008]:

[0006] In one embodiment, an engagement system for inducing awareness in a driver about a surrounding environment of a vehicle using an augmented reality (AR) system within the vehicle is disclosed. The engagement system includes one or more processors with a memory communicably coupled to the one or more processors. The memory stores a monitoring module including instructions that when executed by the one or more processors cause the one or more processors to identify one or more potential hazards to the vehicle in the surrounding environment from sensor data collected from at least one sensor of the vehicle. The memory stores an engagement module including instructions that when executed by the one or more processors cause the one or more processors to render, within the AR system, a display scenario about the one or more potential hazards by displaying one or more graphical elements that correlate with locations of the one or more potential hazards in the surrounding environment.

[0007] A non-transitory computer-readable medium for inducing awareness in a driver about a surrounding environment of a vehicle using an augmented reality (AR) system within the vehicle. The non-transitory computer-readable medium stores instructions that when executed by one or more processors cause the one or more processors to identify one or more potential hazards to the vehicle in the surrounding environment from sensor data collected from at least one sensor of the vehicle. The instructions include instructions to render, within the AR system, a display scenario about the one or more potential hazards by displaying one or more graphical elements that correlate with locations of the one or more potential hazards in the surrounding environment.

[0008] A method for inducing awareness in a driver about a surrounding environment of a vehicle using an augmented reality (AR) system within the vehicle. The method includes identifying one or more potential hazards to the vehicle in the surrounding environment from sensor data collected from at least one sensor of the vehicle. The method includes rendering, within the AR system, a display scenario about the one or more potential hazards by displaying one or more graphical elements that correlate with locations of the one or more potential hazards in the surrounding environment.).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Kevan 2012/0123758 to include the features taught by Toyoda et al. 2018/0322783. One of ordinary skill in the art would have been motivated to do so in order to use well-known features and tools for implementing collaborative training in a graphically simulated virtual reality (VR) environment, thereby improving the user experience.
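Illustration (hypothetical, not code from Toyoda): Toyoda's engagement module follows a two-step loop, first identifying potential hazards from sensor data and then rendering graphical elements at locations correlated with each hazard. A minimal Python sketch with invented names (identify_hazards, render_display_scenario, an ar_display stub):

```python
# Minimal sketch (hypothetical names) of the Toyoda-style engagement loop:
# identify potential hazards from vehicle sensor data, then render graphical
# elements in the AR display at locations correlated with each hazard.

def identify_hazards(sensor_frames, threat_threshold=0.5):
    """Return (location, score) pairs for detections above the threshold."""
    return [
        (frame["location"], frame["score"])
        for frame in sensor_frames
        if frame["score"] >= threat_threshold
    ]

def render_display_scenario(ar_display, hazards):
    """Display one graphical element per hazard at its correlated location."""
    for location, score in hazards:
        ar_display.draw_marker(position=location, intensity=score)
```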
Examiner’s Response to Arguments

Per Applicant’s amendments and arguments, the rejections are withdrawn. Applicant's arguments have been considered but are moot in view of the new ground(s) of rejection, which Applicant’s amendments necessitated.

Examiner’s Response: Claim Rejections – 35 USC §112

Per Applicant’s amendments and arguments, the §112 rejections are withdrawn.

Examiner’s Response: Claim Rejections – 35 USC §101

Per Applicant’s amendments and arguments, the §101 rejections are withdrawn. See the notes above for additional reasoning and rationale, including Applicant’s amendments and arguments, the lack of an abstract idea, and practical integration. Regarding Claims 1-15, on pages 6-12 of Applicant’s Remarks (dated 12/27/2016), Applicant traverses the 35 USC §101 rejections arguing the following:

Examiner’s Response: Claim Rejections – 35 USC §102 / §103

Per Applicant’s amendments and arguments, the rejections are withdrawn. See the notes above for additional reasoning and rationale, including Applicant’s amendments and arguments and the unique combination of features and elements not taught by the prior art without hindsight reasoning.
Regarding Claim X, on pages 8-9 of Applicant’s Remarks / After Final Amendments (dated 07/15/2011), Applicant argues that the cited references (Ellis and Vandermolen) fail to teach, describe, or suggest the amended features. Specifically, Applicant argues that the cited references do not teach, describe, or suggest the following: . With respect, Applicant’s arguments are deemed unpersuasive and the amended features remain rejected as follows.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

PERTINENT PRIOR ART – Patent Literature

The prior art made of record and considered pertinent to applicant's disclosure:

Donderici 2023/0386138 [0014 - virtual activities include virtual social activities (e.g., communications (e.g., chat, messaging, document sharing, etc.) with other people who also engage in the virtual environment), virtual entertainment activities (e.g., engaging in a digital interactive experience), virtual educational activities (e.g., participating in a training session held in the virtual environment)]

Geri et al. 2020/0038119 [0027 - facilitating training and collaboration in a virtual environment. The system enables multiple users, including an instructor and participants, to interact with various types of content in a virtual environment in real time]

Allen et al. 2022/0207818 [0003 - Virtual reality environments allow for training and certification of users and operators in environments]

Bauer 2011/0072367 - The method involves providing an avatar representing a medical professional, e.g., a doctor, and a patient, providing an appointment room, and allowing the medical professional to access medical information, e.g., drug allergy, through a virtual environment. Visual information regarding a patient's condition is depicted in the virtual environment. Voice communication is established between the patient and the medical professional through the virtual environment. A predictive visual display relating to the patient's health and treatment is presented based on the medical information.

Johnson et al. 2021/0264810 [0002 - virtual reality authoring system for generating a virtual reality (VR) training session]

Anderson et al. 2021/0072947 [0045 - instructor 101 can see 2D video display 330 and 3D widget 310 of learner room 110, learner 102 can see synchronous avatar 101A of instructor 101, and instructor 101 and learner 102 can both see annotations made in shared 3D virtual space]

Goyal et al. 2023/0244877 [0018 - agent for processing a chat conversation, a management interface for configuring virtual agents, and a Natural Language Understanding (NLU) training engine for training utterances to predict matching intents]

PERTINENT PRIOR ART – Non-Patent Literature (NPL)

The NPL prior art made of record and considered pertinent to applicant's disclosure:
THIS ACTION IS MADE FINAL

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW T. SITTNER, whose telephone number is (571) 270-7137 and whose email is matthew.sittner@uspto.gov. The examiner can normally be reached Monday-Friday, 8:00am - 5:00pm (Mountain Time Zone). Please schedule interview requests via email: matthew.sittner@uspto.gov. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah M. Monfeldt, can be reached at (571) 270-1833.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW T SITTNER/
Primary Examiner, Art Unit 3629

Prosecution Timeline

Feb 20, 2025
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §102, §103
Mar 18, 2026
Applicant Interview (Telephonic)
Mar 18, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12596996
SYSTEMS AND METHODS FOR PROVIDING DYNAMIC REPRESENTATION OF ASSETS IN A FACILITY
2y 5m to grant · Granted Apr 07, 2026
Patent 12591843
SCALABLE AND EFFICIENT PACKAGE DELIVERY USING TRANSPORTER FLEET
2y 5m to grant · Granted Mar 31, 2026
Patent 12572962
CUSTOMER SERVING ASSISTANCE APPARATUS, CUSTOMER SERVING ASSISTANCE METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 10, 2026
Patent 12572992
SYSTEMS AND METHODS FOR AUTOMATED BUILDING CODE CONFORMANCE
2y 5m to grant · Granted Mar 10, 2026
Patent 12565335
DETERMINING PART UTILIZATION BY MACHINES
2y 5m to grant · Granted Mar 03, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+56.2%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 890 resolved cases by this examiner. Grant probability derived from career allow rate.
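One plausible reading of these figures, assuming the +56.2% interview lift is a percentage-point gap between with-interview and without-interview allow rates rather than a multiplier on the 58% career rate (a multiplier would give roughly 91%, not 99%; the dashboard's exact formula is not disclosed):

```python
# Hedged sanity check of the projection arithmetic, assuming the "+56.2%"
# interview lift is a percentage-point gap between allow rates with and
# without an interview (the dashboard's actual formula is not disclosed).
with_interview = 0.99   # displayed "With Interview" grant probability
lift_points = 0.562     # displayed "+56.2%" interview lift
without_interview = with_interview - lift_points
print(f"implied without-interview rate: {without_interview:.1%}")  # ~42.8%
# Under this reading, the 58% career allow rate would be the blend of the
# two groups, weighted by how often this examiner's cases include interviews.
```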
