Prosecution Insights
Last updated: April 19, 2026

Application No. 18/050,456 — Methods and Systems for a Conflict Resolution Simulator

Status: Non-Final Office Action (Round 5) — §101, §103, §112
Filed: Oct 27, 2022
Examiner: LANE, DANIEL E
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Smarter Reality LLC

Grant Probability: 4% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 5m
Grant Probability with Interview: 13%
Examiner Intelligence

Career Allow Rate: 4% (12 granted / 290 resolved) — grants only 4% of cases, -65.9% vs Tech Center average
Interview Lift: +8.7% (moderate lift), based on resolved cases with interview
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 42
Total Applications: 332, across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 19.2% (-20.8% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 29.7% (-10.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 290 resolved cases.
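The figures above are straightforward ratios and percentage-point differences. A minimal sketch of how they are likely derived — note the Tech Center averages (70% overall, 40% per statute) and the 12.8% with-interview rate are back-solved here from the displayed deltas, so treat them as assumptions rather than reported values:

```python
# Career allow rate from the examiner's resolved docket.
granted = 12
resolved = 290
allow_rate = granted / resolved  # ~0.041, displayed as 4%

# Delta vs. Tech Center average, in percentage points.
# The 70% TC average is an assumption back-solved from the -65.9 delta.
tc_avg_overall = 0.70
delta_overall = (allow_rate - tc_avg_overall) * 100

# Statute-specific win rates vs. an assumed 40% TC average.
statute_rates = {"101": 0.290, "103": 0.192, "102": 0.178, "112": 0.297}
tc_avg_statute = 0.40
deltas = {s: round((r - tc_avg_statute) * 100, 1) for s, r in statute_rates.items()}

# Interview lift: grant probability with interview minus baseline.
baseline, with_interview = 0.041, 0.128  # 0.128 assumed from the 13% figure
lift = (with_interview - baseline) * 100
```

Running this reproduces the dashboard's -65.9, -11.0/-20.8/-22.2/-10.3, and +8.7 figures, which is consistent with all deltas being simple percentage-point differences against a single estimated Tech Center baseline.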

Office Action — §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 04 March 2026 has been entered.

This is a response to Applicant's amendment filed on 04 March 2026, wherein:
Claims 1, 13, and 20 are amended.
Claims 2, 4-9, 11, 14, and 16-19 are original.
Claims 3, 10, 12, and 15 are previously presented.
Claims 1-20 are pending.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/315,828, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. In particular, the disclosure of the prior-filed application fails to provide sufficient written description for “each scenario of the plurality of scenarios comprises an interaction associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic;… receiving a verbal input associated with a dialog option in the scenario spoken by the user from the computing device; converting the verbal input to a textual representation; performing natural language processing on the textual representation to generate a natural language understanding result; determining a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configuring customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; controlling, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response; receiving, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; training one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options; and based on the characteristic of the user, modifying, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device” in claims 1, 13, and 20, “determining the response to the verbal input based on at least one of the following: an attitude of the user, conversational choices of the user, the behavioral characteristic of the AI virtual companion, and background information of the scenario” in claims 2 and 14, “based on the characteristic, modifying the visual content associated with the scenario being rendered on the display of the computing device” in claim 10, “wherein the sensor is a wearable device, a camera, a device located proximate the user, a device included in the computing device, or some combination thereof” in claim 11, and “wherein the characteristic comprises a vital sign, a physiological state, a heartrate, a blood pressure, a temperature, a perspiration rate, or some combination thereof” in claim 12 to show one of ordinary skill in the art that Applicant had possession of the claimed invention.

Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved.
For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I).

In particular, the specification of the prior-filed application, at best, merely recites similar language as the claims without providing any substantive description for the claimed limitations identified above, for the same reasons that the instant specification also fails, as identified in the rejections of the claims under 35 USC 112(a) below for the same claim limitations. Thus, pending claims 1-20 do not gain benefit of priority to US Provisional Application 63/315,828. Therefore, pending claims 1-20 have an effective filing date of 27 October 2022.

Claim Objections

Claims 12-19 are objected to because of the following informalities: The status identifier of claim 12 is labeled as “Previously Amended” while other previously amended claims are labeled as “Previously Presented”. Uniformity is recommended. 37 CFR 1.121 recites that “Previously Presented” is an acceptable identifier. The amendments to claims 13-19 include at least one amendment that is difficult to perceive (i.e., manner of adding a single character) and does not follow MPEP guidance for making such amendments. See MPEP 714(II)(C), which recites “extra portions of text may be included before and after text being deleted, all in strike-through, followed by including and underlining the extra text with the desired change (e.g., number 14 as).” Continuing such amendments risks denial of entry of amendments via a PTOL-324 Notice of Non-Compliant Amendment. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The text of those sections of Title 35, U.S. Code 112(a) not included in this action can be found in a prior Office action. Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claims 1, 13, and 20, the originally filed disclosure is silent regarding “a verbal input associated with a dialog option in the scenario spoken by the user from the computing device”. The specification only recites “verbal input associated with the scenario spoken by the user from the computing device”. It is particularly silent regarding verbal input associated with a dialog option. Furthermore, the disclosure is silent regarding the term “dialog”. It only recites “dialogue”, but is silent regarding verbal input associated with a dialogue option. While one interpretation is that “dialog” is the same as “dialogue”, “dialog” can also reasonably be interpreted to have a different meaning from “dialogue”. However, the specification is silent regarding “dialog” and thus does not provide any clarification. Furthermore, the specification does recite “dialogue options” and “dialogue choices”, but is silent regarding verbal input being associated with a dialogue option or choice.
See, for example, para. 41, 50, 52, 54, 55, 58, 63, and 64 of the specification. Thus, “verbal input associated with a dialog option” is new matter. Dependent claims 2-12 and 14-19 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.

Further regarding claims 1, 13, and 20, the disclosure fails to provide sufficient written description for “each scenario of the plurality of scenarios comprises an interaction associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic;… converting the verbal input to a textual representation; performing natural language processing on the textual representation to generate a natural language understanding result; determining a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configuring customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; controlling, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response; receiving, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; training one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options; and based on the characteristic of the user, modifying, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device” to show one of ordinary skill in the art that Applicant had possession of the claimed invention.

Claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP 2161.01(I). The disclosure merely recites that this is performed in results-based language without describing the algorithm or steps/procedure taken to perform the function with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See, for example, at least para. 32, 34, 35, 38-47, 51, 65, 66, 69-72, 79, 82, 94, and 106.

For instance, regarding speech to text, para. 71 merely recites that a non-descript speech to text component “may use one or more speech to text techniques to process the speech audio data. For example, models in speech recognition may be divided into an acoustic model and a language model. The acoustic model may solve the problem of turning sound signals into some kind of phonetic representation. The language model may house the domain knowledge of words, grammar, and sentence structure for the language. These conceptual models can be implemented with probabilistic models (e.g., Hidden Markov models, Deep Neural Network models, etc.,) using machine learning algorithms.” Furthermore, it is unclear how the “conceptual models can be implemented with probabilistic models (e.g., Hidden Markov models, Deep Neural Network models, etc.) using machine learning algorithms” since the provided examples of probabilistic models are themselves machine learning algorithms.

Regarding natural language processing, para. 72 merely recites that a non-descript natural language processing component “may use natural language processing (NLP), data mining, and pattern recognition technologies to process the text equivalent to generate a natural language understanding result. More specifically, natural language processing component 804 may use different AI technologies to understand language, translate content between languages, recognize elements in speech, and perform sentiment analysis. For example, natural language processing component 804 may use NLP and data mining and pattern recognition technologies to collect and process information provided in different information resources. Additionally, natural language processing component 804 may use natural language understanding (NLU) techniques to process unstructured data using text analytics to extract entities, relationships, keywords, semantic roles, and so forth.”

Regarding determining a response of the AI virtual companion, para. 79 provides some examples of resultant responses, and para. 41-47 generically describe basic operations of using artificial intelligence to determine a response without providing any parameters of the AI model (claimed as an AI engine) used.
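For context, the acoustic-model/language-model split the quoted passage gestures at is the classical noisy-channel formulation of speech recognition: choose the word sequence W maximizing P(A | W) · P(W), where the first factor is the acoustic model and the second the language model. A toy sketch of that general technique — all candidate strings and probabilities below are hypothetical and hard-coded for illustration, not drawn from the application:

```python
import math

# Toy noisy-channel decoder:
#   score(W) = log P(audio | W)  (acoustic model)
#            + log P(W)          (language model)
acoustic_logprob = {
    "wreck a nice beach": math.log(0.40),  # slightly better acoustic fit
    "recognize speech": math.log(0.35),
}
language_logprob = {
    "wreck a nice beach": math.log(0.001),  # implausible word sequence
    "recognize speech": math.log(0.02),     # plausible word sequence
}

def decode(candidates):
    """Return the candidate transcript maximizing acoustic + language score."""
    return max(candidates, key=lambda w: acoustic_logprob[w] + language_logprob[w])

best = decode(list(acoustic_logprob))
# The language-model prior outweighs the small acoustic edge here.
```

In a real recognizer both factors would themselves be learned models (HMMs or neural networks), which is the examiner's point: the quoted paragraph names the decomposition but supplies no such models or parameters.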
Thus, the disclosure is silent regarding any substantive details of any algorithm used to perform the claimed operations.

Furthermore, regarding the newly added language “based on… a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user”, the disclosure is silent regarding such language, and particularly silent regarding “based on a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario”. The closest language is found in para. 66, which recites “The AI virtual companion's response may be controlled by an expert AI system (e.g., AI engine 140 in FIG. 1) which balances several factors such as the user's attitude (e.g., friendly, angry, etc.,) and conversational choices, the virtual companion's characteristics, and background of the scenario to influence the AI virtual companion's behavior and the course of the conversation.” In short, there is no “physical location” where an AI virtual companion interacts with the user, let alone a physical interaction where an AI virtual companion interacts with the user in a virtual reality environment. Such language is contradictory. The AI virtual companion is only disclosed to interact with the user in a virtual reality environment. Thus, this includes new matter in addition to being insufficiently described. Dependent claims 2-12 and 14-19 inherit the deficiencies of their respective parent claims, and thus are rejected under the same rationale.
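Paragraph 66's “balances several factors” description could be made concrete in many ways; the rejection's complaint is that the specification never picks one. For illustration only, here is one hypothetical sketch (a weighted scoring of candidate responses; the weights, factor names, and candidates are invented, not the applicant's disclosed algorithm) of the kind of algorithmic detail the examiner says is missing:

```python
# Hypothetical factor-weighted response selection: each candidate response
# carries trait scores, and the scenario state (user attitude, companion
# mannerism) sets how much each trait matters. All values are illustrative.
CANDIDATES = [
    {"text": "I hear you. Let's slow down.", "deescalate": 0.9, "assertive": 0.1},
    {"text": "That's not acceptable.",       "deescalate": 0.1, "assertive": 0.9},
]

def select_response(user_attitude, companion_mannerism):
    """Pick the candidate whose traits best match the scenario state."""
    # An angry user facing a calm-mannered companion favors de-escalation.
    want_deescalate = 1.0 if (user_attitude == "angry" and companion_mannerism == "calm") else 0.3

    def score(candidate):
        return (want_deescalate * candidate["deescalate"]
                + (1 - want_deescalate) * candidate["assertive"])

    return max(CANDIDATES, key=score)["text"]

reply = select_response("angry", "calm")
```

A written-description-sufficient disclosure would pin down something at this level of specificity (factors, how they combine, how the result maps to a response) rather than reciting only the desired outcome.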
Further regarding claims 1, 13, and 20, the originally filed disclosure is silent regarding “train one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting of the scenario based on the dialog options”. The closest language is found in para. 43 of the specification, which only recites “[t]he training engine 130 may use a base data set of user selections and scenario states and outputs pertaining to resulting of the scenario based on the user selections. In some embodiments, the base data set may refer to training data and the training data may include labels and rules that specify certain outputs occur when certain inputs are received.” Furthermore, as identified above, the disclosure is silent regarding “dialog option”. Thus, “wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting of the scenario based on the dialog options” is new matter. Dependent claims 2-12 and 14-19 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.

Regarding claims 2 and 14, the disclosure fails to provide sufficient written description for “determining the response to the verbal input based on at least one of the following: an attitude of the user, conversational choices of the user, the behavioral characteristic of the AI virtual companion, and background information of the scenario” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP 2161.01(I). The disclosure merely recites that this limitation is performed in results-based language without providing the steps, calculations, or algorithms necessary to perform the claimed limitation so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See, for example, at least para. 32, 35, 51, 66, 67, 79, 84, 94, and 106.

Regarding claims 10-12, the disclosure fails to provide sufficient written description for “based on the characteristic, modifying the visual content associated with the scenario being rendered on the display of the computing device” in claim 10, “wherein the sensor is a wearable device, a camera, a device located proximate the user, a device included in the computing device, or some combination thereof” in claim 11, and “wherein the characteristic comprises a heartrate, a blood pressure, a temperature, a perspiration rate, or some combination thereof” in claim 12 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP 2161.01(I). The disclosure merely recites that this limitation is performed in results-based language without providing the steps, calculations, or algorithms necessary to perform the claimed limitation so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. For instance, the most disclosure is found in para. 65, which gives examples that a wearable device may be a watch, a necklace, an anklet, a ring, a belt, etc. However, none of these “wearable devices” inherently include a sensing device for any measurement pertaining to a user. Similarly, the disclosure is silent regarding what “a device located proximate the user” or “a device included in the computing device” entails as an element of a sensor. Dependent claims 11 and 12 inherit the deficiencies of their respective parent claims, and thus are rejected under the same rationale.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code 101 not included in this action can be found in a prior Office action. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself.

Step 1

The instant claims are directed to products and a method, which fall under the four statutory categories (STEP 1: YES).
Step 2A, Prong 1

Independent claim 1 recites: A method for using dialogue simulations for training, the method comprising: providing a user interface to display on a display of a computing device of a user, the user interface presenting a plurality of scenarios, each scenario of the plurality of scenarios comprises an interaction associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic; receiving a selection of a scenario of the plurality of scenarios from the computing device; receiving a verbal input associated with a dialog option in the scenario spoken by the user from the computing device; converting the verbal input to a textual representation; performing natural language processing on the textual representation to generate a natural language understanding result; determining a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configuring customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; controlling, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response; receiving, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; training one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options; and based on the characteristic of the user, modifying, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device.

Independent claim 13 recites: A tangible, non-transitory computer-readable medium storing instructions that, when executed, cause a processing device to: provide a user interface to display on a display of a computing device of a user, the user interface presenting a plurality of scenarios, each scenario of the plurality of scenarios comprises an interaction associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic; receive a selection of a scenario of the plurality of scenarios from the computing device; receive a verbal input associated with a dialog option in the scenario spoken by the user from the computing device; convert the verbal input to a textual representation; perform natural language processing on the textual representation to generate a natural language understanding result; determine a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction between the AI virtual companion and the user occurred in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configure customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; control, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response; receive, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; train one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options; and based on the characteristic of the user, modify, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device.
Independent claim 20 recites: A system comprising: a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: provide a user interface to display on a display of a computing device of a user, the user interface presenting a plurality of scenarios, each scenario of the plurality of scenarios associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic; receive a selection of a scenario of the plurality of scenarios from the computing device; receive a verbal input associated with a dialog option in the scenario spoken by the user from the computing device; convert the verbal input to a textual representation; perform natural language processing on the textual representation to generate a natural language understanding result; determine a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction between the AI virtual companion and the user occurred in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configure customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; control, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response; receive, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; train one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting of the scenario based on the dialog options; and based on the characteristic of the user, modify, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device.

All of the foregoing underlined elements identified above amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or interactions between people (including social activities, teaching, and following rules or instructions). Additionally, the receiving, determining, and enacting steps identified above are interpreted as a series of steps that could reasonably be performed by mental processes with the aid of pen and paper because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process.
Lastly, the training one or more machine learning models step amounts to the abstract idea grouping of mathematical concepts because it recites mathematical calculations as defined in MPEP 2106.04(a)(2)(I), which recites that a “claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping” because a “mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word ‘calculating’ in order to be considered a mathematical calculation. For example, a step of ‘determining’ a variable or number using mathematical methods or ‘performing’ a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” The dependent claims, except claim 11 which is addressed under Step 2A, Prong 2 and Step 2B, amount to merely further defining the judicial exception. Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES).

Step 2A, Prong 2

This judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements.
The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: provide a user interface to display on a display of a computing device (claims 1, 13, and 20), an artificially intelligent (AI) virtual companion (claims 1, 13, and 20), performing natural language processing on the textual representation to generate a natural language understanding result (claims 1, 13, and 20), a virtual reality environment (claims 1, 13, and 20), an AI engine (claims 1, 13, and 20), a sensor (claims 1, 13, and 20), one or more machine learning models of the AI engine (claims 1, 13, and 20), identifying the sensor as a wearable device, a camera, a device located proximate the user, a device included in the computing device, or some combination thereof (claim 11), a tangible, non-transitory computer-readable medium storing instructions, that, when executed, cause a processing device to perform the method (claim 13), and a system comprising a memory device storing instructions and a processing device communicatively coupled to the memory device (claim 20). This is evidenced by the manner in which these elements are disclosed in the drawings and the instant specification. For example, Fig. 1 and 14 merely illustrate systems as collections of non-descript black boxes and stock icons, while at least para. 5-7, 27, 29, 30, 35-47, 59, 60, 64-73, 87-92, and 113 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement. Furthermore, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. For example, para.
35 recites that “technical problems may include providing virtual simulations of scenarios based on user input (e.g., speech, gesture, vital signs, etc.), and real-time control of the AI virtual companion in response to the user input” and that the “technical solution may include receiving the user input via one or more input peripherals (e.g., microphone, vibration sensor, pressure sensor, camera, etc.) and use speech-to-text conversion and natural language processing techniques to transform the speech to text and to use one or more machine learning models trained to input the text and output a meaning of the text.” However, the disclosure only recites the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language evidencing that the judicial exception is not implemented with a particular machine or manufacture, and thus is not providing any technical solution to a technical problem, particularly when the mere use of speech-to-text, natural language processing, an AI engine, and one or more machine learning models are common features for computerizing simulations. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Again, this is evidenced by the disclosure of the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language. None of the hardware offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the drawings and specification as identified above.
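For orientation, the pipeline described in the quoted para. 35 — input peripherals, speech-to-text, natural language processing, and a model that maps text to a meaning used to determine the companion's response — can be sketched at the same level of generality. This is an illustrative reconstruction only; every name, string, and rule below is hypothetical and is not drawn from the application's actual disclosure or any cited reference.

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed processing loop. All names and
# behaviors are invented for illustration; none come from the application.

@dataclass
class Response:
    dialogistic_component: str      # what the AI virtual companion says
    behavioral_characteristic: str  # how it behaves (attitude/mannerism)

def speech_to_text(verbal_input: bytes) -> str:
    """Stand-in for the speech-to-text conversion step."""
    return verbal_input.decode("utf-8")

def natural_language_understanding(text: str) -> dict:
    """Stand-in NLU: a crude substring check produces the 'understanding result'."""
    hostile = any(w in text.lower() for w in ("no", "never", "wrong"))
    return {"intent": "disagree" if hostile else "engage", "text": text}

def determine_response(nlu: dict, location: str, context: str, attitude: str) -> Response:
    """Response depends on the NLU result plus location, context, and user attitude."""
    if nlu["intent"] == "disagree" and attitude == "agitated":
        return Response("Let's slow down and revisit that.", "calm")
    return Response(f"Tell me more about {context} here in the {location}.", "attentive")

def render(response: Response) -> str:
    """Stand-in for rendering the companion enacting the response."""
    return f"[{response.behavioral_characteristic}] {response.dialogistic_component}"

frame = render(determine_response(
    natural_language_understanding(speech_to_text(b"No, that is wrong")),
    location="break room", context="the schedule dispute", attitude="agitated"))
```

The point of the sketch is that each step can be stated, as the examiner observes of the specification, purely in results-based terms: nothing above constrains how an attitude is measured or how the response is actually computed.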
Additionally, the mere use of a sensor for one or more measurements pertaining to the user is merely adding insignificant extra-solution data-gathering activity to the judicial exception (e.g., mere data gathering in conjunction with a law of nature or abstract idea). Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the additional elements does not affect this analysis. See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions including Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (STEP 2A, PRONG 2: NO).

Step 2B

The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed system and the process it performs do not require the use of a particular machine, nor do they result in the transformation of an article. Although the claims recite computer components, identified above, for performing at least some of the recited functions, these elements are recited at a high level of generality in a conventional arrangement for performing their basic computer functions (i.e., receiving, processing, transmitting, outputting data).
This is evidenced by the manner in which these elements are disclosed in the instant specification. For example, Fig. 1 and 14 merely illustrate systems as collections of non-descript black boxes and stock icons, while at least para. 5-7, 27, 29, 30, 35-47, 59, 60, 64-73, 87-92, and 113 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement. Furthermore, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. For example, para. 35 recites that “technical problems may include providing virtual simulations of scenarios based on user input (e.g., speech, gesture, vital signs, etc.), and real-time control of the AI virtual companion in response to the user input” and that the “technical solution may include receiving the user input via one or more input peripherals (e.g., microphone, vibration sensor, pressure sensor, camera, etc.) and use speech-to-text conversion and natural language processing techniques to transform the speech to text and to use one or more machine learning models trained to input the text and output a meaning of the text.” However, the disclosure only recites the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language evidencing that the judicial exception is not implemented with a particular machine or manufacture, and thus is not providing any technical solution to a technical problem particularly when the elements of the asserted technical problem, both individually and as a whole, and the mere use of speech-to-text, natural language processing, an AI engine, and one or more machine learning models are all common features for computerizing roleplay/scenario-based training, especially with the use of a chatbot. 
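As a concrete illustration of why such components are characterized as common features, a “model trained to input the text and output a meaning of the text,” read at its broadest, can be as simple as a bag-of-words scorer. The sketch below is hypothetical — the vocabularies and intent labels are invented and do not come from the application or any cited reference.

```python
from collections import Counter

# Hypothetical bag-of-words intent scorer: about the crudest thing that
# still "inputs text and outputs a meaning". Labels and vocabularies are
# invented for illustration only.
INTENT_VOCAB = {
    "complaint": Counter("this is unfair and wrong".split()),
    "question":  Counter("what why how when could".split()),
}

def classify_intent(text: str) -> str:
    # Counter intersection (&) keeps the minimum count of shared words,
    # so the score for each intent is its word overlap with the input.
    words = Counter(text.lower().rstrip("?!.").split())
    scores = {intent: sum((words & vocab).values())
              for intent, vocab in INTENT_VOCAB.items()}
    return max(scores, key=scores.get)

meaning = classify_intent("Why is this schedule so unfair?")
```

That something this generic satisfies the quoted language is the thrust of the examiner's "results-based" criticism: the disclosure names the component category without any implementing particulars.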
Additionally, the mere use of a sensor for one or more measurements pertaining to the user is merely adding insignificant extra-solution data-gathering activity to the judicial exception. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Again, this is evidenced by the disclosure of the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language. None of the hardware offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the instant specification as identified above. Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (STEP 2B: NO). Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code 103 not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-10 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen (US 2008/0254425) in view of Bosman et al. (hereinafter referred to as Bosman) and Beaumont et al.
(US 2021/0174933, hereinafter referred to as Beaumont).

Regarding claims 1, 13, and 20, Cohen teaches a method for using dialogue simulations for training (claim 1), a tangible, non-transitory computer-readable medium storing instructions (claim 13), and a system (claim 20), the method comprising: providing a user interface to display on a display of a computing device of a user, the user interface presenting a plurality of scenarios comprises an interaction (Cohen, para. 103, “Certain example embodiments provide users with a level of control with respect to what training materials the user will use in order to obtain a desired set/type of knowledge (e.g., in the cognitive domain). For example, the user optionally can select a training module and/or a portion or chunk of a training module for a training session.”), each scenario of the plurality of scenarios associated with dialogue with an artificially intelligent (AI) virtual companion pertaining to a training topic (Cohen, para. 14, “A scenario including real or animated actors is optionally presented, simulating an interaction. The training system presents related queries for the trainee who responds (e.g., audibly responds using role playing language or otherwise).” Para. 29, “Certain embodiments optionally utilize ‘purpose built’ modules which focus on the most or relatively more important and prioritized relevant concepts to organizations, groups, teams and/or individual users. This enhances focused attention through the use of relevant, to-the-point scenarios.” Para. 208, “an example embodiment includes ‘artificial intelligence’ with respect to the question and answer flows.”); receiving a selection of a scenario of the plurality of scenarios from the computing device (Cohen, para.
103, “the user optionally can select a training module and/or a portion or chunk of a training module for a training session.”); receiving a verbal input associated with a dialog option in the scenario spoken by the user from the computing device (Cohen, para. 16, “certain example embodiments optionally utilize ‘unprompted’ real world verbal answering by a trainee”); converting the verbal input to a textual representation (Cohen, para. 121, “The user is queried regarding the material to be tested (e.g., via verbal questions provided by a trainer, an avatar presented by the training system, a recording of a trainer, a speech to text system, and/or text) and is asked to verbally respond, (e.g., without having a selection of answers presented from which the user is to choose).”); controlling, using an AI engine, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response (Cohen, para. 113, “the scenario includes a single avatar speaking directly to the trainee as if having a conversation with the trainee.”).
Cohen does not explicitly teach performing natural language processing on the textual representation to generate a natural language understanding result; determining, based on the natural language understanding result, a location of the interaction between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, a response to the verbal input, the response including a dialogistic component and a behavioral characteristic of the AI virtual companion; configuring customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism; receiving, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user; and based on the characteristic, modifying, using the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device. However, in an analogous art, Bosman teaches performing natural language processing on the textual representation to generate a natural language understanding result (Bosman, Fig. 1, Speech Recognition followed by Natural Language Understanding); determining a response to the verbal input, wherein the response is determined based on the natural language understanding result, a physical location where the interaction occurred between the AI virtual companion and the user in a virtual reality environment representing the scenario, a context associated with the interaction, and an attitude of the user, and wherein the response includes a dialogistic component and a behavioral characteristic of the AI virtual companion (Bosman, Fig.
1 illustrates this.); configuring customizable behavioral characteristics of the AI virtual companion, wherein the customizable behavioral characteristics comprise at least an attitude and a mannerism (Bosman, Fig. 1, Renderer; pg. 77 in reference to Fig. 1, “the four rectangles on the right hand side of the figure are about generating the agent’s output. Here, the two modules on the right deal with non-verbal information (e.g., displaying facial expressions on the agent’s face) and the modules on the left with verbal information (i.e., determining what the agent says).”); controlling, using an AI engine and based on the customizable behavioral characteristics of the AI virtual companion, visual content associated with the scenario being rendered on the display of the computing device by rendering a representation of the AI virtual companion enacting the response (Bosman, Fig. 1, Renderer; pg. 77 in reference to Fig. 1, “the four rectangles on the right hand side of the figure are about generating the agent’s output. Here, the two modules on the right deal with non-verbal information (e.g., displaying facial expressions on the agent’s face) and the modules on the left with verbal information (i.e., determining what the agent says).”); receiving, from a sensor, one or more measurements pertaining to the user, wherein the one or more measurements are received during the scenario, and the one or more measurements indicate a characteristic of the user (Bosman, Fig. 1, Audio-Visual Sensing followed by Nonverbal Behavior Understanding, Speech Recognition followed by Natural Language Understanding; pg. 77 in reference to Fig. 1); based on the characteristic of the user, modifying, using the one or more machine learning models of the AI engine, a subsequent dialogistic component enacted by the AI virtual companion being rendered on the display of the computing device (Bosman, Fig.
1, Natural Language Generation followed by Speech Generation, Nonverbal Behavior Generation followed by Behavior Realization, Renderer; pg. 77 in reference to Fig. 1, “the four rectangles on the right hand side of the figure are about generating the agent’s output. Here, the two modules on the right deal with non-verbal information (e.g., displaying facial expressions on the agent’s face) and the modules on the left with verbal information (i.e., determining what the agent says).”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the verbal and nonverbal human-virtual agent interaction steps identified in Bosman in the human-virtual agent interaction processes of Cohen because Fig. 1 in Bosman is a summary of the general steps in the state of the art regarding the human-virtual agent interaction process, and in particular the state of the art for human-virtual agent interactions with respect to training interpersonal communication skills. In other words, Bosman is identifying in Fig. 1 that the claimed steps are ubiquitous to the state of the art. Cohen and Bosman do not explicitly teach training one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options. However, in a related art, Beaumont teaches training one or more machine learning models of the AI engine, wherein the training uses training data comprising a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options (Beaumont, Fig. 1B, Select reference metric(s) and model 170, Train model 180; para. 34, “training the at least one computation model using the one or more reference subject attributes”; para.
115, “At step 170 a combination of the reference metrics and a generic computational model are selected, with the reference metrics and identified social-emotional skill state for a plurality of reference subjects being used to train the model at step 180. The nature of the model and the training performed can be of any appropriate form and could include any one or more of decision tree learning, random forest, logistic regression, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, genetic algorithms, rule-based machine learning, learning classifier systems, K-means clustering, Naive Bayes Classifier Algorithms, Nearest Neighbor learning, or the like. As such schemes are known, these will not be described in any further detail.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention for Cohen and Bosman to include machine learning model training as taught by Beaumont because model training is the essential step in machine learning that determines how well the model(s) will work when used. This is directly influenced by the algorithm used (model selection in step 170 of Beaumont) and the quality and type of training data for the end-use case (reference metric(s) selection in step 170 of Beaumont).
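For what such training minimally entails, the claimed data shape — a base data set of dialog options and scenario states, with outputs being the resulting scenario states — can be sketched as a majority-vote lookup learned from examples. This is a deliberately minimal, hypothetical stand-in for the model selection and training of Beaumont's steps 170/180; the dialog options, states, and outcomes below are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical training data in the claimed shape:
# (dialog option chosen, current scenario state) -> resulting scenario state.
base_data = [
    (("apologize", "tense"), "de-escalated"),
    (("apologize", "tense"), "de-escalated"),
    (("interrupt", "tense"), "escalated"),
    (("apologize", "calm"),  "calm"),
]

def train(examples):
    """Train a majority-vote table: for each (option, state) pair, record
    the most frequently observed resulting state."""
    tallies = defaultdict(Counter)
    for features, outcome in examples:
        tallies[features][outcome] += 1
    return {features: counts.most_common(1)[0][0]
            for features, counts in tallies.items()}

model = train(base_data)
predicted = model[("apologize", "tense")]
```

Any of the schemes Beaumont lists (decision trees, nearest neighbor, and so on) would replace the majority-vote table here; the training-data shape, which is what the claim recites, stays the same.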
One of ordinary skill in the art recognizes that each model is trained using a training data set applicable to the use case; thus it is at least obvious, if not inherent, that the claimed training data would comprise a base data set of dialog options and scenario states and outputs pertaining to resulting states of the scenario based on the dialog options because it has been held to be within the general skill of a worker in the art to select a known material on the basis of its suitability for the intended use as a matter of obvious design choice.

Regarding claims 2 and 14, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, further comprising determining the response to the verbal input based on at least one of the following: an attitude of the user, conversational choices of the user (Cohen, para. 47, “The data can be data present via data screens (e.g., using data obtained for a customer account database/CRM system), information gathered (e.g., gathered via a verbal communication with a customer or prospect, a family member, a student, an employee,) during needs/opportunity analysis and/or other conversational engagements with customers, prospects, and/or internal personnel, as well as in educational, consumer and/or healthcare settings, among many others.”), the behavioral characteristic of the AI virtual companion (Cohen, para. 40, “A ‘bank’ of avatars is optionally provided for users (e.g., training implementers) to select from and insert into modules on a module-by-module basis. Thus, issues of multi-culturalism, diversity and global/regional uniqueness are resolved.”), and background information of the scenario (Cohen, para. 451, “the case history can present the context or background regarding the scenario about to be presented and/or information regarding one or more scenario participants.”).
Regarding claims 3 and 15, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, wherein the training topic is related to one of the following topics: diversity and inclusion, and leadership (It is noted that identifying a training topic is nonfunctional descriptive material that does not distinguish the claims from the prior art. Regardless, Cohen teaches this at least at para. 15, “Certain embodiments and teachings disclosed herein can be utilized with respect to various fields where interpersonal/knowledge proficiency applies. These fields include, but are not limited to, some or all of the following: business situations and employment ‘families’ (e.g., interactions between employees and customers/prospects, internal clients, managers and employees, and managers with other managers regarding sales, service, leadership/management/coaching, administration, etc.), educational situations, such as those where fluency of knowledge is important, consumer situations (e.g., counseling relationships, marketing, sales, etc.), family relationships, healthcare situations, etc.”).

Regarding claims 4 and 16, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, further comprising providing background information on the training topic (Cohen, para. 451, “the case history can present the context or background regarding the scenario about to be presented and/or information regarding one or more scenario participants.”).

Regarding claims 5 and 17, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, further comprising providing a user interface configured to allow adjustment of the behavioral characteristic of the AI virtual companion (Cohen, para. 36, “training modules can be rapidly created, and can be adapted and modified, allowing for continuous module development and continuous improvement of the content.
This enables lessons from real world deployment to be used to appropriately modify existing modules. This also allows for a constantly fresh and new experience as part of a continuous improvement program.”).

Regarding claims 6 and 18, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, further comprising providing a user interface configured to allow adjustment of the dialogistic component of the AI virtual companion (Cohen, para. 36, “training modules can be rapidly created, and can be adapted and modified, allowing for continuous module development and continuous improvement of the content. This enables lessons from real world deployment to be used to appropriately modify existing modules. This also allows for a constantly fresh and new experience as part of a continuous improvement program.”).

Regarding claims 7 and 19, Cohen in view of Bosman teaches the method of claim 1 and the computer-readable medium of claim 13, further comprising providing a user interface configured to allow the user to playback the scenario and review one or more selections made by the user during the scenario (Cohen, para. 209, “In order to objectively demonstrate to the trainee that the trainee failed to identify an opportunity, a missed opportunity, an error, or a correct action, optionally objective evidence is provided to the trainee by replaying the corresponding segment to prove what occurred in the segment (e.g., in response to a manual instruction by the trainee or trainer, such as by optionally activating a scenario replay button).”).
Regarding claim 8, Cohen in view of Bosman teaches the method of claim 1, further comprising providing a user interface configured to allow the user to review one or more selections of other users for the scenario (Cohen, para. 32, “certain embodiments can be used individually and/or in combination with others, creating a solo mode, a multi-person mode, or a mode that combines solo and two or more person. Thus, certain embodiments can be used ‘solo’ for rehearsal and self-drilling, and with others for interactive drilling and testing.” Para. 123, “the user can be provided with the scores of others for the segment and/or the user ranking relative to other users.” Para. 262, “The facilitator can compare the learner's response with pre-programmed answers stored on the system and displayed to the facilitator.” The facilitator is a user reviewing one or more selections of another user(s) (i.e., the learner(s)) for the scenario.).

Regarding claim 9, Cohen in view of Bosman teaches the method of claim 1, further comprising providing a user interface configured to allow the user to create dialogue and one or more outcomes for a new scenario (Cohen, para. 36, “training modules can be rapidly created, and can be adapted and modified, allowing for continuous module development and continuous improvement of the content. This enables lessons from real world deployment to be used to appropriately modify existing modules. This also allows for a constantly fresh and new experience as part of a continuous improvement program.”).

Regarding claim 10, Cohen in view of Bosman teaches the method of claim 1. Cohen does not explicitly teach based on the characteristic, modifying the visual content associated with the scenario being rendered on the display of the computing device. However, Bosman teaches based on the characteristic, modifying the visual content associated with the scenario being rendered on the display of the computing device (Bosman, Fig.
1, Natural Language Generation followed by Speech Generation, Nonverbal Behavior Generation followed by Behavior Realization, Renderer; pg. 77 in reference to Fig. 1, “the four rectangles on the right hand side of the figure are about generating the agent’s output. Here, the two modules on the right deal with non-verbal information (e.g., displaying facial expressions on the agent’s face) and the modules on the left with verbal information (i.e., determining what the agent says).” Non-verbal output is visual content associated with the scenario.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the verbal and nonverbal human-virtual agent interaction steps identified in Bosman in the human-virtual agent interaction processes of Cohen because Fig. 1 in Bosman is a summary of the general steps in the state of the art regarding the human-virtual agent interaction process, and in particular the state of the art for human-virtual agent interactions with respect to training interpersonal communication skills. In other words, Bosman is identifying in Fig. 1 that the claimed steps are ubiquitous to the state of the art.

Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen in view of Bosman as applied to claim 1 above, and further in view of Horseman et al. (US 2017/0162072, hereinafter referred to as Horseman).

Regarding claims 11 and 12, Cohen in view of Bosman teaches the method of claim 10. Cohen does not explicitly teach wherein the sensor is a wearable device, a camera, a device located proximate the user, a device included in the computing device, or some combination thereof in claim 11 or wherein the characteristic comprises a heartrate, a blood pressure, a temperature, a perspiration rate, or some combination thereof in claim 12.
However, in an analogous art, Horseman teaches wherein the sensor is a wearable device, a camera, a device located proximate the user, a device included in the computing device, or some combination thereof in claim 11 (Horseman, Fig. 2, GSR Sensor 202, Facial Recognition Sensor 208, Blood Glucose Sensor 204, Blood Pressure Sensor 206, Respiration Sensor 210, Neural Sensor 212, Heart Rate Sensor 214; Fig. 6 illustrates a user wearing various sensors of the training station of Fig. 2 as well as camera 605 which is in a computing device 122 located proximate the user) or wherein the characteristic comprises a heartrate, a blood pressure, a temperature, a perspiration rate, or some combination thereof in claim 12 (Horseman, Fig. 13, Heart Rate - Respiratory Rate - Brain Signal - Skin Conductance - Facial Data - Blood Pressure - Blood Glucose 1330, Engagement - Alertness - Excitement - Stress - Emotion - Interest 1340.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the biofeedback of Horseman into the interactive training of Cohen because “[b]y providing feedback through an avatar, the skills and competencies that are being trained through the system 100 are better internalized by the user 126 such that training is more efficient and effective” and “[b]y improving user alertness/engagement, for example, the training provided by the training system may be more effective at causing skills and lessons to be internalized by users.” See Horseman at para. 130 and 155, respectively.

Response to Arguments

Applicant's arguments against the claim objections have been fully considered but they are not persuasive. In pg. 8, Applicant asserts the claims have not been amended to add a single character. Examiner is not persuaded.
Applicant is directed to at least the amendments of claim 13 which include placing in bold and possibly underlining (it is unclear) a comma following the term “modify” in the last limitation of the claim.

Applicant's arguments against the rejections of the claims under 35 USC 112(b) have been fully considered. The amendments obviate the associated rejections. Thus, these rejections are withdrawn.

Applicant's arguments against the rejections of the claims under 35 USC 112(a) have been fully considered but they are not persuasive. In pg. 19-26, Applicant asserts that (1) para. 35 and 69-71 support the converting and performing natural language processing steps, (2) para. 32, 39, 41-45, 66, and 82 support the amended determining step, and (3) para. 35, 65, and 72 support the amended controlling, receiving and modifying steps in claim 1. Examiner is not persuaded. These paragraphs are explicitly identified in the rejection as insufficient. It is particularly noted that Applicant’s emphasized part of a sentence in para. 32 of the specification merely recites that the AI virtual companion may respond based on the user’s attitude, provides an unbounded list of examples of an attitude, but is silent regarding how an attitude is determined and, more importantly, is silent regarding how an AI virtual companion is configured to respond based on the user’s attitude. In other words, it merely recites that this is performed in results-based language, but is silent regarding the steps, calculations, and algorithms necessary to perform the claimed functionality. The same can be said for the emphasized part of para. 82 of the specification.

Applicant then asserts that para. 59, 73, and 89 provide support that the specification fully discloses information for implementing and performing the recitations of the claims. Examiner is not persuaded. Para. 59 merely summarizes the insufficiently disclosed method illustrated in Fig.
7 and recites that it may be performed by processing logic that may include hardware, software, or a combination of both. Para. 73 merely summarizes the insufficiently disclosed method illustrated in Fig. 9 and recites that it may be performed by processing logic that may include hardware, software, or a combination of both. Para. 89 merely recites that processing device 1402 can be any one or more processing devices. In short, none of these paragraphs provide any meaningful disclosure for implementing and performing the recitations of the claims.

Regarding claims 2 and 14, Applicant recites para. 66 and 67 of the specification and asserts that the written description is satisfied because the specification is written in a manner that permits one skilled in the art to reasonably conclude that the inventor possessed the claimed invention. Examiner is not persuaded. Para. 66 and 67 are explicitly identified as insufficient in the rejection.

Regarding claims 10-12, Applicant recites para. 65 of the specification and asserts that the written description is satisfied because the specification is written in a manner that permits one skilled in the art to reasonably conclude that the inventor possessed the claimed invention. Examiner is not persuaded. Para. 65 is explicitly identified as insufficient in the rejection.

Applicant's arguments against the rejections of the claims under 35 USC 101 have been fully considered but they are not persuasive. In pg. 14, Applicant asserts that the amended independent claims are similar to the claims in DesJardins because the current recitations relate to improving machine learning models by training the machine learning models. Examiner is not persuaded. The pending claims are unrelated to those found patent-eligible in DesJardins. In particular, claim 1 in DesJardins specifically identifies that the claims are directed to a method of training a machine learning model.
In contrast, the instant amended independent claims include a new limitation that recites, in results-based language, that one or more machine learning models are trained and broadly recites the contents of a training data set, while the disclosure only provides generic descriptions of the training itself. Thus, this does not integrate the judicial exception into a practical application but rather is wholly directed to at least the abstract idea grouping of mathematical concepts. See at least Example 47 of the Office’s July 2024 Subject Matter Eligibility Update for further guidance. In Example 47, claim 2 is found patent ineligible and claim 3 is found patent eligible, wherein both claims 2 and 3 include the same single limitation reciting training an artificial neural network, and the example identifies this limitation as directed to the abstract idea grouping of mathematical concepts. Example claim 3 integrates the judicial exception into a practical application by improving network security: the information from the detection is used to take proactive measures that remediate the danger by detecting the source address associated with the potentially malicious packets. This amounts to an improvement in the technical field of network intrusion detection. In contrast, the mere use of computer technology is merely an attempt to link the judicial exception to a particular technological environment, but does not result in an improvement to technology or the computer functions employed.

In pg. 14-15, Applicant asserts that techniques recited in para. 35 of the specification cannot feasibly be performed in the human mind nor are they methods of organizing human activity. Examiner is not persuaded. As identified in the rejection, controlling visual content associated with the scenario being rendered on the display by rendering the representation of the companion enacting the response is encompassed by the judicial exception.
Merely providing that the display is of the computing device and that the companion is an AI virtual companion are additional elements that are identified in the rejection as neither integrating the judicial exception into a practical application nor adding significantly more. Furthermore, para. 35 of the specification is explicitly addressed in the rejection: the disclosure only recites the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language, evidencing that the judicial exception is not implemented with a particular machine or manufacture, and thus is not providing any technical solution to a technical problem, particularly when speech-to-text, natural language processing, an AI engine, and one or more machine learning models are common features for computerizing simulations, at least as evidenced by Bosman. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Again, this is evidenced by the disclosure of the mere use of speech-to-text, natural language processing, a non-descript AI engine, and one or more machine learning models in results-based language.

In pg. 15-17, Applicant asserts that controlling visual content by rendering a representation of an AI virtual companion enacting a response is rooted in computer technology and cannot be performed by the human mind, nor is it a method of organizing human activity. Examiner is not persuaded. As identified in the rejection, controlling visual content associated with the scenario being rendered on the display by rendering the representation of the companion enacting the response is encompassed by the judicial exception.
Merely providing that the display is of the computing device and that the companion is an AI virtual companion are additional elements that are identified in the rejection as neither integrating the judicial exception into a practical application nor adding significantly more. It is further noted that the newly claimed training of one or more machine learning models is directed at least to the abstract idea grouping of mathematical concepts, as identified above.

In pg. 17-18, Applicant points to para. 66 of the specification and asserts that the aspects of the claimed technology provide improvements and enhancements over conventional systems. Examiner is not persuaded. The elements excerpted from para. 66 merely recite basic elements for providing a computer-based interactive simulation and thus act to link the judicial exception to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. Simulating and animating the virtual companion appropriately to a scenario and interactions between the user and the AI virtual companion are claimed and disclosed to be nothing more than conventional, at least as evidenced by the cited prior art.

Applicant's arguments against the rejections of the claims under 35 USC 103 have been fully considered but they are not persuasive. In pg. 9-13, Applicant asserts that the cited prior art does not teach the newly added limitations to claim 1. Examiner is not persuaded. Applicant is directed to the rejections above, which have been updated to address the amendments to the claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LANE whose telephone number is (303)297-4311. The examiner can normally be reached Monday - Friday 8:00 - 4:30 MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL LANE/
Examiner, Art Unit 3715

1 Bosman, K., Bosse, T., & Formolo, D. (2019). Virtual agents for professional social skills training: An overview of the state-of-the-art. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 75–84. https://doi.org/10.1007/978-3-030-16447-8_8
2 Bosman, K., Bosse, T., & Formolo, D. (2019). Virtual agents for professional social skills training: An overview of the state-of-the-art. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 75–84. https://doi.org/10.1007/978-3-030-16447-8_8

Prosecution Timeline

Oct 27, 2022
Application Filed
Sep 26, 2023
Non-Final Rejection — §101, §103, §112
Apr 05, 2024
Response Filed
Jul 16, 2024
Final Rejection — §101, §103, §112
Jan 27, 2025
Request for Continued Examination
Jan 28, 2025
Response after Non-Final Action
Feb 05, 2025
Non-Final Rejection — §101, §103, §112
Jul 14, 2025
Response Filed
Aug 30, 2025
Final Rejection — §101, §103, §112
Mar 04, 2026
Response after Non-Final Action
Mar 04, 2026
Request for Continued Examination
Mar 19, 2026
Non-Final Rejection — §101, §103, §112
Mar 23, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 11810474
SYSTEMS AND METHODS FOR NEURAL PATHWAYS CREATION/REINFORCEMENT BY NEURAL DETECTION WITH VIRTUAL FEEDBACK
2y 5m to grant • Granted Nov 07, 2023
Patent 11398160
SYSTEM, APPARATUS, AND METHOD FOR EDUCATING AND REDUCING STRESS FOR PATIENTS WITH ILLNESS OR TRAUMA USING AN INTERACTIVE LOCATION-AWARE TOY AND A DISTRIBUTED SENSOR NETWORK
2y 5m to grant • Granted Jul 26, 2022
Patent 11250723
VISUOSPATIAL DISORDERS DETECTION IN DEMENTIA USING A COMPUTER-GENERATED ENVIRONMENT BASED ON VOTING APPROACH OF MACHINE LEARNING ALGORITHMS
2y 5m to grant • Granted Feb 15, 2022
Patent 11210961
SYSTEMS AND METHODS FOR NEURAL PATHWAYS CREATION/REINFORCEMENT BY NEURAL DETECTION WITH VIRTUAL FEEDBACK
2y 5m to grant • Granted Dec 28, 2021
Patent 11004551
SLEEP IMPROVEMENT SYSTEM, AND SLEEP IMPROVEMENT METHOD USING SAID SYSTEM
2y 5m to grant • Granted May 11, 2021
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
4%
Grant Probability
13%
With Interview (+8.7%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
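The note above says the grant probability is derived from the examiner's career allow rate (12 grants out of 290 resolved cases) plus the reported 8.7-point interview lift. A minimal sketch of that arithmetic, assuming the dashboard simply rounds to whole percentages:

```python
# Figures taken from the examiner statistics shown above.
granted = 12          # career grants
resolved = 290        # career resolved cases
interview_lift = 8.7  # percentage-point lift for cases with an interview

allow_rate = 100 * granted / resolved       # career allow rate, in percent
with_interview = allow_rate + interview_lift

print(f"Grant probability: {allow_rate:.0f}%")      # 4%
print(f"With interview:    {with_interview:.0f}%")  # 13%
```

This reproduces the 4% and 13% headline figures; the actual model behind the projection is not disclosed on the page, so treat this only as a consistency check on the displayed numbers.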
