Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Does the claimed invention fall inside one of the four statutory categories (process, machine, manufacture, or composition of matter)? Yes for claims 1-24. Claims 1-12 are drawn to a method of bidirectional communication in a virtual reality environment (i.e., a process). Claims 13-24 are drawn to a system for bidirectional communication in a virtual reality environment (i.e., a machine).
Step 2A - Prong One: Do the claims recite a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon)? Yes, for claims 1-24.
Claim 1 recites:
A method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations;
receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations;
in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station;
receiving, by the electronic processor via the network, a communication from the first user in a target spoken language;
providing, by the electronic processor, the communication to a first artificial intelligence model;
receiving, from the first artificial intelligence model, a response to the communication in the target spoken language;
outputting, via the first avatar, the response in the target spoken language;
and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user.
These steps amount to a mental process and a method of organizing human activity (i.e., an abstract idea) because a human can guide students in a language learning station by acting as a facilitator who checks in with individuals or small groups to measure progress, provide tailored support, and build rapport. Human guides (teachers or paraprofessionals) can scaffold vocabulary, model strategies, and foster interaction. The applicant's specification discloses: “Speaking practice in various real-life situations and access to a personal tutor have been among the least addressed needs of language learners.” [0017].
Independent claim 13 recites steps similar to those of claim 1 (and therefore recites limitations that fall within the same groupings of abstract ideas), and claim 13 is therefore determined to recite an abstract idea under the same analysis. Dependent claims 2-12 and 14-24 are directed towards mini-tasks (assigning a characteristic, providing conversation topics, translating sentences, etc.) for a method and a system for bidirectional communication in a virtual reality environment. Each claim amounts to a form of collecting, generating, and analyzing information, and therefore falls within the scope of a method of organizing human activity (i.e., an abstract idea). As such, the Examiner concludes that claims 2-12 and 14-24 recite an abstract idea.
Step 2A – Prong Two: Do the claims recite additional elements that integrate the exception into a practical application of the exception? No.
In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrate the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The requirement to execute the claimed steps/functions using an electronic processor, a network, and a memory (independent claims 1 and 13 and dependent claims 2-12 and 14-24) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
Similarly, the limitations of an electronic processor, a network, and a memory (independent claims 1 and 13 and dependent claims 2-12 and 14-24) are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(f)).
Use of a computer, processor, memory, or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit); Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). See MPEP 2106.05(f).
Further, the additional limitations beyond the abstract idea identified above serve merely to generally link the use of the judicial exception to a particular technological environment or field of use. Specifically, they limit the application of the abstract idea to a computerized environment (e.g., identifying and displaying, etc.) performed by a computing device, processor, and memory. This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 2015), where the court determined that "an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer." These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(h)).
Dependent claims 2-12 and 14-24 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for each respective independent claim (i.e., they are part of the abstract idea recited in each respective claim). The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? (i.e., are there any additional elements (features/limitations/steps) recited in the claim beyond the abstract idea?) No.
In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an “inventive concept.” An “inventive concept” is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966).
As discussed above in “Step 2A – Prong Two”, the identified additional elements in independent claims 1 and 13 and dependent claims 2-12 and 14-24 are equivalent to adding the words “apply it” on a generic computer, and/or generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claims as a whole do not amount to significantly more than the judicial exception itself.
Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately. They simply append to the abstract idea words equivalent to “apply it” on a generic computer, mere instructions to implement the abstract idea on a generic computer, insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering or post-solution activity), and/or well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality.
Dependent claims 2-12 and 14-24 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for each respective independent claim (i.e., they are part of the abstract idea recited in each respective claim). The Examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above. Therefore, claims 1-24 are not eligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-24 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180137778 A1 (“Ichihashi”) in view of US 20140302464 A1 (“Cai”) and US 20180268728 A1 (“Burdis”).
Regarding claim 1, Ichihashi discloses the following limitations, with the exception of the limitations addressed by Cai and Burdis below.
A method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations ([0048], “the processing means … classifies each learner into a plurality of learning levels …, sends an animation in order to provide an virtual reality environment for a plurality of learners”);
receiving, by the electronic processor ([0033], “The processing means … works by a central processing unit (CPU (a processor))”) via a network, a station selection user input to select a first language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station;
receiving, by the electronic processor via the network, a communication from the first user in a target spoken language ([0045], “the voice transmission … sends the model voice to the learner terminal …, the voice transmission … has a means that sends a new model voice spoken by the … speaker”);
providing, by the electronic processor, the communication to a first artificial intelligence model;
receiving, from the first artificial intelligence model, a response to the communication in the target spoken language;
outputting, via the first avatar, the response in the target spoken language;
and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in ([0048], “a learner interaction … sends an animation in order to provide an virtual reality environment for a plurality of learners”) a natural spoken language of the first user ([0045], “voice transmission means … sends a … voice spoken by the … speaker”).
Cai discloses
in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”);
outputting, via the first avatar, the response in the target spoken language ([0055], “the response message may be provided by a user of a … language learning client associated with the avatar”);
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations; receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user) to include, as disclosed by Cai, in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station, and outputting, via the first avatar, the response in the target spoken language, in order to provide avatar graphics and a response message for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding avatar graphics and a response message to improve the effectiveness of a method for learning foreign languages over a network.
Burdis discloses
providing, by the electronic processor, the communication to a first artificial intelligence model ([0067], “the analysis module … performs … artificial intelligence”);
receiving, from the first artificial intelligence model, a response to the communication in the target spoken language ([0067], “the analysis module … performs … artificial intelligence … to determine a level of similarity between the user's response and the predefined response”);
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations; receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user) to include, as disclosed by Burdis, providing, by the electronic processor, the communication to a first artificial intelligence model, and receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, in order to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 2, Ichihashi discloses the following limitation, with the exception of the limitations addressed by Cai and Burdis below.
further comprising: assigning, by the electronic processor ([0033], “The processing means … works by a central processing unit (CPU (a processor))”), a first characteristic to the first avatar using the first artificial intelligence model, the first characteristic being associated with the first language learning station, and wherein the response is generated from the first artificial intelligence model based on the first characteristic.
Cai discloses
a first characteristic to the first avatar using the first artificial intelligence model, the first characteristic being associated with the first language learning station, and wherein the response is generated from ([0052], “Flags … may … be presented below the avatars to convey the avatars nationality and therefore native language” Examiner notes that nationality can be a characteristic of an avatar.).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (as applied to claim 1, further comprising assigning, by the electronic processor) to include, as disclosed by Cai, a first characteristic assigned to the first avatar, the first characteristic being associated with the first language learning station, in order to provide flags and avatars for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding flags and avatars to improve the effectiveness of a method for learning foreign languages over a network.
Burdis discloses
the first artificial intelligence model based on the first characteristic ([0067], “the analysis module … performs … artificial intelligence”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (as applied to claim 1) to include, as disclosed by Burdis, generating the response from the first artificial intelligence model based on the first characteristic, in order to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 3, Ichihashi discloses the following limitations, with the exception of the limitations addressed by Cai below.
further comprising: assigning, by the electronic processor ([0033], “The processing means … works by a central processing unit (CPU (a processor))”), a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic;
providing, via the graphic in the virtual reality environment, the second avatar in the first language learning station ([0048], “the processing means … classifies each learner into a plurality of learning levels …, sends an animation in order to provide an virtual reality environment for a plurality of learners”);
and providing, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic.
Cai discloses
a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic ([0046], “The user name … of the avatar … is also presented” Examiner notes that a user name can be a characteristic of an avatar.);
and providing, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (as applied to claim 1, further comprising assigning, by the electronic processor, and providing, via the graphic in the virtual reality environment, the second avatar in the first language learning station) to include, as disclosed by Cai, a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic, and providing, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic, in order to provide an avatar user name, a mixed language user interface, and avatar graphics for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding an avatar user name, a mixed language user interface, and avatar graphics to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 4, Ichihashi discloses the following limitations, with the exception of the limitations addressed by Burdis and Cai below.
further comprising: assigning, by the electronic processor ([0033], “The processing means … works by a central processing unit (CPU (a processor))”), one or more conversation topics ([0048], “the processing means … sends an animation in order … to enable the plurality of learners have a conversation via voice”);
receiving, by the electronic processor via the network, a topic selection user input to select a first topic of the one or more conversation topics ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
providing the first topic to the first artificial intelligence model;
receiving, from the first artificial intelligence model, a question associated with the first topic;
and outputting, via the first avatar, the question.
Burdis discloses
providing the first topic to the first artificial intelligence model ([0067], “the analysis module … performs … artificial intelligence”);
receiving, from the first artificial intelligence model, a question associated with the first topic ([0063], “the prompt may include … a question that the user provides an answer to in the language”);
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (as applied to claim 1, further comprising providing, by the electronic processor, one or more conversation topics, and receiving, by the electronic processor via the network, a topic selection user input to select a first topic of the one or more conversation topics) to include, as disclosed by Burdis, providing the first topic to the first artificial intelligence model and receiving, from the first artificial intelligence model, a question associated with the first topic, in order to provide an analysis module and a prompt for dynamically adapting language learning to a user's language proficiency.
One skilled in the art would understand and recognize the value of adding an analysis module and a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Cai discloses
and outputting, via the first avatar, the question ([0055], “the response message may be provided by a user of a … language learning client associated with the avatar”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ichihashi (as applied to claim 1, further comprising providing, by the electronic processor, one or more conversation topics, and receiving, by the electronic processor via the network, a topic selection user input to select a first topic of the one or more conversation topics) to include, as disclosed by Cai, outputting, via the first avatar, the question, in order to provide a response message for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding a response message to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 5, Ichihashi discloses the following limitations with the exception of the underlined limitation.
wherein the question comprises a spoken sentence, wherein the method further comprises: providing, by the electronic processor, a written sentence corresponding to the spoken sentence to the first user ([0029], “The learning support server … has a voice storage means … which stores model voice of a … a sentence”);
and providing, by the electronic processor, one or more possible responses to the question to the first user.
Burdis discloses
and providing, by the electronic processor, one or more possible responses to the question to the first user ([0063], “the prompt may include … a question that the user provides an answer to in the language”).
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, further comprising: providing, by the electronic processor, one or more conversation topics; receiving, by the electronic processor via the network, a topic selection user input to select a first topic of the one or more conversation topics, wherein the question comprises a spoken sentence, wherein the method further comprises: providing, by the electronic processor, a written sentence corresponding to the spoken sentence to the first user, as disclosed by Ichihashi, providing, by the electronic processor, the communication to a first artificial intelligence model; receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, providing the first topic to the first artificial intelligence model; receiving, from the first artificial intelligence model, a question associated with the first topic, and providing, by the electronic processor, one or more possible responses to the question to the first user, as disclosed by Burdis, to provide a prompt for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 6, Ichihashi discloses the following limitations with the exception of the underlined limitation.
wherein the providing of the language assistance comprises: providing, by the electronic processor, a first translated sentence in the natural spoken language, the first translated sentence corresponding to the response ([0029], “a voice acquisition … which acquires a learner's voice uttered by the learner by requesting the learner to utter of … the sentence corresponding to the image and by sending the image to the learner terminal”);
and providing, by the electronic processor, one or more possible responses to the response to the first user;
and providing, by the electronic processor, one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses ([0033], “The processing means … works by a central processing unit (CPU (a processor))”).
Burdis discloses
and providing, by the electronic processor, one or more possible responses to the response to the first user ([0063], “the prompt may include … a question that the user provides an answer to in the language”);
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, wherein the providing of the language assistance comprises: providing, by the electronic processor, a first translated sentence in the natural spoken language, the first translated sentence corresponding to the response; and providing, by the electronic processor, one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses, as disclosed by Ichihashi, providing, by the electronic processor, the communication to a first artificial intelligence model; receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, and providing, by the electronic processor, one or more possible responses to the response to the first user, as disclosed by Burdis, to provide a prompt for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 7, Ichihashi discloses the following limitations with the exception of the underlined limitations.
further comprising: providing, via the graphic in the virtual reality environment, a second avatar for a second user;
receiving, by the electronic processor via the network, a second communication from the first user to communicate with the second user ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
providing the second communication to the second user;
and receiving, by the electronic processor via the network, a user response from the second user to the first user ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and providing, via the second avatar in the virtual reality environment, the user response to the first user.
Cai discloses
further comprising: providing, via the graphic in the virtual reality environment, a second avatar for a second user ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”);
providing the second communication to the second user ([0009], “The system may provide a user interface and an asynchronous multimedia messaging service to encourage participants to communicate to each other”);
and providing, via the second avatar in the virtual reality environment, the user response to the first user ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”).
Ichihashi and Cai are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, receiving, by the electronic processor via the network, a second communication from the first user to communicate with the second user; and receiving, by the electronic processor via the network, a user response from the second user to the first user, as disclosed by Ichihashi, in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station; outputting, via the first avatar, the response in the target spoken language, further comprising: providing, via the graphic in the virtual reality environment, a second avatar for a second user; providing the second communication to the second user; and providing, via the second avatar in the virtual reality environment, the user response to the first user, as disclosed by Cai, to provide avatar graphics, a mixed language user interface, and an asynchronous multimedia messaging service for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of the addition of avatar graphics, a mixed language user interface, and an asynchronous multimedia messaging service to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 8, Ichihashi discloses the following limitations with the exception of the underlined limitations.
wherein the user response comprises a spoken sentence, wherein the method further comprises: converting, by the electronic processor, the spoken sentence to a written sentence ([0029], “The learning support server … has a voice storage means … which stores model voice of a … a sentence”);
providing, by the electronic processor, the written sentence to the first user;
and providing, by the electronic processor, one or more possible responses to the user response to the first user.
Burdis discloses
providing, by the electronic processor, the written sentence to the first user ([0065], “the response module … uses natural language processing functions, … such as … typing a sentence”);
and providing, by the electronic processor, one or more possible responses to the user response to the first user ([0066], “the analysis module … may compare a user's written response to a prompt against a predefined response”).
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, receiving, by the electronic processor via the network, a second communication from the first user to communicate with the second user; and receiving, by the electronic processor via the network, a user response from the second user to the first user, wherein the user response comprises a spoken sentence, wherein the method further comprises: converting the spoken sentence to a written sentence, as disclosed by Ichihashi, providing, by the electronic processor, the communication to a first artificial intelligence model; receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, providing, by the electronic processor, the written sentence to the first user; and providing, by the electronic processor, one or more possible responses to the user response to the first user, as disclosed by Burdis, to provide a response module and an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of a response module and an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 9, Ichihashi discloses the following limitations with the exception of the underlined limitation.
wherein the written sentence is in the target spoken language, wherein the method further comprises: providing, by the electronic processor, a first translated sentence in a natural spoken language corresponding to the written sentence ([0029], “a voice acquisition … which acquires a learner's voice uttered by the learner by requesting the learner to utter of … the sentence corresponding to the image and by sending the image to the learner terminal”);
and providing, by the electronic processor, one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses.
Burdis discloses
and providing, by the electronic processor, one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses ([0067], “the analysis module … performs … artificial intelligence … to determine a level of similarity between the user's response and the predefined response”).
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, receiving, by the electronic processor via the network, a second communication from the first user to communicate with the second user; and receiving, by the electronic processor via the network, a user response from the second user to the first user, wherein the written sentence is in the target spoken language, wherein the method further comprises: providing, by the electronic processor, a first translated sentence in a natural spoken language corresponding to the written sentence, as disclosed by Ichihashi, providing, by the electronic processor, the communication to a first artificial intelligence model; receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, providing, by the electronic processor, the written sentence to the first user; and providing, by the electronic processor, one or more possible responses to the user response to the first user, and providing, by the electronic processor, one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses, as disclosed by Burdis, to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 10, Ichihashi discloses the following limitations with the exception of the underlined limitation.
further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, providing, by the electronic processor, a language learning lesson in the second language learning station.
Cai discloses
and in response to the second station selection user input, providing, by the electronic processor, a language learning lesson in the second language learning station ([0030], “The learning content management service … may be configured to provide lessons”).
Ichihashi and Cai are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations, as disclosed by Ichihashi, in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station; outputting, via the first avatar, the response in the target spoken language, and in response to the second station selection user input, providing, by the electronic processor, a language learning lesson in the second language learning station, as disclosed by Cai, to provide a learning content management service for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of the addition of a learning content management service to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 11, Ichihashi discloses the following limitations with the exception of the underlined limitation.
further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, accessing, by the electronic processor, an external language learning system.
Burdis discloses
and in response to the second station selection user input, accessing, by the electronic processor, an external language learning system ([0040], “Computer readable program instructions … can be downloaded to … an external computer”).
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations, as disclosed by Ichihashi, providing, by the electronic processor, the communication to a first artificial intelligence model; receiving, from the first artificial intelligence model, a response to the communication in the target spoken language, and in response to the second station selection user input, accessing, by the electronic processor, an external language learning system, as disclosed by Burdis, to provide computer readable program instructions for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of computer readable program instructions to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 12, Ichihashi discloses the following limitations with the exception of the underlined limitation.
further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, operating, by the electronic processor, a language game.
Cai discloses
and in response to the second station selection user input, operating, by the electronic processor, a language game ([0017], “The system and method … may be configured to find well matched learning partners to learn foreign languages in a fun way” The Examiner notes that matched-up learning partners are frequently considered to be participating in a game.).
Ichihashi and Cai are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a method for bidirectional communication in a virtual reality environment, comprising: generating, by an electronic processor, the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receiving, by the electronic processor via a network, a station selection user input to select a first language learning station of the plurality of language learning stations; receiving, by the electronic processor via the network, a communication from the first user in a target spoken language; and providing, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, further comprising: receiving, by the electronic processor via the network, a second station selection user input for a second language learning station of the plurality of language learning stations, as disclosed by Ichihashi, in response to the station selection user input, providing, via a graphic in the virtual reality environment, a first avatar in the first language learning station; outputting, via the first avatar, the response in the target spoken language, and in response to the second station selection user input, operating, by the electronic processor, a language game, as disclosed by Cai, to provide a well-matched learning partner configuration for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of the addition of a well-matched learning partner configuration to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 13, Ichihashi discloses the following limitations with the exception of the underlined limitations.
A system for bidirectional communication in a virtual reality environment, comprising: a memory ([0033], “The storage means … has … a memory such as a ROM and a RAM”);
and an electronic processor coupled with the memory, wherein the electronic processor is configured to: generate the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations ([0048], “the processing means … classifies each learner into a plurality of learning levels …, sends an animation in order to provide an virtual reality environment for a plurality of learners”);
receive a station selection user input to select a first language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
in response to the station selection user input, provide, via a graphic in the virtual reality environment, a first avatar in the first language learning station;
receive a communication from the first user in a target spoken language ([0045], “the voice transmission … sends the model voice to the learner terminal …, the voice transmission … has a means that sends a new model voice spoken by the … speaker”);
provide the communication to a first artificial intelligence model;
receive, from the first artificial intelligence model, a response to the communication in the target spoken language;
output, via the first avatar, the response in the target spoken language;
and provide, via the graphic in the virtual reality environment, language assistance corresponding to the response in ([0048], “a learner interaction … sends an animation in order to provide an virtual reality environment for a plurality of learners”) a natural spoken language of the first user ([0045], “voice transmission means … sends a … voice spoken by the … speaker”).
Cai discloses
in response to the station selection user input, provide, via a graphic in the virtual reality environment, a first avatar in the first language learning station ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”);
output, via the first avatar, the response in the target spoken language ([0055], “the response message may be provided by a user of a … language learning client associated with the avatar”);
Ichihashi and Cai are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a system for bidirectional communication in a virtual reality environment, comprising: a memory; and an electronic processor coupled with the memory, wherein the electronic processor is configured to: generate the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receive a station selection user input to select a first language learning station of the plurality of language learning stations; receive a communication from the first user in a target spoken language; and provide, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, as disclosed by Ichihashi, in response to the station selection user input, provide, via a graphic in the virtual reality environment, a first avatar in the first language learning station; output, via the first avatar, the response in the target spoken language, as disclosed by Cai, to provide avatar graphics and a response message for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of the addition of avatar graphics and a response message to improve the effectiveness of a method for learning foreign languages over a network.
Burdis discloses
provide the communication to a first artificial intelligence model ([0067], “the analysis module … performs … artificial intelligence”);
receive, from the first artificial intelligence model, a response to the communication in the target spoken language ([0067], “the analysis module … performs … artificial intelligence … to determine a level of similarity between the user's response and the predefined response”);
Ichihashi and Burdis are considered analogous to the claimed invention because they are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to arrive at a system for bidirectional communication in a virtual reality environment, comprising: a memory; and an electronic processor coupled with the memory, wherein the electronic processor is configured to: generate the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations, receive a station selection user input to select a first language learning station of the plurality of language learning stations; receive a communication from the first user in a target spoken language; and provide, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, as disclosed by Ichihashi, provide the communication to a first artificial intelligence model; receive, from the first artificial intelligence model, a response to the communication in the target spoken language, as disclosed by Burdis, to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of the addition of an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 14, Ichihashi discloses the following limitation, with the exception of the underlined limitations.
wherein the electronic processor is further configured to: assign ([0033], “The processing means … works by a central processing unit (CPU (a processor))”) a first characteristic to the first avatar using the first artificial intelligence model, the first characteristic being associated with the first language learning station, and wherein the response is generated from the first artificial intelligence model based on the first characteristic.
Cai discloses
a first characteristic to the first avatar using the first artificial intelligence model, the first characteristic being associated with the first language learning station, and wherein the response is generated from ([0052], “Flags … may … be presented below the avatars to convey the avatars nationality and therefore native language” Examiner notes that nationality can be a characteristic of an avatar.).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to assign a first characteristic to the first avatar, the first characteristic being associated with the first language learning station, as disclosed by Cai, in order to provide flags and avatars for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding flags and avatars to improve the effectiveness of a method for learning foreign languages over a network.
Burdis discloses
the first artificial intelligence model based on the first characteristic ([0067], “the analysis module … performs … artificial intelligence”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify the system of Ichihashi so that the response is generated from the first artificial intelligence model based on the first characteristic, as disclosed by Burdis, in order to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 15, Ichihashi discloses the following limitations, with the exception of the underlined limitations.
wherein the electronic processor is further configured to: assign ([0033], “The processing means … works by a central processing unit (CPU (a processor))”) a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic;
provide, via the graphic in the virtual reality environment, the second avatar in the first language learning station ([0048], “the processing means … classifies each learner into a plurality of learning levels …, sends an animation in order to provide an virtual reality environment for a plurality of learners”);
and provide, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic.
Cai discloses
a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic ([0046], “The user name … of the avatar … is also presented” Examiner notes that a user name can be a characteristic of an avatar.);
and provide, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to assign a second characteristic for a second avatar using a second artificial intelligence model, the second characteristic being different from the first characteristic, and to provide, via the graphic in the virtual reality environment, a second communication from the second avatar to the first user based on the second characteristic, as disclosed by Cai, in order to provide flags and avatars for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding flags and avatars to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 16, Ichihashi discloses the following limitations, with the exception of the underlined limitations.
wherein the electronic processor is further configured to: provide ([0033], “The processing means … works by a central processing unit (CPU (a processor))”) one or more conversation topics ([0048], “the processing means … sends an animation in order … to enable the plurality of learners have a conversation via voice”);
receive a topic selection user input to select a first topic of the one or more conversation topics ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
provide the first topic to the first artificial intelligence model;
receive, from the first artificial intelligence model, a question associated with the first topic;
and output, via the first avatar, the question.
Burdis discloses
provide the first topic to the first artificial intelligence model ([0067], “the analysis module … performs … artificial intelligence”);
receive, from the first artificial intelligence model, a question associated with the first topic ([0063], “the prompt may include … a question that the user provides an answer to in the language”);
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide the first topic to the first artificial intelligence model and to receive, from the first artificial intelligence model, a question associated with the first topic, as disclosed by Burdis, in order to provide an analysis module and a prompt for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding an analysis module and a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Cai discloses
and output, via the first avatar, the question ([0055], “the response message may be provided by a user of a … language learning client associated with the avatar”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to output, via the first avatar, the question, as disclosed by Cai, in order to provide a response message for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding a response message to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 17, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
wherein the question comprises a spoken sentence, wherein the electronic processor is further configured to: provide a written sentence corresponding to the spoken sentence to the first user ([0029], “The learning support server … has a voice storage means … which stores model voice of a … a sentence”);
and provide one or more possible responses to the question to the first user.
Burdis discloses
and provide one or more possible responses to the question to the first user ([0063], “the prompt may include … a question that the user provides an answer to in the language”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide one or more possible responses to the question to the first user, as disclosed by Burdis, in order to provide a prompt for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 18, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
wherein to provide the language assistance: the electronic processor is configured to: provide a first translated sentence in the natural spoken language, the first translated sentence corresponding to the response ([0029], “a voice acquisition … which acquires a learner's voice uttered by the learner by requesting the learner to utter of … the sentence corresponding to the image and by sending the image to the learner terminal”);
and provide one or more possible responses to the response to the first user;
and provide one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses ([0033], “The processing means … works by a central processing unit (CPU (a processor))”).
Burdis discloses
and provide one or more possible responses to the response to the first user ([0063], “the prompt may include … a question that the user provides an answer to in the language”);
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide one or more possible responses to the response to the first user, as disclosed by Burdis, in order to provide a prompt for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding a prompt to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 19, Ichihashi discloses the following limitations, with the exception of the underlined limitations.
wherein the electronic processor is further configured to: provide, via the graphic in the virtual reality environment, a second avatar for a second user;
receive a second communication from the first user to communicate with the second user ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
provide the second communication to the second user;
and receive a user response from the second user to the first user ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and provide, via the second avatar in the virtual reality environment, the user response to the first user.
Cai discloses
wherein the electronic processor is further configured to: provide, via the graphic in the virtual reality environment, a second avatar for a second user ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”);
provide the second communication to the second user ([0009], “The system may provide a user interface and an asynchronous multimedia messaging service to encourage participants to communicate to each other”);
and provide, via the second avatar in the virtual reality environment, the user response to the first user ([0052], “The mixed language user interface … includes avatar graphics … representing … users of language learning”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide, via the graphic in the virtual reality environment, a second avatar for a second user, to provide the second communication to the second user, and to provide, via the second avatar in the virtual reality environment, the user response to the first user, as disclosed by Cai, in order to provide avatar graphics, a mixed language user interface, and an asynchronous multimedia messaging service for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding avatar graphics, a mixed language user interface, and an asynchronous multimedia messaging service to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 20, Ichihashi discloses the following limitations, with the exception of the underlined limitations.
wherein the user response comprises a spoken sentence, wherein the electronic processor is further configured to: convert the spoken sentence to a written sentence ([0029], “The learning support server … has a voice storage means … which stores model voice of a … a sentence”);
provide the written sentence to the first user;
and provide one or more possible responses to the user response to the first user.
Burdis discloses
provide the written sentence to the first user ([0065], “the response module … uses natural language processing functions, … such as … typing a sentence”);
and provide one or more possible responses to the user response to the first user ([0066], “the analysis module … may compare a user's written response to a prompt against a predefined response”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide the written sentence to the first user and to provide one or more possible responses to the user response to the first user, as disclosed by Burdis, in order to provide a response module and an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding a response module and an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 21, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
wherein the written sentence is in the target spoken language, wherein the electronic processor is further configured to: provide a first translated sentence in a natural spoken language corresponding to the written sentence ([0029], “a voice acquisition … which acquires a learner's voice uttered by the learner by requesting the learner to utter of … the sentence corresponding to the image and by sending the image to the learner terminal”);
and provide one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses.
Burdis discloses
and provide one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses ([0067], “the analysis module … performs … artificial intelligence … to determine a level of similarity between the user's response and the predefined response”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to provide one or more second translated sentences in the natural spoken language corresponding to the one or more possible responses, as disclosed by Burdis, in order to provide an analysis module for dynamically adapting language learning to a user's language proficiency. One skilled in the art would understand and recognize the value of adding an analysis module to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 22, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
wherein the electronic processor is further configured to: receive a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, provide a language learning lesson in the second language learning station.
Cai discloses
and in response to the second station selection user input, provide a language learning lesson in the second language learning station ([0030], “The learning content management service … may be configured to provide lessons”).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ichihashi, as set forth above, so that the electronic processor is further configured to, in response to the second station selection user input, provide a language learning lesson in the second language learning station, as disclosed by Cai, in order to provide a learning content management service for a method for learning foreign languages over a network. One skilled in the art would understand and recognize the value of adding a learning content management service to improve the effectiveness of a method for learning foreign languages over a network.
Regarding claim 23, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
wherein the electronic processor is further configured to: receive, via a network, a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, access an external language learning system.
Burdis discloses
and in response to the second station selection user input, access an external language learning system ([0040], “Computer readable program instructions … can be downloaded to … an external computer”).
Ichihashi and Burdis are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to arrive at a system for bidirectional communication in a virtual reality environment, comprising: a memory; and an electronic processor coupled with the memory, wherein the electronic processor is configured to: generate the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations; receive a station selection user input to select a first language learning station of the plurality of language learning stations; receive a communication from the first user in a target spoken language; and provide, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, wherein the electronic processor is further configured to receive, via a network, a second station selection user input for a second language learning station of the plurality of language learning stations, as disclosed by Ichihashi, in combination with: provide the communication to a first artificial intelligence model; receive, from the first artificial intelligence model, a response to the communication in the target spoken language; and, in response to the second station selection user input, access an external language learning system, as disclosed by Burdis. The motivation for the combination would have been to provide computer readable program instructions for dynamically adapting language learning to a user's language proficiency. One of ordinary skill in the art would have recognized the value of adding computer readable program instructions to improve the effectiveness of a system that dynamically adapts language learning to a user's language proficiency.
Regarding claim 24, Ichihashi discloses the following limitations, with the exception of the underlined limitation.
further comprising: receive a second station selection user input for a second language learning station of the plurality of language learning stations ([0032], “A learning support server … corresponds to a computer being connected to a network …, and has a processing means … that supports a learner for learning a language according to a request from a learner terminal”);
and in response to the second station selection user input, operate a language game.
Cai discloses
and in response to the second station selection user input, operate a language game ([0017], “The system and method … may be configured to find well matched learning partners to learn foreign languages in a fun way.” The examiner notes that matched-up partners are frequently considered to be participating in a game.).
Ichihashi and Cai are considered analogous art to the claimed invention because both are in the field of language learning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to arrive at a system for bidirectional communication in a virtual reality environment, comprising: a memory; and an electronic processor coupled with the memory, wherein the electronic processor is configured to: generate the virtual reality environment for a first user, the virtual reality environment comprising a plurality of language learning stations; receive a station selection user input to select a first language learning station of the plurality of language learning stations; receive a communication from the first user in a target spoken language; and provide, via the graphic in the virtual reality environment, language assistance corresponding to the response in a natural spoken language of the first user, further comprising: receive a second station selection user input for a second language learning station of the plurality of language learning stations, as disclosed by Ichihashi, in combination with: in response to the station selection user input, provide, via a graphic in the virtual reality environment, a first avatar in the first language learning station; output, via the first avatar, the response in the target spoken language; and, in response to the second station selection user input, operate a language game, as disclosed by Cai. The motivation for the combination would have been to provide a well-matched learning partner configuration for a method of learning foreign languages over a network. One of ordinary skill in the art would have recognized the value of adding a well-matched learning partner configuration to improve the effectiveness of a method of learning foreign languages over a network.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lisa Antoine, whose telephone number is (571) 272-4252 and whose email address is lantoine@uspto.gov. The examiner can be reached Monday-Thursday, 7:30 am – 5:30 pm CT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Publication Information
Information regarding the status of published or unpublished applications may be obtained from the Patent Center. Unpublished application information in the Patent Center is available to registered users. To file and manage patent submissions in the Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about the Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LISA H ANTOINE/
Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715