DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The previous objections to the claims are withdrawn in light of the amendments to the claims (filed 01/13/2026).
Applicant's arguments regarding the rejection of the claims under 35 U.S.C. § 101 have been fully considered but they are not persuasive.
Applicant first argues that “the communication method… are implemented by the first user device and at least a second user device, which explains that the claims of the instant application are meaningful limits on practicing the abstract idea(s)” (Remarks, filed 01/13/2026, p. 8). Examiner respectfully disagrees and reiterates that the first and second user device(s) are merely generic computing components recited at a high level of generality, and these user devices therefore amount to no more than mere instructions to apply the exception using a generic computing device. See paragraph 0023 of Applicant’s specification: “user devices UE, e.g. mobile devices, desktops or laptops.”
Applicant next argues that, inter alia, the following claim elements, at least in combination, amount to significantly more than the abstract idea: “a three-dimensional (3D) immersive virtual environment for the users to integrate the education with the management system to increase the interactivity in the virtual environment, and further the instant application performs the learning evaluation of the second user devices…; and the discussion teaching mode in the virtual environment for discussion” (Remarks, filed 01/13/2026, p. 8). Examiner respectfully disagrees. Examiner again reiterates that all the devices recited by the claim are merely generic computing components recited at a high level of generality. Further, no alleged technological improvement regarding the interactivity within the virtual environment has been disclosed; instead, the claims encompass generic computing components which perform routine functions in order to merely virtualize/automate a typical classroom setting, i.e., performing typical and routine classroom acts, including allowing the students to take part in discussions and evaluating a student’s performance.
See rejection of claims under 35 U.S.C. § 101, as presented in detail below.
Applicant’s arguments regarding the rejection of the claims under 35 U.S.C. § 103 (Remarks, filed 01/13/2026, pp. 8-11) have been considered but are largely moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, with one exception, which is addressed below.
As necessitated by amendments to the claims, the previous rejection of claim 1 as being unpatentable over Du in view of Yerli has been withdrawn, and a new rejection as being unpatentable over Du in view of Yerli and Field has been raised.
However, with regard to the combination of Du and Yerli, Applicant argues there is no motivation to combine Du and Yerli and that, inter alia, the two references are too different, and “[t]herefore, those skilled in the art cannot easily combine the structured teaching environment of Du and the unstructured social interaction of Yerli” (Remarks, filed 01/13/2026, p. 10). Examiner respectfully disagrees. Although Applicant points out alleged differences in logic, control mechanisms, information sources, and architecture of the systems, Examiner notes that the only element from Yerli which has been combined with the system of Du is the ability of the first and second users to log in to the system, which was already implied, though not explicitly disclosed, by Du. Du clearly teaches that users, including an instructor and students, can enter the virtual classroom as avatars on their separate user devices, which implies a login system. Yerli explicitly teaches users logging into a virtual classroom using a cloud streaming system and their own user devices. Furthermore, Examiner directs Applicant to page 7 of the previous Office Action, where a reason was presented to combine the login system of Yerli with the virtual classroom system of Du: “in order to allow users to keep accounts with their own data and to give different privileges to different account types (Yerli, par. 0069, 0208).”
Additionally, Examiner directs Applicant’s attention to MPEP § 2141.II: “‘A person of ordinary skill in the art is also a person of ordinary creativity, not an automaton.’ KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421, 82 USPQ2d 1385, 1397 (2007). ‘[I]n many cases a person of ordinary skill will be able to fit the teachings of multiple patents together like pieces of a puzzle.’ Id. at 420, 82 USPQ2d 1397. Office personnel may also take into account ‘the inferences and creative steps that a person of ordinary skill in the art would employ.’ Id. at 418, 82 USPQ2d at 1396.” Considering this, it would be reasonable for a person having ordinary skill in the art to have combined the login system of Yerli with the virtual classroom system of Du, despite any technical differences between the virtual environments themselves disclosed therein.
See rejections of the claims under 35 U.S.C. § 103, as presented in detail below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 5-13, and 17-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Amended claims 1 and 11 recite that there is “a learning evaluation corresponding to at least the second user device… according to a participation performance of at least the second user device in the virtual classroom… an interaction performance of at least the second user device.” However, any form of evaluating a user device in terms of its (i.e., the device’s) learning, participation, and/or interaction was not present in the original disclosure. Therefore, the various evaluations of the user device are each viewed as new matter.
Dependent claims 5-10, 12-13, and 17-20 are rejected for depending upon rejected claims 1 and 11.
The following dependent claims are further rejected for reciting similar new matter:
Claims 5 and 17 recite “the interaction performance of at least the second user device refers to at least one of a number of raising hands and a number of answering questions of at least the second user device.” Specifically, the element of the device raising hands or answering questions was not present in the original disclosure.
Claims 6-7 recite “the learning evaluation corresponding to at least the second user device.” Specifically, the device being evaluated for its learning was not present in the original disclosure.
Claim 8 recites “the first user device determines to join any group.” Specifically, the device making this determination was not present in the original disclosure.
Claims 9 and 19 recite “the first user device determines to activate a discussion space.” Specifically, the device determining to make this activation was not present in the original disclosure.
Claim 13 recites “the first user device determines to activate the quiz” and “the learning evaluation of at least the second user device.” Specifically, the elements of a device which determines to make this activation and a device whose learning is evaluated were not present in the original disclosure.
Claim 18 recites “the first user device determines to activate a discussion teaching mode” and “the first user device determines to enter any group according to a group index.” Specifically, the elements of a device determining to perform either of these actions were not present in the original disclosure.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 5-13, and 17-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 is rejected because the intended meaning of the limitation “determining, by the first user device, a learning evaluation corresponding to at least the second user device according to a participation performance of at least the second user device in the virtual classroom; wherein the participation performance includes at least one of an attendance and an interaction performance of at least the second user device in the virtual classroom” is unclear.
Specifically, it appears the claim is reciting a device which itself undergoes a learning evaluation, which further evaluates a participation performance of that device and possibly an interaction performance of the device, rather than performing those evaluations on the user who is merely using the device. In other words, the quoted limitations describe a device which is evaluated for its participation performance, etc.
For purposes of examination, Examiner is interpreting these and equivalent instances of “the second user device” to simply mean “the second user, who is using the second user device,” which corresponds with what is described in Applicant’s specification (paragraphs 0005-0006, 0023-0028).
System claim 11 is also rejected for reciting similar limitations, and claims 5-10, 12-13, and 17-20 are rejected for depending upon rejected independent claims 1 and 11.
The following claims are rejected under the same rationale provided above for claim 1:
Claims 5 and 17 recite “the interaction performance of at least the second user device refers to at least one of a number of raising hands and a number of answering questions of at least the second user device”;
Claims 6-7 recite “the learning evaluation corresponding to at least the second user device”;
Claim 8 recites “the first user device determines to join any group”;
Claims 9 and 19 recite “the first user device determines to activate a discussion space”;
Claim 13 recites “the first user device determines to activate the quiz” and “the learning evaluation of at least the second user device”; and
Claim 18 recites “the first user device determines to activate a discussion teaching mode” and “the first user device determines to enter any group according to a group index”.
For purposes of examination, as explained above, each of the preceding instances of “the first user device” or “the second user device” is being interpreted to mean “the first/second user, who is using the first/second user device.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 5-13, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea(s) without significantly more.
Regarding Claim 1, analyzed as the representative claim:
[Step 1] Claim 1 recites “A communication method…” which falls within the “process” statutory category of invention under 35 U.S.C. § 101.
[Step 2A – Prong 1] Claim 1 recites “A communication method in a virtual environment, comprising: initiating, by a first user device, a virtual classroom configured to render a spatial audio in a three-dimensional (3D) space through an education Metaverse application program; attending, by at least a second user device, the virtual classroom through an education Metaverse application program; activating, by the first user, a class teaching mode; wherein the first user device and at least the second user device log in to a cloud streaming system through the corresponding education Metaverse application program to represent a first Avatar and a second Avatar for communication in the virtual classroom, determining, by the first user device, a learning evaluation corresponding to at least the second user device according to a participation performance of at least the second user device in the virtual classroom; wherein the participation performance includes at least one of an attendance and an interaction performance of at least the second user device in the virtual classroom.” The attending, activating, and determining limitations, under their broadest reasonable interpretation, encompass methods of organizing human activity (managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions) or mental processes (including observation, evaluation, judgment, and opinion). Specifically, other than reciting that the determining step is performed by “a first user device,” nothing in the claim precludes the determining step from practically being performed by a human and/or in the human mind. Furthermore, the claim encompasses a student(s) attending a classroom, a teacher “activating” a teaching mode, which under its broadest reasonable interpretation could be considered simply beginning to teach, and a teacher evaluating the student’s learning and participation.
Therefore, the attending and activating steps encompass methods of organizing human activity, and the determining step encompasses a mental process. Accordingly, the claim recites an abstract idea(s).
[Step 2A – Prong 2] The judicial exception is not integrated into a practical application. Specifically, the claim recites the additional elements of an education Metaverse application program, user devices, and a cloud streaming system, wherein the computing devices and executed computer program are recited at a high level of generality and merely virtualize the attending and activating steps and automate the determining step. Therefore, these additional elements amount to no more than mere instructions to apply the exception using a generic computing device, which does not impose any meaningful limits on practicing the abstract idea(s). Thus, the claim is directed to an abstract idea(s).
[Step 2B] The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea(s) into a practical application, the additional element of a program executing on users’ computing devices for performing the method steps amounts to no more than mere instructions to apply the exception using a generic computing device, which cannot provide an inventive concept. Accordingly, representative claim 1 is not patent eligible.
Claims 5-10 are dependent on representative claim 1 and include all the limitations of claim 1, and claims 12-13 and 17-20 are dependent on like system claim 11 and include all the limitations of claim 11. Therefore, the dependent claims recite the same abstract idea(s) as those recited in the independent claims or contain limitations drawn to generic computer components and/or insignificant extra-solution activity. While the dependent claims may have a narrower scope than the representative claim, no claim contains an additional element that integrates the abstract idea(s) into a practical application or renders an inventive concept that transforms the corresponding claim into a patent-eligible application of the otherwise ineligible abstract idea(s). Therefore, claims 5-10, 12-13, and 17-20 are also patent ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Du in view of Yerli and Field.
Regarding Claim 1, Du discloses initiating, by a first user device, a virtual classroom configured to render a spatial audio in a three-dimensional (3D) space through an education Metaverse application program (fig. 3: initiation of VR classroom at step 202; abstract: “providing a three-dimensional (3D) virtual reality (VR) classroom;” par. 0021: “each participant is represented by a three-dimensional (3D) avatar located in a 3D virtual meeting space;” par. 0024: “a spatial audio effect may be adjusted to draw participants' attention to the presentation;” fig. 1: user device 14 worn by instructor I; par. 0037: “the rendering 11 of the 3D VR classroom VC is generated by the electronic processing device 18, and provided via the headset(s) 14 worn by the instructor I and each participant P;” par. 0019: “a non-transitory computer readable medium stores instructions executable by at least one electronic processor to perform a virtual classroom method”);
attending, by at least a second user device, the virtual classroom through an education Metaverse application program (figs. 1-3; abstract: plurality of students attend the virtual classroom; fig. 1: second user device 14 worn by students P; par. 0037: “the rendering 11 of the 3D VR classroom VC is generated by the electronic processing device 18, and provided via the headset(s) 14 worn by the instructor I and each participant P;” par. 0019: “a non-transitory computer readable medium stores instructions executable by at least one electronic processor to perform a virtual classroom method”);
activating, by the first user, a class teaching mode (par. 0031: “The VR classroom VC can serve as an AR/VR setting for a presentation 12 presented by an instructor I to one or more participants P. The rendering 11 of the VR classroom VC can include the presentation 12 (e.g., a presenter and/or a graphic), one or more participants P (four of which are shown in FIG. 1), and anything else required for the presentation 12. The VR classroom VR is an immersive VR environment, in which each participant's view changes as the participant moves his or her head, so as to simulate being in an actual 3D environment;” Examiner notes that because the instructor is able to give a presentation to the students, that instructor inherently has to begin the presentation at some point, thus “activating” the class teaching mode); and
determining, by the first user device, a learning evaluation corresponding to at least the second user device according to a participation performance of at least the second user device in the virtual classroom (fig. 1: first user I using first user device 14 and second user P using second user device 14; par. 0029: “Moreover, the attentiveness of the participants as a group and/or individually can be monitored and provided to the presenter of the presentation in real-time, for example in a window of the presenter's VR display, so that the presenter is made aware of the attentiveness of the group and/or of individual participants during the presentation…The group and/or individual attentiveness data can also be compiled and provided as feedback to the VR classroom organizer… after the VR class is completed so the organizer can assess how well the presentation was received”);
Du implies a login system but does not explicitly disclose one. However, Yerli discloses wherein the first user device and at least the second user device log in to a cloud streaming system through the corresponding education Metaverse application program to represent a first Avatar and a second Avatar for communication in the virtual classroom (par. 0069: “the one or more cloud server computers are further configured to authenticate the user through login authentication credentials comprising a personal identification number (PIN), or username and password, or a combination thereof;” fig. 1: user A uses client device A to log in to the virtual environment and is represented by User Graphical Rep. A, and user B uses client device B to log in to the virtual environment and is represented by User Graphical Rep. B; par. 0208: “may provide the teacher with special administrative rights;” Examiner notes that this indicates that the first user’s login is distinct from the second user’s login because the first user is granted additional administrative privileges and that the user devices are required for the login process).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the login system of Yerli with the virtual classroom system of Du in order to allow users to keep accounts with their own data and to give different privileges to different account types (Yerli, par. 0069, 0208).
Du discloses a learning evaluation according to a participation performance of the second user, but does not explicitly disclose a participation performance which includes an attendance or an interaction performance. However, Du modified by Field discloses the participation performance includes at least one of an attendance and an interaction performance of at least the second user device in the virtual classroom (Field, par. 0028: “data collection included in the invention is the capture of behavioral information and the conversion of the information into data using other types of electronic equipment that monitor and detect information relating to behaviors and interactions. For example, … the number of times a student participates in class during a year, … the number of times a student raises a hand in response to a teacher's question, etc. The present invention captures the behavioral information in real time;” par. 0032: “handheld or classroom-based student or teacher devices”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the interaction performance evaluation of Field with the system of Du in order to offer an objective means of measuring a student’s participation or interaction in class (Field, abstract; par. 0028).
Regarding Claim 5, Du modified by Field further discloses the interaction performance of at least the second user device refers to at least one of a number of raising hands and a number of answering questions of at least the second user device in the virtual classroom (Field, par. 0028: “data collection included in the invention is the capture of behavioral information and the conversion of the information into data using other types of electronic equipment that monitor and detect information relating to behaviors and interactions. For example, … the number of times a student participates in class during a year, … the number of times a student raises a hand in response to a teacher's question, etc. The present invention captures the behavioral information in real time;” par. 0032: “handheld or classroom-based student or teacher devices”).
The combination of the virtual classroom system of Du with the participation performance of Field described above for Claim 1 would have included this particular interaction performance.
Regarding Claim 11, Du discloses a plurality of user devices, equipped with an education Metaverse application program (fig. 1: plurality of user devices 14; par. 0037: “the rendering 11 of the 3D VR classroom VC is generated by the electronic processing device 18, and provided via the headset(s) 14 worn by the instructor I and each participant P”); and
a cloud streaming system (par. 0033: “the electronic processing device 18 can be embodied as a… cloud computing resource”), comprising:
a real-time video module, configured to provide a video streaming service (fig. 3, step 204: “Present VR education material to all students, e.g. VR 3D objects, presentations, video;” par. 0031: “The camera 15 can be used to project the rendering 11 of the VR classroom VC into a viewpoint of each participant P”);
an audio module, configured to provide an audio streaming service (par. 0039: “the presentation 12 comprises an audio component, and the adjusting operation 108 includes adjusting the audio component of the presentation 12. This can include, for example, raising or lowering a volume of the audio component of the presentation 12, or adjust a spatial or directional setting of the audio component”);
a connection platform module, configured to provide a multi-people connection service for the cloud streaming system (fig. 1: multiple people are connected within the same virtual classroom; fig. 3); and
wherein after a first user device initiates a virtual classroom through the education Metaverse application program of a first user device of the plurality of user devices (fig. 3: initiation of VR classroom at step 202; fig. 1: user device 14 worn by instructor I; par. 0037: “the rendering 11 of the 3D VR classroom VC is generated by the electronic processing device 18, and provided via the headset(s) 14 worn by the instructor I and each participant P;” par. 0019: “a non-transitory computer readable medium stores instructions executable by at least one electronic processor to perform a virtual classroom method”), at least a second user device attends the virtual classroom through the education Metaverse application program of a second user device of the plurality of user devices (figs. 1-3; abstract: plurality of students attend the virtual classroom; fig. 1: second user device 14 worn by students P; par. 0037: “the rendering 11 of the 3D VR classroom VC is generated by the electronic processing device 18, and provided via the headset(s) 14 worn by the instructor I and each participant P;” par. 0019: “a non-transitory computer readable medium stores instructions executable by at least one electronic processor to perform a virtual classroom method”);
the first user device activates a class teaching mode (fig. 1: first user I using first user device 14; par. 0031: “The VR classroom VC can serve as an AR/VR setting for a presentation 12 presented by an instructor I to one or more participants P. The rendering 11 of the VR classroom VC can include the presentation 12 (e.g., a presenter and/or a graphic), one or more participants P (four of which are shown in FIG. 1), and anything else required for the presentation 12. The VR classroom VR is an immersive VR environment, in which each participant's view changes as the participant moves his or her head, so as to simulate being in an actual 3D environment;” Examiner notes that because the instructor is able to give a presentation to the students, that instructor inherently has to begin the presentation at some point, thus “activating” the class teaching mode);
wherein the virtual classroom is configured to render a spatial audio in a three-dimensional (3D) space (abstract: “providing a three-dimensional (3D) virtual reality (VR) classroom;” par. 0021: “each participant is represented by a three-dimensional (3D) avatar located in a 3D virtual meeting space;” par. 0024: “a spatial audio effect may be adjusted to draw participants' attention to the presentation”),
wherein a learning evaluation corresponding to at least the second user device is determined by the first user device according to a participation performance of at least the second user device in the virtual classroom (fig. 1: first user I using first user device 14 and second user P using second user device 14; par. 0029: “Moreover, the attentiveness of the participants as a group and/or individually can be monitored and provided to the presenter of the presentation in real-time, for example in a window of the presenter's VR display, so that the presenter is made aware of the attentiveness of the group and/or of individual participants during the presentation…The group and/or individual attentiveness data can also be compiled and provided as feedback to the VR classroom organizer… after the VR class is completed so the organizer can assess how well the presentation was received”).
Du implies a login system but does not explicitly disclose one. However, Yerli discloses a cloud login module, configured to allow the plurality of user devices to log in to the cloud streaming system and direct the plurality of user devices to at least one of the real-time video module, the audio module and the connection platform module (par. 0069: “the one or more cloud server computers are further configured to authenticate the user through login authentication credentials comprising a personal identification number (PIN), or username and password, or a combination thereof;” fig. 1: user A uses client device A to log in to the virtual environment, user B uses client device B to log in to the virtual environment, and both users are able to simultaneously join the same virtual environment with video and audio capabilities);
wherein the first user device and at least the second user device log in to the cloud streaming system through the corresponding education Metaverse application program to represent a first Avatar and a second Avatar for communication in the virtual classroom (fig. 1: user A uses client device A to log in to the virtual environment and is represented by User Graphical Rep. A, and user B uses client device B to log in to the virtual environment and is represented by User Graphical Rep. B; par. 0069: “the one or more cloud server computers are further configured to authenticate the user through login authentication credentials comprising a personal identification number (PIN), or username and password, or a combination thereof”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the login system of Yerli with the virtual classroom system of Du in order to allow users to keep accounts with their own data and to give different privileges to different account types (Yerli, par. 0069, 0208).
Du discloses a learning evaluation according to a participation performance of the second user but not a participation performance which includes attendance or interaction performance. However, Du modified by Field discloses the participation performance includes at least one of an attendance and an interaction performance of at least the second user device in the virtual classroom (Field, par. 0028: “data collection included in the invention is the capture of behavioral information and the conversion of the information into data using other types of electronic equipment that monitor and detect information relating to behaviors and interactions. For example, … the number of times a student participates in class during a year, … the number of times a student raises a hand in response to a teacher's question, etc. The present invention captures the behavioral information in real time;” par. 0032: “handheld or classroom-based student or teacher devices”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the interaction performance evaluation of Field with the system of Du in order to offer an objective means of measuring a student’s participation or interaction in class (Field, abstract; par. 0028).
Regarding Claim 17, Du modified by Field further discloses the interaction performance of at least the second user device refers to at least one of a number of raising hands and a number of answering questions of at least the second user device in the virtual classroom (Field, par. 0028: “data collection included in the invention is the capture of behavioral information and the conversion of the information into data using other types of electronic equipment that monitor and detect information relating to behaviors and interactions. For example, … the number of times a student participates in class during a year, … the number of times a student raises a hand in response to a teacher's question, etc. The present invention captures the behavioral information in real time;” par. 0032: “handheld or classroom-based student or teacher devices”).
The combination of the virtual classroom system of Du with the participation performance of Field described above for Claim 1 would have included this particular interaction performance.
Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Du, Yerli, and Field as applied to claims 1 and 11 above, and further in view of Sharma.
Regarding Claim 6, modified Du does not disclose a remote management module. However, Sharma discloses pre-arranging, by the first user device, a class schedule, a quiz and a course outline related to the virtual classroom (fig. 1; par. 0064: “a client device 104 is associated with a teacher 104;” par. 0007: “The method includes creating learning curriculum map;” par. 0375: “User will also create the syllabus;” par. 0380: “Mapping of standard level and course type with standard goals and objective, mapping of learning outcomes with unit/topics, mapping of learning outcomes with lesson plan, mapping same learning outcome with multiple lesson plans. Add multiple lesson plans under a particular unit/topic;” par. 1189: “the method 4800 includes generating learning plans for the learners based on the learning curriculum map and learning course resources. The learning plans to be learnt at time, path, place and pace of the learners;” par. 0429: “Different types of quizzes will be created with number of checks. While creating quizzes, the dynamic user selects question by lesson plan filter, standard goal or question description or objective description etc.”) on a remote management module (par. 0066: “the IEMS 116 is a Software-as-a-Service (SaaS) web application system hosted on the cloud 114”); and
activating, by the first user device, the quiz in the virtual classroom to evaluate the learning evaluation corresponding to at least the second user device (par. 0429: “The dynamic user assigns quiz(zes) either to all sets or for random sets. The student will then have to read those sets from their login and must attempt the quiz;” Examiner notes that the act of the teacher assigning the quiz to the student who must then complete it is being interpreted as activating the quiz; par. 0064: “a client device 102 is associated with a student, a client device 104 is associated with a teacher 104”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual classroom of Du with the described remote management module of Sharma in order to allow teachers to more easily plan their class (Sharma, abstract).
Regarding Claim 13, modified Du does not disclose a remote management module. However, Sharma discloses a remote management module, configured to provide the first user device to pre-arrange a class schedule, a quiz, and a course outline related to the virtual classroom (fig. 1; par. 0064: “a client device 104 is associated with a teacher 104;” par. 0007: “The method includes creating learning curriculum map;” par. 0375: “User will also create the syllabus;” par. 0380: “Mapping of standard level and course type with standard goals and objective, mapping of learning outcomes with unit/topics, mapping of learning outcomes with lesson plan, mapping same learning outcome with multiple lesson plans. Add multiple lesson plans under a particular unit/topic;” par. 1189: “the method 4800 includes generating learning plans for the learners based on the learning curriculum map and learning course resources. The learning plans to be learnt at time, path, place and pace of the learners;” par. 0429: “Different types of quizzes will be created with number of checks. While creating quizzes, the dynamic user selects question by lesson plan filter, standard goal or question description or objective description etc.”) on the remote management module (par. 0066: “the IEMS 116 is a Software-as-a-Service (SaaS) web application system hosted on the cloud 114”); and
the first user device determines to activate the quiz in the virtual classroom to evaluate the learning evaluation of at least the second user device (par. 0429: “The dynamic user assigns quiz(zes) either to all sets or for random sets. The student will then have to read those sets from their login and must attempt the quiz;” Examiner notes that the act of the teacher assigning the quiz to the student who must then complete it is being interpreted as activating the quiz; par. 0064: “a client device 102 is associated with a student, a client device 104 is associated with a teacher 104”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual classroom of Du with the described remote management module of Sharma in order to allow teachers to more easily plan their class (Sharma, abstract).
Claims 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Du, Yerli, and Field as applied to claims 1 and 11 above, and further in view of Walton.
Regarding Claim 7, modified Du does not explicitly disclose arranging a quiz. However, Walton discloses arranging, by the first user device, a quiz through the education Metaverse application program to evaluate the learning evaluation corresponding to at least the second user device (par. 0031: “the option to select the assessments content account 1004, where the educator 30 can create tests and academic performance measures in the virtual reality environment;” fig. 1: educator’s computer 30 and learner’s computer 20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the quiz arrangement and creation of Walton with the system of Du in order to provide a method of objective evaluation (Walton, par. 0031).
Regarding Claim 12, modified Du does not explicitly disclose a text communication module. However, Walton discloses the cloud streaming system further includes a text communication module configured to provide a text, an audio message, an emoticon or an image conversation service (par. 0031: “allow educator 30 to interact via audio only, text only, a combination of audio and text, or video conferencing;” par. 0043: “The educator 30 can establish a separate room in the environment for learners 20 to chat informally (e.g. a Chat Room)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the text communication module of Walton with the system of Du in order to provide users an additional means of communication within the virtual environment (Walton, par. 0031).
Claims 8-9 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Du, Yerli, and Field as applied to claims 1 and 11 above, and further in view of Walton and Sun.
Regarding Claim 8, modified Du does not disclose a discussion mode with grouped users. However, Walton discloses activating, by the first user device, a discussion teaching mode to group at least the second user device for discussion (par. 0043: “although multiple sections of the same class can be arranged to run concurrently, but separately. By integrating the speech technology, and facilitating small group interaction, greater access to the discussion is created, with the least amount of frustration;” par. 0039: “an environment where the learners feel more comfortable expressing their ideas, which can increase their motivation to participate substantively in class discussions;” fig. 1: educator’s computer 30 and learner’s computer 20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the grouped discussion mode of Walton with the virtual classroom of Du in order to encourage conversation amongst all students and to make them more comfortable while doing so, since students are generally more comfortable speaking in small groups than in front of the entire classroom (Walton, pars. 0039-0043).
Modified Du does not explicitly disclose a group index. However, Sun discloses the first user device determines to join any group according to a group index (fig. 7: first user is able to join any group by group index, which in this example is 1-2; par. 0110: “a teacher terminal 29 device and a plurality of student terminals 31 devices”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the group index of Sun with the discussion groups of Du modified by Walton in order to allow the educator to easily join and observe any group (Sun, fig. 7; par. 0149).
Regarding Claim 9, Du modified by Sun further discloses the first user device determines to activate a discussion space and assign at least the second user device of any group to enter the discussion space for discussion (Sun, par. 0149: “When all student actors have enrolled, the teacher actor 41 will group them from his/her teacher terminal 29 using the teacher user interface provided by the VR application by interacting with the use case Group Students;” par. 0110: “a teacher terminal 29 device and a plurality of student terminals 31 devices”).
The combination of the virtual classroom of Du with the groups and group indices of Sun described above for Claim 8 would have included this group assignment.
Regarding Claim 18, modified Du does not disclose a discussion mode with grouped users. However, Walton discloses the first user device determines to activate a discussion teaching mode to group at least the second user device for discussion (par. 0043: “although multiple sections of the same class can be arranged to run concurrently, but separately. By integrating the speech technology, and facilitating small group interaction, greater access to the discussion is created, with the least amount of frustration;” par. 0039: “an environment where the learners feel more comfortable expressing their ideas, which can increase their motivation to participate substantively in class discussions;” fig. 1: educator’s computer 30 and learner’s computer 20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the grouped discussion mode of Walton with the virtual classroom of Du in order to encourage conversation amongst all students and to make them more comfortable while doing so, since students are generally more comfortable speaking in small groups than in front of the entire classroom (Walton, pars. 0039-0043).
Modified Du does not explicitly disclose a group index. However, Sun discloses the first user device determines to enter any group according to a group index (fig. 7: first user is able to join any group by group index, which in this example is 1-2; par. 0110: “a teacher terminal 29 device and a plurality of student terminals 31 devices”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the group index of Sun with the discussion groups of Du modified by Walton in order to allow the educator to easily join and observe any group (Sun, fig. 7; par. 0149).
Regarding Claim 19, Du modified by Sun further discloses the first user device determines to activate a discussion space and assigns at least the second user device of any group to enter the discussion space for discussion (Sun, par. 0149: “When all student actors have enrolled, the teacher actor 41 will group them from his/her teacher terminal 29 using the teacher user interface provided by the VR application by interacting with the use case Group Students;” par. 0110: “a teacher terminal 29 device and a plurality of student terminals 31 devices”).
The combination of the virtual classroom of Du with the groups and group indices of Sun above for Claim 18 would have included this group assignment.
Regarding Claim 20, Du modified by Walton further discloses the real-time video module and the audio module are configured to conduct video conferencing discussion in the discussion teaching mode (Walton, par. 0016: “The educator can also facilitate an interactive discussion with learners;” par. 0031: “the option to select the discussion and interaction content account 1003, which allows the educator 30 to interact with learners 20… via… video conferencing”).
The combination of the virtual classroom of Du with the discussion mode and groups of Walton described above for Claim 18 would have included this video conferencing.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Du, Yerli, and Field as applied to claim 1 above, and further in view of Bader-Natal.
Regarding Claim 10, modified Du does not disclose a discussion space that the second user can initiate. However, Bader-Natal discloses initiating, by at least the second user device, a discussion space and entering the discussion space for discussion, before the first user device activates the class teaching mode (fig. 5; par. 0125: “Once the participant has selected the class initialization graphic 401, the participant is taken to a pre class user interface such as shown in FIG. 5. In this embodiment, video thumbnails of other participants who have logged in to the classroom are displayed within a pre-class discussion region 501. A set of tools 502 are also provided to allow users to text one another, open personal video chat sessions, etc.;” fig. 19; par. 0076: “The clients may comprise any form of end user devices including desktop/laptop computers (e.g., PCs or Macs), smartphones (e.g., iPhones, Android phones, etc), tablets (e.g., iPads, Galaxy Tablets, etc), and/or wearable devices (e.g., smartwatches such as the iWatch or Samsung Gear watch)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the initiated discussion space of Bader-Natal with the virtual classroom of Du in order to encourage further discussion between students and increase the realism of the virtual environment (Bader-Natal, abstract; fig. 7; par. 0125).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIE DOSHER whose telephone number is (571) 272-4842. The examiner can normally be reached Monday - Friday, 10 a.m. - 6 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol, can be reached at (571) 272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.G.D./Examiner, Art Unit 3715
/DMITRY SUHOL/Supervisory Patent Examiner, Art Unit 3715