DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the Request for Continued Examination filed on 02/12/2026.
Claims 1, 7, 13, and 23 have been amended and are hereby entered.
Claims 24-27 have been added.
Claims 3, 9, 15, and 21 have been canceled.
Claims 1-2, 4-8, 10-14, 19-20, and 22-27 are currently pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/12/2026 has been entered.
Response to Arguments
Applicant’s arguments, see Pages 8-12, filed 02/12/2026, with respect to the 35 U.S.C. 101 rejections of claims 1-15 and 19-23 have been fully considered but are not persuasive. The 35 U.S.C. 101 rejections of claims 1-15 and 19-23 have been maintained.
After summarizing the steps of the eligibility analysis on pages 8-9, Applicant notes that a claim may be eligible when directed to an improvement in technology. Applicant discusses the Desjardins case and the 12/05/2025 memo regarding that decision. Finally, on pages 9-10, Applicant notes from the 08/04/2025 Memo that a rejection should be made only if it is more likely than not that the claim is ineligible.
Regarding the claims at issue, Applicant argues that the amended independent claims now recite video signals representing video communications and extracting information from video signals, which Applicant argues should be treated as additional elements. Examiner agrees, and the eligibility analysis below determines that the reception of video signals and the extraction of information from video signals are additional elements at Step 2A Prong Two.
Applicant argues on pages 10-12 that the features of the amended claims, including the receipt of video signals and the extraction of data from the video signals, reflect a technical improvement. Applicant particularly argues that screen space constraints are the technical problem being solved in the distributed learning environment. Applicant argues that the invention mitigates the additional restrictions on the ability of a teacher to monitor a distributed class compared to an in-person class. Particularly, Applicant argues that the “amendments to the claims (further incorporating video signals) further evidence and reflect such an improvement. For example, Applicant submits that it is not possible to monitor video from the multiple students in the distributed environment with even limited ability, in view of screen space constraints. This technological problem is further reflected in the recitation of video signals, and extraction of indicators therefrom”. Examiner respectfully disagrees.
First, Applicant’s arguments center on the screen size limitations of observing multiple students in a distributed classroom environment. Examiner notes that the amended claims explicitly recite the collection of information regarding “one or more students” (emphasis added). Accordingly, Applicant’s screen size arguments do not appear to be applicable to the claimed embodiment of one student.
Regardless of the particular number of students in a class using the claimed invention, the claimed invention still does not provide an improvement to technology. The educational effectiveness indicators recited in the claims “include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process” (amended claim 1). To read on the extraction of the educational effectiveness indicators, a count of the number of times each student speaks can be recorded during the class. While the teacher of the class may be focused on actually teaching the class, recording the number of times each student speaks could be performed by a teaching aide(s)/assistant(s)/proctor(s), for example by making a tally mark next to a student’s name on a class roster each time the student spoke. An aide of the distributed class environment noting which students’ cameras are off during a lesson is another example of the extraction of educational effectiveness indicators, and the other listed indicators can also be recorded by a human, even if the teacher is not the one doing the recording. This is again similar to a teacher in an in-person classroom being unable to monitor each and every student and interaction but being assisted by teaching aides.
Therefore, instead of providing a technical improvement to address small screen size, the claimed invention applies the judicial exception of monitoring/managing student behavior in a class using generic computer components to perform tasks that could be performed by a human teaching aide(s). The computing components are being used as a tool to gather and analyze information that is part of the judicial exception (managing interactions of a class). See MPEP 2106.05(f)(2), which identifies the use of computing elements as tools as indicative of additional elements that are mere instructions to apply an exception.
As discussed previously, this automated analysis of an educational process does not address the alleged technical problem of a small screen size, as the instructor still has limited ability to directly monitor students during a class. Recording and analyzing audio and video data does not impact the screen size of the instructor or make the smaller screen size less of a burden. The claimed invention consolidates/summarizes information gleaned from student behavior and presents the information to the teacher. Instead of being a technical improvement allowing the teacher to directly observe more students/perform more detailed observation on a small screen size, the claimed invention summarizes information that may be outside a teacher’s field of view and presents the summarized information to them. Instead of being a technical improvement, this presentation of indicators is analogous to a teacher receiving feedback from an aide regarding a group of students that the teacher could not directly observe during class.
Regarding Applicant’s argument that the measurement of indicators “accurately, in detail, and reliably” (Page 11 of Remarks) and the provision of reports at varying levels of granularity also provide a technical solution, Examiner notes that reporting out information at differing levels of granularity (individual, class, department, school, etc.) is part of the abstract idea. Specifically, generating a report about a particular class or department is abstract data manipulation/filtering to only include data associated with a desired individual/class/school. As discussed above, the effectiveness indicators covered in the claimed invention include such indicators as a count of the number of times a student speaks during class. While a teacher may be focused on the lesson instead of tracking the number of times a student speaks, a human observer could tally up the number of times a student spoke in a class as discussed above. A human could consolidate/manipulate these tallies as desired to arrive at an individual/class/department/school level report. Therefore, instead of a technical improvement, the claimed invention applies an abstract idea using generic computing components.
Applicant appears to argue on pages 11-12 that the additional elements were dismissed as generic computing components without consideration as to whether the elements confer a technological improvement. Examiner respectfully disagrees. As discussed above, the additional elements of the claims have been considered as a whole. The additional elements, as a whole, perform the functions of a human teaching aide(s) in monitoring students and reporting back to the teacher of the class regarding student engagement that the teacher may have missed. MPEP 2106.05(a) I. lists “mere automation of manual processes” among examples that may not be sufficient to show an improvement. Furthermore, MPEP 2106.05(f)(2) states “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”. In the context of the present invention, the computing components receive and process audio and video signals to perform the abstract idea of monitoring student behavior, and further perform the abstract idea of accessing demographic data and generating reports at a specified level of granularity based on the monitored behavior and demographic data. Accordingly, even after Desjardins, the additional elements still amount to no more than instructions to apply the judicial exception using generic computing components.
Therefore, the amended claims do not provide a technical solution to a technical problem of a limited screen size for an instructor. Instead, the additional elements argued by Applicant, in combination, amount to no more than mere instructions to apply the exception at Step 2A Prong Two and at Step 2B. Accordingly, the amended claims are not patent eligible, and Applicant’s arguments are not persuasive.
Applicant’s arguments, see Pages 12-14, filed 02/12/2026, with respect to the 35 U.S.C. 103 rejection of claim 1 have been fully considered but are not persuasive. The 35 U.S.C. 103 rejection of claim 1 has been maintained.
After summarizing the rejections and reproducing claim 1 across pages 11-12, Applicant argues that the combination of Nelson and Peters does not teach the newly amended limitations of "receiv[ing], from a communication device associated with the educator, a selection of a level of granularity... and generat[ing] a plurality of reports... corresponding to the selected level of granularity". Applicant notes that Nelson’s cited reporting displays gender equity metrics, and further argues that demographic data and level of granularity are different features in Applicant’s disclosure. Applicant argues that Nelson does not teach a selection of a level of granularity, and that Peters does not remedy this alleged deficiency. Examiner respectfully disagrees.
First, Examiner acknowledges that [0089] of Applicant’s specification treats demographics and “level of granularity” as different criteria, as Applicant argues. However, even treating demographics and granularity separately, Nelson still teaches the limitations argued by Applicant.
Particularly, while Applicant cites to examples of granularity of class level, degree program level, etc. in the Remarks, Examiner notes that Applicant’s specification explicitly considers the individual participant and a particular meeting/class as their own levels of granularity (see Applicant specification [0090] “process 300 can generate reports indicative of engagement at various levels of granularity (e.g., for individual participants, for a particular meeting/class…”). Regarding Nelson, Examiner points to the “Feedback Frames” in Figure 17 and [0082] “As shown in FIG. 17, the result summary page 44 may include topic headings to navigate the user to the metrics relating to the discussion. These topics may include group analytics… individual participant analytics” regarding the selection. Examiner also points to the “Group Analytics” in Figs. 18-19, “Gender Equity” in Fig. 22, and the “Participant Analytics” in Fig. 28. In particular, Nelson teaches that the user (who is a teacher per Nelson [0061]) uses the interface in Fig. 17 to view either various group/class session level analytics as in at least Figs. 18-19 and 22 or individual level analytics as in Fig. 28. Examiner notes the back arrow to the “Feedback Frames” in the top left of each of Figs. 18-19, 22, and 28. Therefore, Nelson teaches receiving a selection of a level of granularity that is an exemplary level of granularity in Applicant’s disclosure [0090] (individual or class session level).
The teacher’s selection at Fig. 17 to navigate among various class session level metrics or individual level metrics results in the reports of at least Figs. 18-19, 22, and 28 being displayed. Regarding the reporting corresponding to the granularity level and the demographic information, Applicant indicates on page 13 of the Remarks that the “Gender Equity” analytics are based on the demographic information of the students, and the selection of the class session level report in Fig. 17 means that the Gender Equity analytics are at the specified level of granularity of a class session level. See also the meeting time/date/attendance in the top left corner of the Gender Equity report, which indicates the particular class session the report is for. As for the individual granularity level, Fig. 28 shows an icon for the particular participant. The icon is determined based on the gender of the particular student per Nelson [0065]. Therefore, both the group level and individual level analytics are reported based on the specified level of granularity from the teacher and based on the demographic information. Accordingly, Nelson teaches the limitations at issue in Applicant’s arguments. Applicant’s arguments against the combination of Nelson and Peters regarding claim 1 are therefore unpersuasive.
Applicant’s arguments on Page 14 regarding independent claims 7 and 13 are unpersuasive for reasons similar to those discussed above regarding claim 1. Applicant’s arguments that Peters, Christ, and Bixler do not teach the limitations at issue are moot because Nelson already teaches those limitations. Applicant’s arguments that claims 2-6, 9-12, 14-15, and 19-23 are distinct from the cited references by virtue of their dependency on their respective independent claims are not persuasive, as the independent claims are taught by the cited art.
Examiner notes that new claims 24-27 are taught in the prior art as shown below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-8, 10-14, 19-20, and 22-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite determining the effectiveness of an educational process.
As an initial matter, claims 1-2, 4-6, 19-20, and 24-26 fall into at least the machine category of statutory subject matter. Claims 7-8, 10-12, 22, and 27 fall into at least the process category of statutory subject matter. Finally, claims 13-14 and 23 fall into at least the manufacture category of statutory subject matter. Therefore, all claims fall into at least one of the statutory categories. Eligibility analysis proceeds to Step 2A.
Claim 1 recites the concept of determining the effectiveness of an educational process which is a certain method of organizing human activity including Managing Personal Behavior or Relationships or Interactions Between People. Managing education processes in a distributed education environment, comprising: receive from a plurality of one or more students, information about a live educational process being experienced in a distributed education environment where at least an educator is remotely located from the one or more students and the distributed education environment facilitates communication between the educator and the one or more students including at least audio communications; receive, from the educator, a selection of a level of granularity; associate identifying information with components of the audio communications and of the video communications; extract one or more educational effectiveness indicators from at least the audio and an operation of the distributed education environment during the live educational process, wherein the one or more educational effectiveness indicators include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process; access demographic information about the one or more students and correlate the demographic information with the one or more students; and generate a plurality of reports about individual students of the one or more students and groups within the one or more students using the one or more educational effectiveness indicators and the demographic 
information, corresponding to the selected level of granularity all, as a whole, fall under the category of Managing Personal Behavior or Relationships or Interactions Between People. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a system, a computer system, at least one processor, a plurality of communication devices respectively associated with one or more students, receiving and extracting indicators from audio signals, receiving and extracting indicators from video signals, a communication device associated with the educator, and at least one database. These additional elements are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Further, the additional element of the communications being “real-time” also amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a system, a computer system, at least one processor, a plurality of communication devices respectively associated with one or more students, receiving and extracting indicators from audio signals, receiving and extracting indicators from video signals, a communication device associated with the educator, and at least one database amount to no more than mere instructions to apply the exception using generic computer components. Also as discussed above, the additional element of the communications being “real-time” amounts to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claims 2 and 4 further limit the abstract idea of claim 1 without adding any new additional elements. Therefore, by the analysis of claim 1 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the abstract idea. The claims are not patent eligible.
Claim 5 further limits the abstract idea of claim 4 while introducing the additional element of a registration database. The claim does not integrate the abstract idea into a practical application because the element of a registration database is recited at a high-level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Adding this new additional element into the additional elements from claim 4 still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 6 further limits the abstract idea of claim 4 while introducing the additional element of a registration database. The claim does not integrate the abstract idea into a practical application because the element of a registration database is recited at a high-level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Adding this new additional element into the additional elements from claim 4 still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 7 recites the concept of determining the effectiveness of an educational process which is a certain method of organizing human activity including Managing Personal Behavior or Relationships or Interactions Between People. A method for managing education processes in a distributed education environment, comprising: receiving, from a plurality of one or more students, information about a live educational process being experienced in a distributed education environment where at least an educator is remotely located from the one or more students and the distributed education environment facilitates communication between the educator and the one or more students including at least audio communications; receiving, from the educator, a selection of a level of granularity; associating identifying information of the plurality of communication devices with components of the audio communications and of the video communications; extracting one or more educational effectiveness indicators from at least the audio and an operation of the distributed education environment during the live educational process, wherein the one or more educational effectiveness indicators include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process; accessing demographic information about the one or more students and correlating the demographic information with the one or more students; and generating a plurality of reports about individual students of the one or more students and groups within the one or more students using 
the one or more educational effectiveness indicators and the demographic information, corresponding to the selected level of granularity all, as a whole, fall under the category of Managing Personal Behavior or Relationships or Interactions Between People. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a plurality of communication devices respectively associated with one or more students, receiving and extracting indicators from audio signals, receiving and extracting indicators from video signals, a communication device associated with the educator, and at least one database. These additional elements are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Further, the additional element of the communications being “real-time” also amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a plurality of communication devices respectively associated with one or more students, receiving and extracting indicators from audio signals, receiving and extracting indicators from video signals, a communication device associated with the educator, and at least one database amount to no more than mere instructions to apply the exception using generic computer components. Also as discussed above, the additional element of the communications being “real-time” amounts to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claims 8 and 10 further limit the abstract idea of claim 7 without adding any new additional elements. Therefore, by the analysis of claim 7 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the abstract idea. The claims are not patent eligible.
Claim 11 further limits the abstract idea of claim 10 while introducing the additional element of a registration database. The claim does not integrate the abstract idea into a practical application because the element of a registration database is recited at a high-level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Adding this new additional element into the additional elements from claim 10 still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 12 further limits the abstract idea of claim 10 while introducing the additional element of a registration database. The claim does not integrate the abstract idea into a practical application because the element of a registration database is recited at a high-level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Adding this new additional element into the additional elements from claim 10 still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 13 recites the concept of determining the effectiveness of an educational process which is a certain method of organizing human activity including Managing Personal Behavior or Relationships or Interactions Between People. A method for managing education processes in a distributed education environment, the method comprising: receiving information from a plurality of sources about a live educational process being experienced in a distributed education environment where at least an educator is remotely located from one or more students and the distributed education environment facilitates communication between the educator and the one or more students including at least audio communications and video communications; receiving a selection of a level of granularity from a source associated with the educator; extracting one or more educational effectiveness indicators from at least the audio communications, the video communications, and an operation of the distributed education environment during the live educational process, wherein the one or more educational effectiveness indicators include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process; accessing demographic information about the one or more students and correlating the demographic information with the one or more students; generating a plurality of reports about individual students of the one or more students and groups within the one or more students using the one or more educational effectiveness indicators and the demographic 
information, corresponding to the selected level of granularity; receiving information from the plurality of sources about a plurality of live educational processes across an educational institution being experienced in the distributed education environment; and aggregating one or more educational effectiveness indicators and the plurality of reports across the plurality of live educational processes, wherein the demographic information includes the educational institution or a part of the educational institution all, as a whole, fall under the category of Managing Personal Behavior or Relationships or Interactions Between People. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a non-transitory computer readable medium containing computer executable instructions, a processor, at least one database, and a registration database. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Further, the additional element of the communications being “real-time” also amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is likewise no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer readable medium containing computer executable instructions, a processor, at least one database, and a registration database amount to no more than mere instructions to apply the exception using generic computer components. Also as discussed above, the additional element of the communications being “real-time” amounts to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is likewise no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 14 further limits the abstract idea of claim 13 without adding any new additional elements. Therefore, by the analysis of claim 13 above, claim 14 does not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claim is not patent eligible.
Claims 19-20 further limit the abstract idea of claim 1 without adding any new additional elements. Therefore, by the analysis of claim 1 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claims are not patent eligible.
Claim 22 further limits the abstract idea of claim 7 without adding any new additional elements. Therefore, by the analysis of claim 7 above, this claim does not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claim is not patent eligible.
Claim 23 further limits the abstract idea of claim 13 while introducing the additional element of extracting indicators from audio signals and video signals. The claim does not integrate the abstract idea into a practical application because the element of extracting indicators from audio signals and video signals is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Considered in combination with the additional elements of claim 13, this new element still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 24 further limits the abstract idea of claim 1 without adding any new additional elements. Therefore, by the analysis of claim 1 above, claim 24 does not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claim is not patent eligible.
Claim 25 further limits the abstract idea of claim 1 while introducing the additional element of associating the identifying information of the plurality of communication devices with components of the audio communications and of the video communications in real time or near real time. The claim does not integrate the abstract idea into a practical application because this element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Considered in combination with the additional elements of claim 1, this new element still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 26 further limits the abstract idea of claim 1 while introducing the additional element of generating the plurality of reports in real time or near real time. The claim does not integrate the abstract idea into a practical application because this element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Considered in combination with the additional elements of claim 1, this new element still amounts to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 27 further limits the abstract idea of claim 7 while introducing the additional elements of associating the identifying information of the plurality of communication devices with components of the audio communications and of the video communications in real time or near real time and generating the plurality of reports in real time or near real time. The claim does not integrate the abstract idea into a practical application because these elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Considered in combination with the additional elements of claim 7, these new elements still amount to no more than mere instructions to apply the exception using generic computer components. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7-8, 19-20, 22, and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (U.S. Pre-Grant Publication No. 2018/0033332, hereafter known as Nelson) in view of Peters et al. (U.S. Pre-Grant Publication No. 2021/0076002, hereafter known as Peters).
Regarding claim 1, Nelson teaches:
A system for managing education processes in a (see [0062] "the system 10 can [include] a user interface 12 displaying a menu 14 where users may select an existing group 16 or create a new group" and [0099]-[0104], particularly [0100], for the system comprising a PC with a CPU and a memory storing instructions for execution by the CPU. See [0096], [0064], and [0010] for the system managing a classroom environment)
receive, from a plurality of communication devices audio signals representing audio communications (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time. The system can include optional cameras, microphones, and/or audio speakers, among other recording and replay technologies" for receiving audio signals representing audio communications from a variety of communication devices of the classroom environment. See [0061] and [0065] for students and teachers being participants in the environment)
receive, from a communication device associated with the educator, a selection of a level of granularity (see [0082] “As shown in FIG. 17, the result summary page 44 may include topic headings to navigate the user to the metrics relating to the discussion. These topics may include group analytics…individual participant analytics”, [0084] “upon selecting a specific participant, the system can visually highlight 50 the selected participant's portion of the visual representation of participation in the discussion. FIG. 19 depicts a schematic of a user interface illustrating the screen in FIG. 18 with one of the participant results highlighted to illuminate that participant's level of participation in various charts appearing on the screen”, [0093] “FIG. 28 depicts a schematic of a user interface illustrating data for a single group participant 18. A menu in the left of the screen allows the user to view the data for other participants. The screen includes a field for taking notes about the participant 18”, and [0094] “the discussion of a group of participants can be compared to various other groups of different (or partially different) groups of participants in previously recorded discussions” for the selection of a group/class-level of granularity and a selection of an individual level of granularity by the user. See [0061] for the user of the device navigating based on granularity level being a teacher. Examiner notes that Applicant’s specification [0090] explicitly considers an individual and particular class/meeting as examples of levels of granularity)
extract one or more educational effectiveness indicators from at least the audio signals, (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time" for extracting the number of times spoken and the duration of speech of identified participants in the classroom. See [0076]-[0078] for the operation of the classroom environment including input from the moderator/teacher that there is a period of silence or chaos, multiple people speaking, or small group discussions)
wherein the one or more educational effectiveness indicators include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time" for extracting the number of times spoken and the duration of speech of identified participants in the classroom. Examiner notes that only one of the listed indicators is required to teach the limitation as a whole)
access at least (see Claim 16 "wherein the controller is further configured to receive demographic information related to each of the plurality of participants, wherein the results summary includes a graphic visually representing the participation of a group of participants within a demographic" for receiving and correlating demographic information with participants. See [0065] for the receipt of gender data for each participant during group/class setup)
and generate a plurality of reports about individual students of the one or more students and groups within the one or more students using the one or more educational effectiveness indicators and the demographic information, corresponding to the selected level of granularity (see Fig. 22 and [0085] "The results summary can be configured to summarize any number of metrics. For example, FIG. 22 is a schematic of a user interface illustrating data relating to the gender equity in the group discussion, including a comparison of how often and how long females and males spoke" for a report on number of times spoken and time spoken broken out by gender at the group level. See Fig. 28 and [0093] "FIG. 28 depicts a schematic of a user interface illustrating data for a single group participant 18. A menu in the left of the screen allows the user to view the data for other participants. The screen includes a field for taking notes about the participant 18" for reports on individual participants' number of times spoken and duration of time spoken (at the individual level of granularity). Examiner also notes that the icon 18 displayed in the individual report in Fig. 28 is generated according to demographic information per [0065] “Further, the icon can be gender specific for females and males. As the individuals in the group are named, the software may create icons for them and display the name above the icon representing the participant 18. These participant icons may be displayed in on the user interface” so the individual reporting is also done according to demographic information)
As discussed above, Nelson teaches the speaking occurrence and duration data being collected in an in-person classroom environment, and while Nelson contemplates the utility of its invention in a video class meeting in [0013], Nelson does not explicitly teach the distributed classroom environment in which a teacher is remotely located from students who each have communication devices associated with them. Nelson accordingly does not explicitly teach the reception of video signals and the extraction of educational effectiveness indicators from the video signals. While Nelson teaches the receiving of demographic/gender data of the participants in the class, Nelson also does not explicitly teach the demographic data being received from at least one database. Nelson also does not explicitly teach associating identifying information of the communication devices with components of the audio communications and video communications. However, Peters teaches:
receive, from a plurality of communication devices respectively being associated with one or more students, information about a live educational process being experienced in a distributed education environment where at least an educator is remotely located from the one or more students and the distributed education environment facilitates real-time communication between the educator and the one or more students including at least audio signals representing audio communications and video signals representing video communications (see [0115] "The monitoring of emotion and feedback about emotion can be performed during remote interactions, shared-space interactions, or hybrid interactions having both local and remote participants...examples of remote interactions include various forms of video conferencing, such as...streamed lectures... Examples of shared-space interactions include in-class instruction in school" and [0145] "This information can be output to a teacher's device, for example, overlaid or incorporated into a video feed showing a class, with the emotional states of different students indicated near their faces. The same information can be provided in remote learning (e.g., electronic learning or e-learning) scenarios, where the emotional states and engagement of individuals are provided in association with each remote participant's video feed" for the education environment being distributed with students communicating with teachers over video feed and located remotely. See [0068] "Each of the endpoints 12a-f communicates a source of audio and/or video and transmits a resulting media stream to the moderator module 20", Fig. 3, [0081], and [0109] for each of the endpoint devices associated with each of the participants/students communicating a stream of video and audio signals to the moderator/teacher device. See [0008] and [0118] for the conferencing being performed in real-time)
associate identifying information of the plurality of communication devices with components of the audio communications and of the video communications (see [0084] “the analysis processor 30 is configured to derive a raw score for each participant endpoint 12a-f for each displayed characteristic relating to each participant's visual and audio media stream input 46”, [0085] “throughout the analysis processor 30, the audio input media stream is analyzed by audio recognition technology in order to detect individual speaking/participation time, keyword recognition, and intonation and tone which indicate certain characteristics of each participants collaborative status” and [0312]-[0318] creating and storing an emotional response profile that identifies a user and their endpoint devices and is associated with components/features of the audio and video communications like speaking time and keywords as well as facial expression)
extracting one or more educational effectiveness indicators from the audio and video signals in a distributed education environment (see [0072] “module 110a can determine a frequency and duration that the participant is speaking. Similarly, the module 110a can determine a frequency and duration that the participant is listening. The module 110b determines eye gaze direction of the participant and head position of the participant, allowing the module to determine a level of engagement of the participant at different times during the video conference. This information, with the information about when the user is speaking, can be used by the modules 110a, 110b to determine periods when the participant is actively listening (e.g., while looking toward the display showing the conference) and periods when the user is distracted and looking elsewhere” for educational effectiveness indicators of speaking time and gaze direction. Also see [0073])
access at least one database of demographic information about the one or more students and correlate the demographic information with the one or more students (see [0323] "Each data gathering element 1902 represents collection of some or all of the data dimensions shown in element 1904, such as a...demographic attributes (e.g., age, gender, ethnicity estimation)...geographic location (e.g., city, state, region, economic micro-zone), occupational or economic data (e.g., industry, income level, education level), and so on. Information may be captured from a user profile of the user" and [0204] "The system can store profile set or database of participant information" for user profiles of a database comprising demographic data being accessed for each of the students)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the distributed learning environment with remotely located students, each associated with communication devices, and associating the identity of the student/device with audio and video communications of Peters into the system of Nelson. As Peters states in [0005] “The system's ability to gauge and indicate the emotional and cognitive state of the participants as a group can be very valuable to a teacher, lecturer, entertainer, or other type of presenter… With a large audience, the presenter cannot reasonable read the emotional cues from each member of the audience. Detecting these cues is even more difficult with remote, device-based, interactions rather than in-person interactions… the system can provide a presenter or other user with information about the overall state of the audience which the presenter otherwise would not have. For example, the system can be used to assist teachers, especially as distance learning and remote educational interactions become more common. The system can provide feedback, during instruction, about the current emotions and engagement of the students in the class, allowing the teacher determine how well the instruction is being received and to better customize and tailor the instruction to meet students' needs”. As Peters states, distance learning is becoming more common. One of ordinary skill in the art would have recognized that the effectiveness evaluation capabilities of Peters would have aided teachers of Nelson who have large class sizes and have students who are at least a mix of in-person and remote learners. Therefore, it would have been obvious to incorporate these evaluation capabilities from Peters into Nelson to aid teachers in conducting productive classes, especially as remote and distance learning proliferates.
Regarding the demographic data being accessed from a database instead of received from a user: since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of the receiving of student demographic data from a database, as taught by Peters, for the receiving of demographic data from a user of the system, as taught by Nelson.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Regarding claim 2, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. Nelson further teaches:
wherein the at least one processor is further programmed to: receive information from the plurality of communication devices about a plurality of live educational processes being experienced in the distributed education environment (see [0062] "the system can save a plurality of conversations previously recorded for each group…In addition, the system 10 can store and recall various information graphics for each conversation" for receiving information from a plurality of classes via the sources/communication devices taught in [0059]. The educational environment is distributed in the combination of Nelson and Peters)
and aggregate one or more educational effectiveness indicators and the plurality of reports across the plurality of live educational processes (see [0062] "the system 10 can store and recall various information graphics for each conversation, and information graphics summarizing multiple conversations for each group. For example, the system can compare the most recent conversation of a specific group to the average results from all of the previously recorded discussions for the specific group, a different group, or average of different groups" for aggregating the participation results of time spoken and number of times spoken as discussed in [0059] across multiple meetings of the group as an average)
Regarding claim 7, Nelson teaches:
A method for managing education processes in a (see [0059], [0065], and [0085] for receiving input from a plurality of sources including the number of times a student speaks, receiving demographic information, and reporting out the duration and number of times each student has spoken in the class)
Regarding the remaining limitations of claim 7, see the rejection of claim 1 above.
Regarding claim 8, the combination of Nelson and Peters teaches all of the limitations of claim 7 above. Regarding the limitations introduced in claim 8, see the rejection of claim 2 above.
Regarding claim 19, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. Nelson further teaches:
wherein the at least one processor is further programmed to: analyze the audio signals to generate a record of the live educational process, wherein the at least one processor is programmed to extract the one or more educational effectiveness indicators based on the record (see [0059] “the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time” for the system analyzing the audio signals to identify each speaker and extracting effectiveness indicators like the speaking time duration and number of times spoken of each speaker)
Regarding claim 20, the combination of Nelson and Peters teaches all of the limitations of claim 19 above. While Nelson teaches automatically recording the time each speaker is speaking in [0059], Nelson does not explicitly teach the record of the educational process including a transcript. However, Peters further teaches:
wherein the record includes a transcript (see [0313] “Various types of data can be collected for a communication session, such as (1) a transcript of the conversation (entire or key-word summary)…and (4) speaking times for participants”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the recording of a complete transcript of the educational process as taught by Peters in the system of Nelson, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Specifically, Peters explicitly teaches that a transcript can be recorded along with the recording of speaking times of each participant, which Nelson already teaches are being captured. Accordingly, one of ordinary skill in the art would have recognized that the capturing of the transcript alongside the speaking times of Nelson would have had predictable results and performed the same function as they had done separately.
Regarding claim 22, the combination of Nelson and Peters teaches all of the limitations of claim 7 above. Nelson teaches the generation of a plurality of reports as discussed above in Figs. 22, 28 and [0085] and [0093]. However, Nelson’s reports in these Figures and paragraphs teach a summary of a particular communication session and overview of a particular participant. Nelson does not explicitly teach the reports indicating a trend in educational effectiveness indicators over time. However, Peters further teaches:
wherein the plurality of reports includes an indication of a trend in the one or more educational effectiveness indicators over time (see [0262] for tracking trends in engagement during a session and providing warnings that the engagement levels will be undesirable within the next 5-10 minutes if trends continue. Also see [0272] for reports showing trends during a presentation and [0296] and [0378] for recognizing patterns over the course of multiple communication sessions)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the reports indicating trends in educational effectiveness indicators over time of Peters into the system of Nelson. As Peters states in [0262] “the system may detect a progression of the distribution of emotional or cognitive states from balanced among various categories toward a large cluster of low-engagement states, and can provide an alert or warning that the audience may reach an undesirable distribution or engagement level in the next 5 minutes if the trend continues” and [0272] “As the communication session proceeds, those curves are extended, allowing the presenter to see the changes over time and the trends in emotional or cognitive states among the audience…As a result, the user interface 1600 can show how the audience is responding to, and has responded to, different content of the communication session”. One of ordinary skill in the art would have recognized that the incorporation of the Peters reports into Nelson would have allowed a presenter/teacher in the combined system to have advance warning of flagging engagement and be able to alter the lesson/session to include more features that are shown to increase engagement. Thus, the combined system would allow teachers/presenters to incorporate immediate feedback into the session to keep engagement higher than in Nelson alone.
Regarding claim 24, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. As discussed above regarding claim 20, Nelson does not explicitly teach generating a transcript. Accordingly, Nelson also does not explicitly teach the limitations of claim 24 of generating a transcript of at least a portion of the live educational process and associating the identifying information of the plurality of communication devices with text in the transcript. However, Peters further teaches:
wherein associating the identifying information of the plurality of communication devices with components of the audio communications and of the video communications includes: generating a transcript of at least a portion of the live educational process (see [0313] “Various types of data can be collected for a communication session, such as (1) a transcript of the conversation (entire or key-word summary), (2) facial expression data, emotional responses, cognitive attributes, etc., (3) voice stress analysis, and (4) speaking times for participants” for the generation of a transcript as part of the analysis of audio and visual data)
and associating the identifying information of the plurality of communication devices with text in the transcript (see [0323] “Elements 1902a-1902n represent the facial and/or emotion data gathered for n different individuals during a communication session. Each data gathering element 1902 represents collection of some or all of the data dimensions shown in element 1904, such as a transcript (e.g., at least key word or topic)… In general, the collected data can include, for example, speaking times, words (e.g., a full transcript or keyword summary)” for the association of words spoken in the transcript to their corresponding individual n. See [0084]-[0085] above for identifying the individual based on the data stream from their respective device)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the recording of a complete transcript of the educational process as taught by Peters in the system of Nelson, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Specifically, Peters explicitly teaches that a transcript can be recorded along with the recording of speaking times of each participant, which Nelson already teaches are being captured. Accordingly, one of ordinary skill in the art would have recognized that the capturing of the transcript alongside the speaking times of Nelson would have had predictable results and performed the same function as they had done separately.
Regarding claim 25, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. Nelson further teaches:
wherein associating the identifying information of the plurality of communication devices with components of the audio communications (see [0059] “The systems and methods for recording, documenting, and visualizing group discussions allows at least one user to quickly and easily input information relating to the conversation in real time and view the input information in an easy-to-follow visualization in real time… the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time”)
As discussed above, Nelson does not explicitly teach video signals from remote participants. Nelson therefore does not explicitly teach associating the identifying information of the communication devices with components of the video communications in real-time. However, Peters further teaches:
wherein associating the identifying information of the plurality of communication devices with components of the audio communications and of the video communications is performed in real-time or near real-time with regard to the live educational process (see [0066] “the participation of each endpoint conference participant is actively reviewed in real time by way of facial and audio recognition technology”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the real-time processing of video communications of the distributed learning environment of Peters into the system of Nelson. As Peters states in [0005] “The system's ability to gauge and indicate the emotional and cognitive state of the participants as a group can be very valuable to a teacher, lecturer, entertainer, or other type of presenter… With a large audience, the presenter cannot reasonable read the emotional cues from each member of the audience. Detecting these cues is even more difficult with remote, device-based, interactions rather than in-person interactions… the system can provide a presenter or other user with information about the overall state of the audience which the presenter otherwise would not have. For example, the system can be used to assist teachers, especially as distance learning and remote educational interactions become more common. The system can provide feedback, during instruction, about the current emotions and engagement of the students in the class, allowing the teacher determine how well the instruction is being received and to better customize and tailor the instruction to meet students' needs”. As Peters states, distance learning is becoming more common. One of ordinary skill in the art would have recognized that the video communication capabilities of Peters would have aided teachers of Nelson whose students include at least a mix of in-person and remote learners. As Peters states in [0012], video and audio analysis can be used together to evaluate participation and engagement of students. Therefore, it would have been obvious to incorporate video communication capabilities from Peters into Nelson to aid teachers in conducting productive classes, especially as remote and distance learning proliferates.
Regarding claim 26, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. While Nelson explicitly teaches the gathering of data regarding individual participants in real time in [0059], Nelson does not explicitly teach that the reports are generated in real time or near real time with regard to the live educational process. However, Peters further teaches:
wherein generating the plurality of reports is performed in real-time or near real-time with regard to the live educational process (see [0066]-[0067] and [0110] “an embodiment of the system can include endpoints or participant devices that communicate with one or more servers to perform analysis of participants' emotions, engagement, participation, attention, and so on, and deliver indications of the analysis results, e.g., in real-time along with video conference data or other communication session data and/or through other channels, such as in reports, dashboards, visualizations (e.g., charts, graphs, etc.)”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the real-time generation of reports of the distributed learning environment of Peters into the system of Nelson. As Peters states in [0005] “The system can provide feedback, during instruction, about the current emotions and engagement of the students in the class, allowing the teacher determine how well the instruction is being received and to better customize and tailor the instruction to meet students' needs”. Therefore, by providing feedback to an instructor in real time while a class is ongoing, the combined system would allow a teacher to adapt their lesson on the fly to better engage students.
Regarding claim 27, the combination of Nelson and Peters teaches all of the limitations of claim 7 above. Regarding the limitations introduced in claim 27, the claim requires that at least one of the processes introduced in claim 25 (associating the identifying information of communication devices with components of audio and video communications in real time or near real time) or in claim 26 (generating reports in real time or near real time) be performed. Accordingly, regarding the limitations introduced in claim 27, see the rejections of claims 25 and 26 above.
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson in view of Peters and Christ et al. (U.S. Pre-Grant Publication No. 2020/0020242, hereafter known as Christ).
Regarding claim 4, the combination of Nelson and Peters teaches all of the limitations of claim 1 above. Nelson further teaches:
wherein the at least one processor is further programmed to: receive information from the plurality of communication devices about a plurality of live educational processes (see [0062] "the system can save a plurality of conversations previously recorded for each group…In addition, the system 10 can store and recall various information graphics for each conversation...the system can compare the most recent conversation of a specific group to the average results from all of the previously recorded discussions for the specific group, a different group, or average of different groups" for receiving information from a plurality of class sessions, and from a variety of different class groups, via the sources taught in [0059])
While Nelson teaches aggregating effectiveness indicators across multiple class sessions of a group as discussed above regarding claim 2, Nelson does not explicitly teach receiving information across classes of an educational institution and aggregating these indicators across the classes of an educational institution. While Peters implies the use of classroom monitoring across a school, college, or university in [0154] and [0183], Peters likewise does not explicitly teach aggregating effectiveness indicators from classes across an educational institution. Christ teaches:
receive information from the plurality of communication devices about a plurality of live educational processes across an educational institution…and aggregate one or more educational effectiveness indicators and the plurality of reports across the plurality of live educational processes (see Fig. 4 and [0079] "referring to FIGS. 4-5, an example group screening report 400 is shown. The group screening report 400 includes a section 404 that includes various summary information about a group of students, such as students in a particular school, school district... The group screening report 400 also includes a control 402 that allows the user to modify the demographics of the students shown in the report 400" for a report that aggregates individual student information of Fig. 3 and [0072] across an entire school or school district. In combination with Nelson, the individual and class level data of Nelson can be aggregated to present school-wide results)
One of ordinary skill in the art would have recognized that applying the known technique of aggregating student assessment information across an educational institution of Christ to the combination of Nelson and Peters would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Christ to the teaching of the combination of Nelson and Peters would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such aggregation of student assessment information across an educational institution. Further, applying aggregation of student assessment information across an educational institution to the combination of Nelson and Peters would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more efficient presentation of lesson participation indicators to school administrators. As Peters teaches in [0387], administrators can access parameters from a set of class sessions to determine how students respond to different factors during class, to evaluate teacher performance, and to determine how best to reach students. By incorporating the school-wide aggregation and reporting of Christ, the administrator of Peters can view data for their entire school to perform teacher evaluation and educational effectiveness analysis without needing to select various groups of students as in Peters. One of ordinary skill in the art would have recognized that school-level aggregation would have had predictable results while providing this ease-of-use improvement to the administrator.
Regarding claim 10, the combination of Nelson and Peters teaches all of the limitations of claim 7 above. Regarding the limitations introduced in claim 10, see the rejection of claim 4 above.
Claims 5-6, 11-14, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson in view of Peters, Christ, and Bixler et al. (U.S. Patent No. 11,805,130; hereafter known as Bixler).
Regarding claim 5, the combination of Nelson, Peters, and Christ teaches all of the limitations of claim 4 above. The database comprising student demographic information taught in Peters is a database of the video conference management system and is not explicitly taught as a registration database of the educational institution. Accordingly, the combination of Nelson, Peters, and Christ does not explicitly teach the database with demographic information being a registration database of the educational institution. Bixler teaches:
wherein the at least one database of demographic information includes a registration database of the educational institution (see Col. 10 lines 19-29 “the student database 34 may store electronic student profiles for each student associated with a particular university… In particular, the student profile may include fields for… a norm group identification for which subset of a population the student belongs” and Col. 19 lines 10-22 “a norm group is established for…each of a set of demographics, for each year in college, for each gender” for a student database of a particular university storing demographic information of students that can be accessed)
Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself; that is, in the substitution of the student demographic database of the educational institution, as taught by Bixler, for the student demographic database of the video conference management system of the combination of Nelson, Peters, and Christ.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Regarding claim 6, the combination of Nelson, Peters, and Christ teaches all of the limitations of claim 4 above. The database comprising student demographic information taught in Peters is a database of the video conference management system and is not explicitly taught as a registration database of a part of the educational institution. Accordingly, the combination of Nelson, Peters, and Christ does not explicitly teach the database with demographic information being a registration database of a part of the educational institution. Bixler teaches:
wherein the at least one database of demographic information includes a registration database of part of the educational institution (see Col. 10 lines 19-29 and Col. 19 lines 10-22 citations above regarding claim 5, also see Col. 25 lines 23-28 “While the exemplary inventive student evaluation system 2 of FIG. 9 may be described with reference to universities, the exemplary inventive student evaluation system 2 may be equally applicable to students from, e.g.….university or college departments” for the database with demographic information of students being of a department (a part of) of a university/college)
Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself; that is, in the substitution of the student demographic database of a department of the educational institution, as taught by Bixler, for the student demographic database of the video conference management system of the combination of Nelson, Peters, and Christ.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Regarding claim 11, the combination of Nelson, Peters, and Christ teaches all of the limitations of claim 10 above. Regarding the limitations introduced in claim 11, see the rejection of claim 5 above.
Regarding claim 12, the combination of Nelson, Peters, and Christ teaches all of the limitations of claim 10 above. Regarding the limitations introduced in claim 12, see the rejection of claim 6 above.
Regarding claim 13, Nelson teaches:
A non-transitory computer readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for managing education processes in a… (see [0104] "Hence aspects of the systems and methods provided herein encompass hardware and software for controlling the relevant functions. Software may take the form of code or executable instructions for causing a controller or other programmable equipment to perform the relevant steps, where the code or instructions are carried by or otherwise embodied in a medium readable by the controller or other machine. Instructions or code for implementing such operations may be in the form of computer instruction in any form (e.g., source code, object code, interpreted code, etc.) stored in or carried by any tangible readable medium")
receiving information from a plurality of sources about a live educational process being experienced in a…audio communications (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time. The system can include optional cameras, microphones, and/or audio speakers, among other recording and replay technologies" for receiving audio signals representing audio communications from a variety of communication devices of the classroom environment. See [0061] and [0065] for students and teachers being participants in the environment)
receiving a selection of a level of granularity from a source associated with the educator (see [0082] “As shown in FIG. 17, the result summary page 44 may include topic headings to navigate the user to the metrics relating to the discussion. These topics may include group analytics…individual participant analytics”, [0084] “upon selecting a specific participant, the system can visually highlight 50 the selected participant's portion of the visual representation of participation in the discussion. FIG. 19 depicts a schematic of a user interface illustrating the screen in FIG. 18 with one of the participant results highlighted to illuminate that participant's level of participation in various charts appearing on the screen”, [0093] “FIG. 28 depicts a schematic of a user interface illustrating data for a single group participant 18. A menu in the left of the screen allows the user to view the data for other participants. The screen includes a field for taking notes about the participant 18”, and [0094] “the discussion of a group of participants can be compared to various other groups of different (or partially different) groups of participants in previously recorded discussions” for the selection of a group/class-level of granularity and a selection of an individual level of granularity by the user. See [0061] for the user of the device navigating based on granularity level being a teacher. Examiner notes that Applicant’s specification [0090] explicitly considers an individual and particular class/meeting as examples of levels of granularity)
extracting one or more educational effectiveness indicators from at least the audio communications… (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time" for extracting the number of times, duration of speech from identified participants in the classroom. See [0076]-[0078] for the operation of the classroom environment including input from the moderator/teacher that there is a period of silence or chaos, multiple people speaking, small group discussions)
wherein the one or more educational effectiveness indicators include at least one of a number of audio communications by each of the one or more students during the live educational process, length of audio communications by each of the one or more students during the live educational process, number of audio interactions by each of the one or more students during the live educational process, an on/off status of respective cameras of each of the one or more students during the live educational process, or a gaze direction of each of the one or more students during the live educational process (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time" for extracting the number of times, duration of speech from identified participants in the classroom. Examiner notes that only one of the indicators listed is required to teach the limitation as a whole)
accessing at least… (see Claim 16 "wherein the controller is further configured to receive demographic information related to each of the plurality of participants, wherein the results summary includes a graphic visually representing the participation of a group of participants within a demographic" for receiving and correlating demographic information with participants. See [0065] for the receipt of gender data for each participant during group/class setup)
generating a plurality of reports about individual students of the one or more students and groups within the one or more students using the one or more educational effectiveness indicators and the demographic information, corresponding to the selected level of granularity (see Fig. 22 and [0085] "The results summary can be configured to summarize any number of metrics. For example, FIG. 22 is a schematic of a user interface illustrating data relating to the gender equity in the group discussion, including a comparison of how often and how long females and males spoke" for a report on number of times spoken and time spoken broken out by gender at the group level. See Fig. 28 and [0093] "FIG. 28 depicts a schematic of a user interface illustrating data for a single group participant 18. A menu in the left of the screen allows the user to view the data for other participants. The screen includes a field for taking notes about the participant 18" for reports on individual participants' number of times spoken and duration of time spoken (at the individual level of granularity). Examiner also notes that the icon 18 displayed in the individual report in Fig. 28 is generated according to demographic information per [0065] “Further, the icon can be gender specific for females and males. As the individuals in the group are named, the software may create icons for them and display the name above the icon representing the participant 18. These participant icons may be displayed in on the user interface” so the individual reporting is also done according to demographic information)
receiving information from the plurality of sources about a plurality of live educational processes (see [0062] "the system can save a plurality of conversations previously recorded for each group…In addition, the system 10 can store and recall various information graphics for each conversation...the system can compare the most recent conversation of a specific group to the average results from all of the previously recorded discussions for the specific group, a different group, or average of different groups" for receiving information from a plurality of class sessions, and from a variety of different class groups, via the sources taught in [0059])
As discussed above, Nelson teaches the speaking occurrence and duration data being collected in an in-person classroom environment, and while Nelson contemplates the utility of its invention in a video class meeting in [0013], Nelson does not explicitly teach the distributed classroom environment in which a teacher is remotely located from students who each have communication devices associated with them. Nelson accordingly does not explicitly teach the reception of video signals and the extraction of educational effectiveness indicators from the video signals. While Nelson teaches the receiving of demographic/gender data of the participants in the class, Nelson also does not explicitly teach the demographic data being received from at least one database. Finally, Nelson also does not explicitly teach aggregating educational effectiveness indicators and reports across an educational institution’s processes and that the database of demographic information includes a registration database of the educational institution. However, Peters teaches:
receiving information from a plurality of sources about a live educational process being experienced in a distributed education environment where at least an educator is remotely located from the one or more students and the distributed education environment facilitates real-time communication between the educator and the one or more students including at least audio communications and video communications (see [0115] "The monitoring of emotion and feedback about emotion can be performed during remote interactions, shared-space interactions, or hybrid interactions having both local and remote participants...examples of remote interactions include various forms of video conferencing, such as...streamed lectures... Examples of shared-space interactions include in-class instruction in school" and [0145] "This information can be output to a teacher's device, for example, overlaid or incorporated into a video feed showing a class, with the emotional states of different students indicated near their faces. The same information can be provided in remote learning (e.g., electronic learning or e-learning) scenarios, where the emotional states and engagement of individuals are provided in association with each remote participant's video feed" for the education environment being distributed with students communicating with teachers over video feed and located remotely. See [0068] "Each of the endpoints 12a-f communicates a source of audio and/or video and transmits a resulting media stream to the moderator module 20", Fig. 3, [0081], and [0109] for each of the endpoint devices associated with each of the participants/students communicating a stream of video and audio signals to the moderator/teacher device. See [0008] and [0118] for the conferencing being performed in real-time)
extracting one or more educational effectiveness indicators from the audio and video signals in a distributed education environment (see [0072] “module 110a can determine a frequency and duration that the participant is speaking. Similarly, the module 110a can determine a frequency and duration that the participant is listening. The module 110b determines eye gaze direction of the participant and head position of the participant, allowing the module to determine a level of engagement of the participant at different times during the video conference. This information, with the information about when the user is speaking, can be used by the modules 110a, 110b to determine periods when the participant is actively listening (e.g., while looking toward the display showing the conference) and periods when the user is distracted and looking elsewhere” for educational effectiveness indicators of speaking time and gaze direction. Also see [0073])
accessing at least one database of demographic information about the one or more students and correlating the demographic information with the one or more students (see [0323] "Each data gathering element 1902 represents collection of some or all of the data dimensions shown in element 1904, such as a...demographic attributes (e.g., age, gender, ethnicity estimation)...geographic location (e.g., city, state, region, economic micro-zone), occupational or economic data (e.g., industry, income level, education level), and so on. Information may be captured from a user profile of the user" and [0204] "The system can store profile set or database of participant information" for user profiles of a database comprising demographic data being accessed for each of the students)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the distributed learning environment with remotely located students, each associated with communication devices, and the association of the identity of the student/device with audio and video communications, as taught by Peters, into the system of Nelson. As Peters states in [0005] “The system's ability to gauge and indicate the emotional and cognitive state of the participants as a group can be very valuable to a teacher, lecturer, entertainer, or other type of presenter… With a large audience, the presenter cannot reasonable read the emotional cues from each member of the audience. Detecting these cues is even more difficult with remote, device-based, interactions rather than in-person interactions… the system can provide a presenter or other user with information about the overall state of the audience which the presenter otherwise would not have. For example, the system can be used to assist teachers, especially as distance learning and remote educational interactions become more common. The system can provide feedback, during instruction, about the current emotions and engagement of the students in the class, allowing the teacher determine how well the instruction is being received and to better customize and tailor the instruction to meet students' needs”. As Peters states, distance learning is becoming more common. One of ordinary skill in the art would have recognized that the effectiveness evaluation capabilities of Peters would have aided teachers of Nelson with large class sizes whose students include at least a mix of in-person and remote learners. Therefore, it would have been obvious to incorporate these evaluation capabilities from Peters into Nelson to aid teachers in conducting productive classes, especially as remote and distance learning proliferates.
Regarding the demographic data being accessed from a database instead of received from a user, since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself; that is, in the substitution of receiving student demographic data from a database, as taught by Peters, for receiving demographic data from a user of the system, as in Nelson.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
While Nelson teaches aggregating effectiveness indicators across multiple class sessions of a group as discussed above regarding claim 2, Nelson does not explicitly teach receiving information across classes of an educational institution and aggregating these indicators across the classes of an educational institution. While Peters implies the use of classroom monitoring across a school, college, or university in [0154] and [0183], Peters likewise does not explicitly teach aggregating effectiveness indicators from classes across an educational institution. The combination of Nelson and Peters also does not explicitly teach that the database of demographic information includes a registration database of the educational institution. Christ teaches:
receiving information from the plurality of sources about a plurality of live educational processes across an educational institution…and aggregating one or more educational effectiveness indicators and the plurality of reports across the plurality of live educational processes (see Fig. 4 and [0079] "referring to FIGS. 4-5, an example group screening report 400 is shown. The group screening report 400 includes a section 404 that includes various summary information about a group of students, such as students in a particular school, school district... The group screening report 400 also includes a control 402 that allows the user to modify the demographics of the students shown in the report 400" for a report that aggregates individual student information of Fig. 3 and [0072] across an entire school or school district. In combination with Nelson, the individual and class level data of Nelson can be aggregated to present school-wide results)
One of ordinary skill in the art would have recognized that applying the known technique of aggregating student assessment information across an educational institution of Christ to the combination of Nelson and Peters would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Christ to the teaching of the combination of Nelson and Peters would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such aggregation of student assessment information across an educational institution. Further, applying aggregation of student assessment information across an educational institution to the combination of Nelson and Peters would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more efficient presentation of lesson participation indicators to school administrators. As Peters teaches in [0387], administrators can access parameters from a set of class sessions to determine how students respond to different factors during class, to evaluate teacher performance, and to determine how best to reach students. By incorporating the school-wide aggregation and reporting of Christ, the administrator of Peters can view data for their entire school to perform teacher evaluation and educational effectiveness analysis without needing to select various groups of students as in Peters. One of ordinary skill in the art would have recognized that school-level aggregation would have had predictable results while providing this ease-of-use improvement to the administrator.
The database comprising student demographic information taught in Peters is a database of the video conference management system and is not explicitly taught as a registration database of the educational institution. Accordingly, the combination of Nelson, Peters, and Christ does not explicitly teach the database with demographic information being a registration database of the educational institution. Bixler teaches:
wherein the at least one database of demographic information includes a registration database of the educational institution or a part of the educational institution (see Col. 10 lines 19-29 “the student database 34 may store electronic student profiles for each student associated with a particular university… In particular, the student profile may include fields for… a norm group identification for which subset of a population the student belongs” and Col. 19 lines 10-22 “a norm group is established for…each of a set of demographics, for each year in college, for each gender” for a student database of a particular university storing demographic information of students that can be accessed)
Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself: that is, the substitution of the student demographic database of the educational institution, as taught by Bixler, for the student demographic database of the video conference management system of the combination of Nelson, Peters, and Christ.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Regarding claim 14, the combination of Nelson, Peters, Christ, and Bixler teaches all of the limitations of claim 13 above. Nelson further teaches:
the method further comprising: receiving information from the plurality of sources about a plurality of live educational processes being experienced in the distributed education environment (see [0062] "the system can save a plurality of conversations previously recorded for each group…In addition, the system 10 can store and recall various information graphics for each conversation" for receiving information from a plurality of classes via the sources/communication devices taught in [0059]. The educational environment is distributed in the combination of Nelson, Peters, Christ, and Bixler)
and aggregating one or more educational effectiveness indicators and the plurality of reports across the plurality of live educational processes (see [0062] "the system 10 can store and recall various information graphics for each conversation, and information graphics summarizing multiple conversations for each group. For example, the system can compare the most recent conversation of a specific group to the average results from all of the previously recorded discussions for the specific group, a different group, or average of different groups" for aggregating the participation results of time spoken and number of times spoken as discussed in [0059] across multiple meetings of the group as an average)
Regarding claim 23, the combination of Nelson, Peters, Christ, and Bixler teaches all of the limitations of claim 13 above. Regarding the limitations introduced in claim 23, Nelson further teaches:
wherein extracting the one or more educational effectiveness indicators is performed on at least one audio signal including the audio communications (see [0059] "the system can automatically identify the different participants by voice, wherein the system can automatically record and log the duration of time spoken, number of times spoken, and/or content of each speaker in real time. The system can include optional cameras, microphones, and/or audio speakers, among other recording and replay technologies" for receiving audio signals representing audio communications from a variety of communication devices including microphones and recording devices that are then automatically analyzed to extract educational effectiveness indicators including the speaking duration of each speaker in a session)
While Nelson contemplates the utility of its invention in a video class meeting in [0013], Nelson does not explicitly teach the reception of video signals or the extraction of educational effectiveness indicators from the video signals. However, Peters further teaches:
wherein extracting the one or more educational effectiveness indicators is performed on at least one audio signal including the audio communications and at least one video signal including the video communications (see [0072] “module 110a can determine a frequency and duration that the participant is speaking. Similarly, the module 110a can determine a frequency and duration that the participant is listening. The module 110b determines eye gaze direction of the participant and head position of the participant, allowing the module to determine a level of engagement of the participant at different times during the video conference. This information, with the information about when the user is speaking, can be used by the modules 110a, 110b to determine periods when the participant is actively listening (e.g., while looking toward the display showing the conference) and periods when the user is distracted and looking elsewhere” for educational effectiveness indicators of speaking time and gaze direction. Also see [0073])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the system of Nelson the real-time extraction of educational effectiveness indicators from video communications of the distributed learning environment, as taught by Peters. As Peters states in [0005] “The system's ability to gauge and indicate the emotional and cognitive state of the participants as a group can be very valuable to a teacher, lecturer, entertainer, or other type of presenter… With a large audience, the presenter cannot reasonable read the emotional cues from each member of the audience. Detecting these cues is even more difficult with remote, device-based, interactions rather than in-person interactions… the system can provide a presenter or other user with information about the overall state of the audience which the presenter otherwise would not have. For example, the system can be used to assist teachers, especially as distance learning and remote educational interactions become more common. The system can provide feedback, during instruction, about the current emotions and engagement of the students in the class, allowing the teacher determine how well the instruction is being received and to better customize and tailor the instruction to meet students' needs”. As Peters states, distance learning is becoming more common. One of ordinary skill in the art would have recognized that the video communication capabilities of Peters would have aided teachers of Nelson whose students include at least a mix of in-person and remote learners. As Peters states in [0012], video and audio analysis can be used together to evaluate participation and engagement of students. Therefore, it would have been obvious to incorporate the video communication capabilities of Peters into Nelson to aid teachers in conducting productive classes, especially as remote and distance learning proliferates.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Bai et al. (Chinese Publication No. 111967350) teaches monitoring a student’s gaze direction during a class.
Elgart et al. (U.S. Pre-Grant Publication No. 2013/0231980) teaches an administrator requesting academic reports by grade level.
Harris et al. (U.S. Pre-Grant Publication No. 2019/0385471) teaches accessing performance reporting data at an instructor and school level.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C MORONEY whose telephone number is (571)272-4403. The examiner can normally be reached Mon-Fri 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux, can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.C.M./Examiner, Art Unit 3628
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626