DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/10/2025 has been entered. Claims 1-3, 6-9, 11-13, 16-21, 23, and 25-27 are pending. Claims 4-5, 10, 14-15, 22, and 24 have been cancelled.
Claim Objections
Claim 11 is objected to because of the following informalities:
Claim 11 recites a repeated term, i.e., “present, by the machine learning model, the educational challenge to the group of two or more students student through a student interface”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 6-9, 11-13, 16-21, 23, and 25-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Is the claimed invention a statutory category of invention?
Claims 1, 11 and 19 are directed to a method or system for generating a challenge (Step 1, Yes).
Step 2A, Prong 1: Does the claim recite an abstract idea?
The limitation of steps: … receiving social media data from a social media network for a plurality of students associated with an instructor; saving the social media data with demographic data in a student profile for each of the plurality of students in a non-volatile data repository; training a machine learning model to generate educational challenges based on profiles of the plurality of students; selecting a group of two or more students with diverse backgrounds from the plurality of students; generating, using the machine learning model, an educational challenge based on a curriculum goal provided by the instructor, wherein the educational challenge addresses the curriculum goal in a context that is relatable to all students in the group of two or more students; distributing, by the machine learning model, the educational challenge to the group of two or more students through a content delivery platform; in response to distributing the educational challenge to the group of two or more students, collecting feedback regarding the educational challenge from engagement with the educational challenge by a student in the group of two or more students; and fine-tuning the machine learning model based on the feedback regarding the educational challenge … as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components (Claim 11). This type of mental process can be practically performed in the human mind, such as by a teacher and a mathematician adjusting parameters in the machine learning model. The mere nominal recitation of at least one processor performing these steps does not take the claim limitation outside of the mental processes grouping. Thus, the claim recites a mental process (Step 2A, Prong 1: yes).
Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Per the 2019 Revised Patent Subject Matter Eligibility Guidance, if a claim as a whole integrates the recited judicial exception into a practical application of that exception, the claim is not "directed to" a judicial exception. Alternatively, a claim that does not integrate a recited judicial exception into a practical application is directed to the exception. Evaluating whether a claim integrates an abstract idea into a practical application is performed by a) identifying whether there are any additional elements recited in the claim beyond the abstract idea, and b) evaluating those additional elements individually and in combination to determine whether they integrate the abstract idea into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. Exemplary considerations indicative that an additional element (or combination of elements) may have, or has not, integrated the judicial exception into a practical application are set forth in the 2019 PEG.
With respect to the instant claims, claim 11 recites the additional elements of: at least one processor; and at least one memory coupled to the at least one processor that includes instructions that, when executed by the at least one processor, cause the system to perform the recited functions. Claims 1 and 19 do not require the claimed methods to be performed by any machine or statutory product. It is particularly noted that the use of a computing device "as a tool" to perform an abstract method and steps that only amount to extra-solution activity are indicated in the 2019 PEG as examples that an additional element has not been integrated into a practical application. Even in combination, the recited additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits, such as an improvement to a computing system, on practicing the abstract idea (Step 2A, Prong 2: no).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
The Office determines whether the claim adds a specific limitation beyond the judicial exception that is not well-understood, routine, and conventional activity in the field, or, alternatively, whether the claim simply appends well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Patent Eligibility Guidance, 84 Fed. Reg. at 56. Claim 11 recites the additional elements of: at least one processor; and at least one memory coupled to the at least one processor that includes instructions that, when executed by the at least one processor, cause the system to perform the recited functions. Claims 1 and 19 do not require the claimed methods to be performed by any machine or statutory product, as set forth above for Step 2A, Prong 2. Regarding these limitations: Applicant's specification describes these features only in a highly generic manner, stating that "A general-purpose processor can be a microprocessor, but in the alternative, the processor can be any processor, controller, microcontroller, or state machine" (specification, page 14, para. [0060]). There is no indication in the specification that Applicant has achieved an advancement or improvement in computing technology. The limitations "at least one processor; at least one memory coupled to the at least one processor" are recited at a high level of generality for generating a mathematical model. Dependent claims 2-3, 6-9, 12-13, 16-18, 20-21, 23, and 25-27 inherit the deficiencies of their respective parent claims through their dependencies and do not recite additional limitations sufficient to direct the claims to more than the claimed abstract idea, and are thus rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-9, 11-13, 16-21, 23, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Mallar et al. (US 2023/0055847 A1) in view of Essafi et al. (US 2017/0124894 A1).
Re claims 1, 11, 19:
Mallar teaches 1. A method (Mallar, Abstract), comprising:
training a machine learning model to generate educational challenges based on profiles of a plurality of students at an educational institution (Mallar, [0071], “a training module 227 configured to train the AI model”; [0033], “providing educational content”; [0056], “a learner metric”; [0054], “demographic data include school category, school tier, school location, grade level, learning goal, or combinations thereof. The term "learning goal" as used herein refers to a target outcome desired from the learning session … learning goals may include: studying for a particular grade (e.g., grade VIth, grade Xth, grade XIIth, and the like”; [0053]; [0065], “learner demographics”);
selecting a group of two or more students with diverse backgrounds from the plurality of students (Mallar, [0033], “the plurality of learners 12 may be located at different geographical locations while engaging in the online interactive learning session and may belong to the same or different demographics”; [0069], “find the right mix of learners from different groups to which they are currently assigned such that they complement each other's learning”; [0082]);
generating, using the machine learning model, an educational challenge based on a curriculum and a context or interest relevant to all students in the group of two or more students (Mallar, [0065], “learner group parameters and instructor group parameters include learner engagement scores, learner demographics, learner performance metrics, learner-instructor assistant rapport metrics and the like”; [0009], “The group optimizer is configured to dynamically reassign one or more learners of the plurality of learners to an optimized set of groups, based on an AI model, the plurality of learner features, the plurality of group parameters, and the learner engagement data”; [0069]);
presenting, by the machine learning model, the educational challenge to the group of two or more students through a student interface (Mallar, [0033], “the interaction session is aimed at providing educational content”; [0041], “Examples of written content include alpha-numeric text data, graphs, figures, scientific notations, gifs, and videos”);
in response to presenting the educational challenge to the group of two or more students, collecting feedback regarding the educational challenge from engagement with the educational challenge by a student in the group of two or more students (Mallar, fig. 7, 308 - “BASED ON AN AI MODEL, THE PLURALITY OF LEARNER FEATURES, THE PLURALITY OF GROUP PARAMETERS, AND THE LEARNER ENGAGEMENT DATA”; [0038], “The data module 210 is configured to access in-session data, post-session data, class data, and learner engagement data for the plurality of learners 12”; [0056], “The term "learner engagement data" as used herein refers to a learner metric that measures, in real-time, the engagement level of each learner of the plurality of learners attending the live learning session”); and
fine-tuning the machine learning model based on the feedback regarding the educational challenge (Mallar, [0056], “learner engagement score of each learner of the plurality of learners is used to optimize the AI model used to assign the plurality of learners to an optimized set of groups”; [0060], “the engagement score generator 223 is configured to generate, in-real-time, from a trained AI model”; [0071], “a training module 227 configured to train the AI model based on the learner engagement data, as described herein earlier … all the past data from the plurality of learners 12 is stored and used to train the AI model. The training module 227 may be further configured to train the AI model based on one or more additional suitable data, not described herein. In some embodiments, the training module 227 is configured to train the AI model at defined intervals, e.g., weekly, bi-weekly, fortnightly, monthly, etc. In some other embodiments, the training module 227 is configured to train the AI model continuously in a dynamic manner”).
11. A system, comprising:
at least one processor;
at least one memory coupled to the at least one processor that includes instructions that, when executed by the at least one processor, cause the system to:
train a machine learning model to generate educational challenges based on profiles of a plurality of students at an educational institution;
select a group of two or more students with diverse backgrounds from the plurality of students;
generate, using the machine learning model, an educational challenge based on a curriculum goal provided by an instructor and a context or interest relevant to all students in the group of two or more students;
present, by the machine learning model, the educational challenge to the group of two or more students student through a student interface;
in response to presenting the educational challenge to the group of two or more students collect feedback regarding the educational challenge from engagement with the educational challenge by a student in the group of two or more students; and
fine-tune the machine learning model based on the feedback regarding the educational challenge (See claim 1 rejection above).
19. A method, comprising:
training a machine learning model to generate educational challenges based on profiles of the plurality of students;
selecting a group of two or more students with diverse backgrounds from the plurality of students;
generating, using the machine learning model, an educational challenge based on a curriculum goal provided by the instructor, wherein the educational challenge addresses the curriculum goal in a context that is relatable to all students in the group of two or more students;
distributing, by the machine learning model, the educational challenge to the group of two or more students through a content delivery platform;
in response to distributing the educational challenge to the group of two or more students, collecting feedback regarding the educational challenge from engagement with the educational challenge by a student in the group of two or more students; and
fine-tuning the machine learning model based on the feedback regarding the educational challenge (See claim 1 rejection above).
Mallar does not explicitly disclose generating, using the machine learning model, an educational challenge based on a curriculum goal provided by an instructor.
Essafi et al. (US 2017/0124894 A1) teaches systems and methods for education instrumentation that can include one or more servers configured to generate a plurality of models for modeling various aspects of an education process using training data related to academic performance of students (Essafi, Abstract). Essafi teaches generating, using the machine learning model, an educational challenge based on a curriculum goal provided by an instructor (Essafi, fig. 11; [0100], “a teacher plans lessons for 100% of the curriculum”; [0103], “which can be a deterministic algorithm, and process 2, which can include a machine learning process”; [0089], “The education instrumentation platform (e.g., as shown in FIG. 3) can provide one or more client applications running on client devices 102 to interact with the EI system 30”). Therefore, in view of Essafi, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method described in Mallar, by providing the curriculum goal (lesson plan) as taught by Essafi, in order to establish a plurality of learning objectives set forth by the teacher according to education standards (Essafi, [0080]; Abstract).
Re claim 19: Mallar does not explicitly disclose receiving social media data from a social media network for a plurality of students associated with an instructor.
Essafi teaches receiving social media data from a social media network for a plurality of students associated with an instructor; saving the social media data with demographic data in a student profile for each of the plurality of students in a non-volatile data repository; (Essafi, [0069], “The data collector 304 can receive other data, such as surveys (e.g. student surveys, community surveys, parents feedback), or other manually input data (e.g., spreadsheets), social media data from social networks, or other online data (e.g., from teacher or school rating websites)”). Therefore, in view of Essafi, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by providing the social media data as taught by Essafi, since Essafi suggests that the analysis module can analyze the collected data (e.g., the preprocessed data stored in database) to generate a plurality of models that represent one or more education processes. The generated models can include mathematical or statistical models that simulate the impact or effect of various factors or variables in the collected data on various learning outcomes … a generated model can illustrate the inter-dependencies between factors such as student behavior, student's social, cultural or economic background, parents' educational level, family structure, school size, school resources, class size, student-teacher ratio, teacher qualifications, extra-curriculum activities, and how these factors affect learning outcome in math.
Re claims 2, 12:
Mallar does not explicitly disclose social media data, nor does it disclose financial information. Essafi teaches the missing features: 2. The method of claim 1, further comprising generating the profiles of the plurality of students, wherein generating the profiles of the plurality of students comprises: acquiring student information, teacher information, academic records, and financial information for the plurality of students from a school database of the educational institution; collecting social media data from a social media network for the plurality of students; and saving the student information, the teacher information, the academic records, the financial information, and the social media data with demographic data in a corresponding student profile for each student in a student profile database. 12. The system of claim 11, wherein the instructions further cause the system to: acquire student information, teacher information, academic records, and financial information for the plurality of students from a school database of the educational institution; collect social media data from a social media network for the plurality of students; save the student information, the teacher information, the academic records, the financial information, and the social media data with demographic data in a corresponding student profile for each student in a student profile database (Essafi, [0069], “The data collector 304 can receive other data, such as surveys (e.g. student surveys, community surveys, parents feedback), or other manually input data (e.g., spreadsheets), social media data from social networks, or other online data (e.g., from teacher or school rating websites)”; [0063], “The data collector 304 can receive student information data (e.g., name, ID, age, gender, parents' education level(s), parents' occupations, social/cultural/economic background information, academic performance, behavior, attendance, etc.), class or grade information data (e.g., class size(s), teacher-student ratio, extra curriculum activities, etc.), books' information data (e.g., books used in each subject), educational applications' information data (e.g., computer or mobile applications used in school), school facilities' information data, or a combination thereof from the student information system(s)”). Therefore, in view of Essafi, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by providing the social media data/economic background as taught by Essafi, since Essafi suggests that the analysis module can analyze the collected data (e.g., the preprocessed data stored in database) to generate a plurality of models that represent one or more education processes. The generated models can include mathematical or statistical models that simulate the impact or effect of various factors or variables in the collected data on various learning outcomes … a generated model can illustrate the inter-dependencies between factors such as student behavior, student's social, cultural or economic background, parents' educational level, family structure, school size, school resources, class size, student-teacher ratio, teacher qualifications, extra-curriculum activities, and how these factors affect learning outcome in math.
Claims 3, 13 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claim 1 above, and further in view of Weldemariam et al. (US 2020/0045119 A1).
Re claims 3, 13 and 26:
Mallar does not explicitly disclose performing sentiment analysis on at least one of a text, an emoji, or a meme in the one or more social media posts to determine one or more sentiments regarding the educational challenge.
Weldemariam teaches that systems, methods and techniques are provided, which in various aspects may identify one or more (e.g., micro-level) mentorship or tutoring activities based on learning improvement plans or a predicted learning curve for a student or learner and post on one or more social media networks or apps (Weldemariam, Abstract). Weldemariam further teaches 3. The method of claim 1, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: monitoring social media input through the student interface, wherein the social media input comprises one or more social media posts by the student; and performing sentiment analysis on at least one of a text, an emoji, or a meme in the one or more social media posts to determine one or more sentiments regarding the educational challenge. 13. The system of claim 11, wherein the instructions further cause the system to: monitor social media input through the student interface, wherein the social media input comprises one or more social media posts by the student; and perform sentiment analysis on at least one of a text, an emoji, or a meme in the one or more social media posts to determine one or more sentiments regarding the educational challenge. 26.
The method of claim 19, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: monitoring social media input through a student interface, wherein the social media input comprises one or more social media posts by the student; and performing sentiment analysis on at least one of a text, an emoji, or a meme in the one or more social media posts to determine one or more sentiments regarding the educational challenge (Weldemariam, [0029], “Restructuring the profile may include analyzing the user historical data across multiple social media networks or apps (e.g., user posts, discussions, profile data which may include previous experience, job history, education history, previous mentorships, time available for mentoring, and other data sources)”; [0047], “video sharing website or platform including video posts (e.g., including education video), interactions with posts on such video sharing website or platform”; [0027], “generates one or more effective and optimal learning improvement strategies along engagement, performance, interaction and/or social activity metrics”). Therefore, in view of Weldemariam, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method / system described in Mallar, by providing the social media post / discussion as taught by Weldemariam, since a learning improvement strategy implemented may improve the user's (learner's) performance, engagement, interaction and/or social activities (Weldemariam, [0041]; [0052]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claims 2 and 12 above, and further in view of Bruckner et al. (US 2019/0147760 A1).
Re claims 6, 16:
Mallar does not explicitly disclose one or more images in the social media data. Bruckner teaches the missing features: 6. The method of claim 2, further comprising: extracting, using a second machine learning model, content from one or more images in the social media data; and adding the content to the social media data. 16. The system of claim 12, wherein the instructions further cause the system to: extract, using a second machine learning model, content from one or more images in the social media data; and add the content to the social media data (Bruckner, [0022], “The social media data 126 may include online articles shared by the user 102 on social media sites, social media posts generated by the user 102, pictures or other media shared by the user 102, or the like”). Therefore, in view of Bruckner, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Mallar, by incorporating posted pictures as taught by Bruckner, since Bruckner suggests that if a user tends to post pictures of historical sites or locations on their social media account(s), this information can be incorporated by the machine learning model into the user's customized user profile such that content relating to historical sites or events later presented to the user
can be annotated or otherwise enhanced with pictures of the historical sites or events in order to reinforce concepts and make the content more tailored to the user's interests or preferences (Bruckner, [0022]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claim 1 above, and further in view of Smith et al. (US 2007/0218446 A1).
Re claim 7:
Mallar does not explicitly disclose instructor approval. Smith teaches a system and method for providing a tutoring service over a network, comprising a tutoring application server on the network that is capable of serving one or more student interfaces and tutor interfaces over the network (Smith, Abstract). Smith teaches 7. The method of claim 1, further comprising: presenting, by the machine learning model, the educational challenge to the instructor through an instructor interface for approval before distributing the educational challenge; and updating the educational challenge based on instructor feedback (Smith, [0111], “If so, the selected content is loaded at 924 and the process returns to the start of event loop 920. If the tutor has not made a selection at 922, the process determines whether the student has requested content be loaded at 932. If so, a tutor approval subroutine 934 is run. If the tutor approves at 936, the selected content is loaded at 924 and the process returns to the event loop 920. If the tutor does not approve at 936, the process returns to the start of event loop 920”; fig. 9). Therefore, in view of Smith, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by providing instructor approval as taught by Smith, in order to allow an instructor to review and certify the content before presenting it to the student.
Claims 8 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claims 1 and 19 above, and further in view of Peters et al. (US 2021/0076002 A1).
Re claims 8, 27:
Mallar teaches detecting a face of the student in the image (Mallar, [0045], “the point of interest on a screen may be determined based on eye gaze detection”). Mallar does not explicitly disclose classifying an emotion exhibited by the student based on one or more features extracted from the face of the student. Peters teaches methods, systems, and apparatus, including computer-readable media storing executable instructions, for enhanced video conference management (Peters, Abstract). Peters further teaches 8. The method of claim 1, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: capturing an image of the student solving the educational challenge with a second student in the group of two or more students; detecting a face of the student in the image; and classifying an emotion exhibited by the student based on one or more features extracted from the face of the student. 27. The method of claim 19, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: capturing an image of the student solving the educational challenge with a second student in the group of two or more students; detecting a face of the student in the image; and classifying an emotion exhibited by the student based on one or more features extracted from the face of the student (Peters, [0117]; [0126], “the emotion analysis or facial analysis encompasses systems that assign scores or assign classifications based on facial features that are indicative of emotion, e.g., position of the eyebrows, shape of the mouth, and other facial features that indicate emotion, even if emotion levels are not specifically measured or output. For example, a system can detect a smile, a brow raise, a brow furrow, a frown, etc. 
as indicators of emotions and need not label the resulting detection as indicating happiness, surprise, confusion, sadness, etc.”; [0131]). Therefore, in view of Peters, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Mallar, by categorizing emotions based on facial analysis as taught by Peters, since the video analysis and emotion processing can be used to determine who is paying attention or is engaged with the lesson material (Peters, [0111]).
Claims 9, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claims 1, 11 and 19 above, and further in view of Xiong et al. (US 2020/0090536 A1).
Re claims 9, 17 and 20:
Mallar teaches detecting a face of the student in the image (Mallar, [0045], “the point of interest on a screen may be determined based on eye gaze detection”). Mallar does not explicitly disclose performing sentiment analysis on the video with a third machine learning model to determine one or more sentiments regarding the educational challenge.
Xiong teaches systems and methods involving machine learning functionality in a classroom setting to aid a teacher in certain teaching and administrative tasks (Xiong, Abstract). Xiong further teaches 9. The method of claim 1, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: capturing a video of the student interacting with the educational challenge; and performing sentiment analysis on the video with a third machine learning model to determine one or more sentiments regarding the educational challenge. 17. The system of claim 11, wherein the instructions further cause the system to: capture a video of the student interacting with the educational challenge; and perform sentiment analysis on the video with a third machine learning model to determine one or more sentiments associated with the interaction. 20. The method of claim 19, wherein collecting feedback regarding the educational challenge from engagement with the educational challenge by the student comprises: recording a video of the student interacting with the challenge; and performing sentiment analysis on the video to determine one or more sentiments regarding the educational challenge (Xiong, Abstract, “moderator device used by a teacher … The moderator device may include an image/audio input device and may execute machine learning engines running machine learning models that generate results indicative of student behavior, student comprehension and the appropriateness of media content. The teacher, using the moderator device, may provide feedback regarding the results. The member device may generate more than one type of result and one result may provide feedback with respect to the other result. 
Feedback may be used to train the machine learning model and generate improved models”; [0056], “machine learning model 55 may include a machine learning model that processes both image/audio data and content data to generate a result based on both types of data. For example, that a student is viewing inappropriate content could be one factor that may be considered along with other facial and behavior cues in determining whether a student is paying attention”). Therefore, in view of Xiong, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Mallar, by providing visual analysis on the student as taught by Xiong, in order to generate results indicative of student behavior, student comprehension and the appropriateness of media content viewed by the student (Xiong, Abstract).
Re claim 18:
18. The system of claim 17, wherein the instructions further cause the processor to report the one or more sentiments associated with the interaction to the instructor (Mallar, [0056], “the learner metric may correspond to a learner engagement score generated in real-time during the live learning session”; Xiong, Abstract; [0056]).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Mallar, Essafi and Weldemariam as applied to claim 3 above, and further in view of Ahuja et al. (US 2017/0004720 A1).
Re claim 21:
Mallar teaches 21. The method of claim 3, further comprising: correlating the one or more sentiments with engagement states spanning from an initial review of the educational challenge through a submission of a response to the educational challenge (Mallar, [0064], “The group of instructor assistants 24 may facilitate individual learner interactions either during the live learning session (e.g., responding to in-session messages, reviewing in-session assessments, etc.)”; fig. 7, 308 - “THE LEARNER ENGAGEMENT DATA”; [0038], “The data module 210 is configured to access in-session data, post-session data, class data, and learner engagement data for the plurality of learners 12”; [0056], “The term "learner engagement data" as used herein refers to a learner metric that measures, in real-time, the engagement level of each learner of the plurality of learners attending the live learning session”). Mallar does not explicitly disclose outputting the one or more sentiments and the engagement states to the instructor, thereby allowing the instructor to determine if a topic or a concept needs further exploration. Ahuja teaches that a system may include an online education platform configured to provide an online course over a network to a plurality of computing devices (Ahuja, Abstract). Ahuja teaches 21. The method of claim 3, further comprising: correlating the one or more sentiments with engagement states spanning from an initial review of the educational challenge through a submission of a response to the educational challenge (Ahuja, Abstract, “The online education platform may include a content editor may provide an authoring tool on a computing device associated with an instructor of the online course. 
The authoring tool may develop or change the education content associated with the online course”; [0085], “the interface of the authoring tool 170 provides a grading metric 160 that indicates the passing threshold for the online course 104”; [0088], “the analytic results 164 may include the percentage of learners having completed the lecture 180”; “the authoring tool” allows an instructor to modify the curriculum and curriculum outcomes); and outputting the one or more sentiments and the engagement states to the instructor, thereby allowing the instructor to determine if a topic or a concept needs further exploration (Ahuja, [0039]; [0043], “the online course analyzer 110 may be configured to determine the engagement metric(s) 158 based on the learners' tracked interactions or engagements as collected by the learner tracking unit 118”; [0046], “while viewing the engagement metrics 158 and/or the grading metrics 160, the instructor may institute a change to the education content 136, and the change may be carried out by the content editor 134 which converts the education content 136 having a first format”). Therefore, in view of Ahuja, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Mallar, by making changes to education content in response to engagement metrics as taught by Ahuja, since the instructor uses his/her computing device to connect to the online education platform to view the learners' performance and engagement via the instructor dashboard and can make adjustments to the education content via an authoring tool that interacts with a content editor, which makes any instructor edits compatible with a format of the online education platform, to increase performance and engagement (Ahuja, [0039]).
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Mallar, Essafi and Peters as applied to claim 8 above, and further in view of Wu et al. (US 2025/0005923 A1).
Re claim 23:
Mallar does not explicitly disclose the student being surprised or intrigued by the second student. Wu et al. (US 2025/0005923 A1) teaches a team monitoring system that receives data for determining user situational awareness and/or surprise for each team member (Wu, Abstract). Wu teaches 23. The method of claim 8, wherein the emotion exhibited by the student is surprised or intrigued by input provided by the second student, and further comprising: updating the machine learning model in response to determining the student is surprised or intrigued by the input provided by the second student while the student is solving the educational challenge with the second student (Wu, [0022]; [0023], “the situational awareness and/or surprise of the first team member may be partially based on the first team member's response to the actions of the second team member”; [0021]; [0030], “Likewise, the system may receive data related to factors specific to the task. Such task specific data provides the additional metric of context when assessing 204, 206 user situational awareness and/or surprise. Such analysis may include processing via machine learning, neural network algorithms”; [0027]; [0032], “a neural network”). Therefore, in view of Wu, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by assessing the mental state of the user based on the actions of the team as taught by Wu, in order to assess the emotional state of each member in the group so that the group can work more efficiently (Wu, Abstract, “A team metric of situational awareness and/or surprise is determined for the entire team based on individual user situational awareness and/or surprise correlated to discreet portions of a task”).
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Mallar and Essafi as applied to claim 1 above, and further in view of Ahuja et al. (US 2017/0004720 A1) and Wu et al. (US 2025/0005923 A1).
Re claim 25:
Mallar does not explicitly disclose outputting a summary of the student’s achievement, nor does it disclose engagement states such as surprise. The combination of Ahuja and Wu teaches 25. The method of claim 1, further comprising: in response to presenting the educational challenge, outputting a summary of what the student learned in relation to the educational challenge, the summary including a list of engagements of the student that have been classified by the machine learning model (Ahuja, figs. 13-22) as surprise (Wu, [0022]; [0023], “the situational awareness and/or surprise of the first team member may be partially based on the first team member's response to the actions of the second team member”; [0021]). Therefore, in view of Ahuja, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by providing a summary (i.e., an instructor dashboard) as taught by Ahuja, so that the instructor dashboard 154 may provide the instructor with the most relevant and interesting data regarding the online class's performance, allowing the instructor to make determinations on how well the learners are understanding the education content and to determine whether his/her online course could benefit from any adjustments (Ahuja, [0038]). In view of Wu, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Mallar, by assessing the surprise mental state as taught by Wu, in order to assess the emotional state of each member in the group so that the group can work more efficiently (Wu, Abstract, “A team metric of situational awareness and/or surprise is determined for the entire team based on individual user situational awareness and/or surprise correlated to discreet portions of a task”).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3,6-9,11-13,16-21,23 and 25-27 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant argues that "[c]oncepts that cannot practically be performed in the human mind ... are not 'mental processes.'" (Id.) Applicant further argues that the "training ... ," "generating ... ," "presenting ... ," "collecting ... ," and "fine-tuning ... " limitations recite operations that can only be reasonably completed by a computer and are specifically tailored to a computing environment, and thus that the above limitations do not recite a mental process.
The Office maintains the rejections under 35 U.S.C. 101. Turning to Applicant’s specification, see pg. 3, [0015] (“A generative machine learning model can be trained on student profiles and instructor input (e.g., curriculum) to automatically generate challenges (e.g., problems, activities, assessments) tailored to each student”). At best, the machine learning algorithm is a content selection algorithm that monitors (learns) sentiment and engagement state from a user and selects learning content. Applicant does not point out precisely how a procedure (algorithm) for monitoring student states and generating a curriculum cannot be mentally performed by a human. The fact that monitoring and selecting content can be performed more quickly or efficiently via a computer does not materially alter the patent eligibility of Applicant’s claims. In other words, mere automation of an abstract idea, without improving a technical aspect, does not confer patent eligibility.
Applicant argues the USPTO specifically noted that "the claim does not recite a mental process because the steps are not practically performed in the human mind." (Id.) Applicant contends the "training ... " and "fine-tuning ... " limitations are akin to the "training the neural network ... " limitations in Example 39.
The claim of Example 39 is found eligible because the claim uses mathematical transformation functions on an acquired set of facial images. See Example 39. These transformations can include affine transformations, for example, rotating, shifting, mirroring, or filtering transformations, such as smoothing or contrast reduction. The neural networks are then trained with this expanded training set using stochastic learning with backpropagation, a machine learning algorithm that uses the gradient of a mathematical loss function to adjust the weights of the network. Id. In other words, the claim transforms the data to train and improve the neural network to detect faces better. The examiner disagrees that Applicant’s claimed invention is similar to the referenced claim of Example 39, and Applicant does not provide any supporting evidence other than conclusory statements. The claimed invention of grouping students based on students’ profiles, generating educational content for the group, and updating the models based on the feedback pertains to an abstract idea received and processed by a generic processing unit.
Applicant argues additional elements identified at Step 2A Prong One, in combination with the alleged abstract idea(s), provide a solution to a technological problem encountered by conventional educational platforms.
The Examiner submits that claims 1 and 19 do not require the claimed method and system to be performed on a physical machine. For example, the steps: training a machine learning model ... selecting a group of two or more students … generating, using the machine learning model, an educational challenge … presenting, by the machine learning model, the educational challenge … in response to presenting the educational challenge … fine-tuning the machine learning model … There is no machine positively recited in the claims. At best, a machine learning model is a mathematical model whose parameters are capable of being updated and fine-tuned.
Applicant argues the specification also discloses that "the flow of information is typically one-directional from an instructor to students, and student feedback is typically provided out-of-band by electronic mail." (Id.) Finally, the specification teaches the content generated by conventional educational platforms are "often generic and unrelatable as it is designed for substantially anyone." (Id.) The independent claims recite a solution that addresses the above technological problem by "training a machine learning model to generate educational challenges based on profiles of a plurality of students at an educational institution …
The examiner notes that the use of a computer to automate a mental process has been held to be patent-ineligible if it does not provide any technical advance or improvement. In the present case, the use of the computer in claims 1-3, 6-9, 11-13, 16-21, 23 and 25-27 is meant to automate the mathematical calculation and the adjusting/updating of parameters within a mathematical model (i.e., a machine learning model) and does not provide any technical advance or improvement to the functioning of the computer.
Applicant argues that a claim may be deemed patent-eligible if it "adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present." (Revised Guidance, 56.) … Applicant asserts that the "training ... ," "generating ... ," "presenting ... ," "collecting ... ," and "fine-tuning ... " limitations recite additional elements that do not constitute well-understood, routine, or conventional activity, and that the Office Action does not provide a statement, court decision, or publication indicating that the limitations are widely prevalent or in common use. (Office Action, p. 5.)
The examiner submits that “a student interface”, “an instructor interface”, “at least one processor”, and “at least one memory” are recited at a high level of generality and as performing generic computer functions routinely used in computer applications; each is understood to be a generic computer component of a device with processors and memory. Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP whose telephone number is (571)270-5048. The examiner can normally be reached Monday thru Friday; 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACK YIP/Primary Examiner, Art Unit 3715