DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered. Claims 1-12, 14-15, and 20-25 are pending; claims 13 and 16-19 have been cancelled.
Claim Objections
Claim 25 is objected to because of the following informalities: the third paragraph of claim 25 includes the phrase “the the second voice,” which is believed to be “the second voice”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-7, 11-12, 14-15, 20-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Naufel (US 2024/0379019 A1) in view of Aslan et al. (US 2015/0099255 A1).
Re claims 1, 14, 20:
Naufel teaches 1. A method (Naufel, Abstract) comprising:
receiving a request to provide individualized instruction to a user on a particular subject (Naufel, [0586]);
receiving a plurality of content about the particular subject (Naufel, [0619], “generating generative AI materials for each content delivery type, enhancing the diversity and novelty of learning content”);
assigning different labels to different content from the plurality of content based on a classification of the different content (Naufel, [0464], “Metadata for various media types”; [0302], “Tag Generation: Media processing and tagging module 225 automatically generates a set of relevant tags based on the analysis of the media files. Media processing and tagging module 225 ensures that the generated tags accurately represent the key themes, topics, and concepts of such content to facilitate effective indexing and categorization”; [0303], “Content Indexing and Categorization: Media processing and tagging module 225 integrates the generated tags with graph database 230 of learning platform 200 to index and categorize the media files. Media processing and tagging module 225 enables educational content to be organized in a manner that makes it easy for users to search and access the information they need”; [0304] – [0306]);
generating a first set of customized content associated with a first part of the particular subject from a generative artificial intelligence (AI) that selects and customizes a first set of content from the plurality of content based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user, wherein generating the first set of customized content comprises generating new materials by modifying the first set of content (Naufel, [0022], “AI interfaces 701, 702, and 703 which output personalized, AI generated content to a student learner”; [0261], “LLM 210 is responsible for generating the full structure of learning pathways down to the learning unit level for educational content packages”; [0380], “Content Generation 283 and AI Content Production Module”; [0380] – [0388]; [0388], “content generation 283 and AI content production module ensures that learning platform 200 consistently delivers high-quality, engaging, and personalized educational materials across all content delivery formats, catering to diverse learning styles and preferences”; [0674]);
presenting the first set of customized content through a virtual presenter by animating a first virtual appearance and by generating audio in a first voice according to the first set of customized content and the custom learning preferences (Naufel, [0454], “visual conversational AI (e.g., an expressive AI powered avatar) that leverages multiple factors, such as color, size, sound, and shape, to express its emotional state and facilitate meaningful engagements with learners”; [0542], “Alice can access various content formats such as text, audio, video, and interactive content (see FIG. 6). An embodied, visual conversational AI (e.g., an expressive AI powered avatar) assists Alice throughout her learning journey”; [0383], “Audio Generation: Content generation 283 module may utilize AI to generate audio materials such as podcasts, audiobooks, or narrated presentations by analyzing the learning objectives, creating scripts, and converting them into natural-sounding speech using text-to-speech technology”);
tracking a real-time engagement that the user has with each content from the first set of customized content using a set of sensors (Naufel, [0429], “Emotion Detection: Emotional intelligence module 292 uses advanced emotion recognition technology, such as natural language processing and facial expression analysis, to detect user emotions and sentiments during their learning journey. This allows learning platform 200 to better understand emotional states of the learners and adjust the learning experience appropriately.”; [0288], “Real-time Performance Analysis”; [0400], “Real-time Feedback: As learners engage with learning platform 200, interactive AI tutor 270 delivers immediate and constructive feedback on their performance, helping them to identify areas for improvement and adjust their approach accordingly”; [0247], “Comprehensive Personalization: Learning platform 200 may incorporate adaptive learning algorithms that continuously analyze the performance, progress, and preferences of individual learners and responsively adjust the learning experience delivered by learning platform 200 to such individual learners in real-time. For instance, learning platform 200 may optimize both the pace and content for each individual learner. 
Moreover, conversational AI engagement architecture 205 and interactive AI tutor 270 enable learning platform 200 to deliver personalized guidance, feedback, and mentorship to individual learners throughout the learning process, by adapting responses and support provided by conversational AI engagement architecture 205 and interactive AI tutor 270 modules based on unique needs and goals of individual learners”; [0427]; [0707], “adaptive learning algorithm module 275 of learning platform 200 continuously analyzes performance, preferences, and progress of individual learners to adjust the learning experience in real-time, optimizing both the pace and content for each user, allowing for more dynamic personalization of content and learning pathways”; [0284], “Adaptive learning algorithm module 275 provides adaptive learning algorithms which may continuously analyze performance, preferences, and progress of individual learners through various data points, such as assessment scores, time spent on learning units, engagement patterns, and feedback”);
determining one or more changes to the custom learning preferences in response to tracking in real-time engagement that the user has with a subset of content in the first set of customized content of a first type, a first format, or a first presentation (Naufel, [0022], “AI interfaces 701, 702, and 703 which output personalized, AI generated content to a student learner”; [0261], “LLM 210 is responsible for generating the full structure of learning pathways down to the learning unit level for educational content packages”; [0380], “Content Generation 283 and AI Content Production Module”; [0380] – [0388]; [0388], “content generation 283 and AI content production module ensures that learning platform 200 consistently delivers high-quality, engaging, and personalized educational materials across all content delivery formats, catering to diverse learning styles and preferences”; [0674]; [0476], “i. Adapts content based on needs, preferences, and progress of individual learners”; [0611], “implementing an adaptive and scalable AI-driven personalized learning platform. Such an example includes: a scalable graph database that stores the scope and sequence of learning pathways; a language model enabled by AI large language model 210 that generates the full structure of learning pathways down to the learning unit level; adaptive learning algorithm module 275 that dynamically personalizes content and learning pathways based on individual learners' performance, preferences, and progress; collaborative learning module 295 that fosters engagement and engagement among learners through group discussions, project-based activities, and peer review; learning analytics module 240 that provides insights into learners' progress, engagement, and performance; feedback system 232 uses AI large language model 210 APIs to vote on the relevance of proposed changes to content based on user feedback”);
customizing a second set of customized content associated with a second part of the particular subject to differ from the first set of customized content in real-time while presenting the first set of customized content in response to the one or more changes to the custom learning preferences resulting from the real-time engagement, wherein customizing the second set of customized content comprises selecting a second set of content from the plurality of content with one or more labels that are associated with the one or more changes to the custom learning preferences and that correspond to content of the first type, the first format, or the first presentation (Naufel, [0022]; [0261]; [0380], “Content Generation 283 and AI Content Production Module”; [0380] – [0388]; [0674]; [0288], “adaptive learning algorithm module 275, and other components of learning platform 200 which collect relevant data. Real-time or near real-time analysis enables learning platform 200 to identify areas where learners may need additional support or resource”; [0476], “i. Adapts content based on needs, preferences, and progress of individual learners”; [0611], “adaptive learning algorithm module 275 that dynamically personalizes content and learning pathways based on individual learners' performance, preferences, and progress”; [0707], “adaptive learning algorithm module 275 of learning platform 200 continuously analyzes performance, preferences, and progress of individual learners to adjust the learning experience in real-time, optimizing both the pace and content for each user, allowing for more dynamic personalization of content and learning pathways”; Naufel teaches a dynamic/adaptive learning algorithm module that dynamically personalizes content and learning pathways based on individual learners' performance, preferences, and progress in real time; the dynamically personalized content corresponds to the first/second sets of customized content); and
presenting the second set of customized content to the user through the virtual presenter (Naufel, [0055], “User interface 110 may also include one or more output devices, such as a display screen of a computing device or a touch-sensitive display, including a touch-sensitive display of a mobile computing device. One or more output devices, in some examples, may be configured to provide output to a user using tactile, audio, or video stimuli …”).
Naufel does not explicitly disclose a positive engagement, nor a real-time positive engagement, nor that tracking the real-time engagement comprises analyzing one or more images, sound, and biomechanical feedback of the user as captured by the set of sensors as each content is presented; instead, Naufel teaches dynamic/adaptive learning based on feedback (engagement), such as individual learners' performance, preferences, and progress in real time.
Aslan teaches computer-readable storage media, computing devices, and methods associated with an adaptive learning environment (Aslan, Abstract). Aslan teaches:
tracking a real-time engagement that the user has with each content from the first set of customized content using a set of sensors, wherein tracking the real-time engagement comprises analyzing one or more images, sound, and biomechanical feedback of the user as captured by the set of sensors as each content is presented (Aslan, [0021], “one or more sensors”; fig. 2; fig. 5; Abstract, “dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined”; [0043], “the process may proceed to block 252 where the instructional content type may be changed in an effort to increase the user's level of engagement”; Table 1; [0002], “real-time determination of engagement levels”);
determining one or more changes to the custom learning preferences in response to tracking in real-time positive engagement that the user has with a subset of content in the first set of customized content of a first type, a first format, or a first presentation (Aslan, fig. 2, 252, 250; [0043], “The change in instructional content type may be to an instructional content type defined in the user profile of the respective user or may be determined based on the evolving user state model”; [0044], “If a user's level of engagement drops below the previously described threshold, then adaptation module 124 may cooperate with instruction module 128 to dynamically adapt the instructional content and change the content type from a current content type”; fig. 5; [0014], “The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. For example, the instruction module may present instructional content to the user in the form of a multimedia presentation … the adaptation module may monitor an engagement level of the user. If the adaptation module determines that the user's engagement level is decreasing, the adaptation module may cooperate with the instruction module to adapt the instructional content presentation to an interactive presentation, such as a game, in an effort to increase the engagement level of the user”) …
presenting the first set of customized content and tracking the real-time engagement in response to the one or more changes to the custom learning preferences resulting from the real-time positive engagement … (Aslan, fig. 2, 252 - “engagement level above threshold?”; [0043], “If the engagement level is below the threshold, the process may proceed to block 252 where the instructional content type may be changed in an effort to increase the user's level of engagement”; [0027], “adapt the instructional content by notifying instruction module 128 of the decrease in the user's engagement level and/or utilizing the programmable parameters to cause instruction module 128 to adapt the instructional content”)
customizing a second set of customized content associated with a second part of the particular subject to differ from the first set of customized content in real-time while presenting the first set of customized content in response to the one or more changes to the custom learning preferences resulting from the real-time positive engagement (Aslan, fig. 2, 252, 250; [0043], “The change in instructional content type may be to an instructional content type defined in the user profile of the respective user or may be determined based on the evolving user state model”; [0044], “If a user's level of engagement drops below the previously described threshold, then adaptation module 124 may cooperate with instruction module 128 to dynamically adapt the instructional content and change the content type from a current content type”; fig. 5).
Therefore, in view of Aslan, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform/computer program described in Naufel, by providing the real-time positive engagement and related sensors as taught by Aslan, since Aslan suggests that the adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined (Aslan, [0014]). Once the user's engagement level decreases below a predefined threshold, the adaptation module may be configured to cooperate with the instruction module to adapt the instructional content by notifying instruction module 128 of the decrease in the user's engagement level and/or utilizing the programmable parameters to cause the instruction module to adapt the instructional content (Aslan, [0027]).
Therefore, in view of Aslan, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform/computer program described in Naufel, by providing a physiological sensor for monitoring engagement level as taught by Aslan, in order to detect different types of engagement, since measurable external indicators may be associated with a plurality of types of engagement, such as behavioral engagement, cognitive engagement, and/or emotional engagement. As used herein, behavioral engagement may correspond to effort, persistence, attention, and/or participation; cognitive engagement may correspond with a commitment by the user to understand the instructional content or any other form of psychological investment in the instructional content; and emotional engagement may correspond with the feelings of the user with respect to the learning process, such as fun, excitement, and/or enjoyment (Aslan, [0031]).
14. An education platform comprising:
a system with one or more hardware processors configured to:
receive a request to provide individualized instruction to a user on a particular subject;
receive a plurality of content about the particular subject;
assign different labels to different content from the plurality of content based on a classification of the different content;
generate a first set of customized content associated with a first part of the particular subject from a generative artificial intelligence (AI) that selects and customizes a first set of content from the plurality of content based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user, wherein generating the first set of customized content comprises generating new materials by modifying the first set of content;
present the first set of customized content through a virtual presenter by animating a first virtual appearance and by generating audio in a first voice according to the first set of customized content and the custom learning preferences;
track a real-time engagement that the user has with each content from the first set of customized content using a set of sensors, wherein tracking the real-time engagement comprises analyzing one or more images, sound, and biomechanical feedback of the user as captured by the set of sensors as each content is presented;
determine one or more changes to the custom learning preferences in response to tracking in real-time positive engagement that the user has with a subset of content in the first set of customized content of a first type, a first format, or a first presentation;
customize a second set of customized content associated with a second part of the particular subject to differ from the first set of customized content in real-time while presenting the first set of customized content and tracking the real-time engagement based on the one or more changes to the custom learning preferences, wherein customizing the second set of customized content comprises selecting a second set of content from the plurality of content with one or more labels that are associated with the one or more changes to the custom learning preferences and that correspond to content of the first type, the first format, or the first presentation; and
present the second set of customized content to the user through the virtual presenter (See claim 1 above for citations and motivations).
20. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of an education platform, cause the education platform to perform operations comprising:
receiving a request to provide individualized instruction to a user on a particular subject;
receiving a plurality of content about the particular subject;
assigning different labels to different content from the plurality of content based on a classification of the different content;
generating a first set of customized content associated with a first part of the particular subject from a generative artificial intelligence (AI) that selects and customizes a first set of content from the plurality of content based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user, wherein generating the first set of customized content comprises generating new materials by modifying the first set of content;
presenting the first set of customized content through a virtual presenter by animating a first virtual appearance and by generating audio in a first voice according to the first set of customized content and the custom learning preferences;
tracking a real-time engagement that the user has with each content from the first set of customized content using a set of sensors, wherein tracking the real-time engagement comprises analyzing one or more images, sound, and biomechanical feedback of the user as captured by the set of sensors as each content is presented;
determining one or more changes to the custom learning preferences in response to tracking in real-time positive engagement that the user has with a subset of content in the first set of customized content of a first type, a first format, or a first presentation;
customizing a second set of customized content associated with a second part of the particular subject to differ from the first set of customized content in real-time while presenting the first set of customized content in response to the one or more changes to the custom learning preferences resulting from the real-time positive engagement, wherein customizing the second set of customized content comprises selecting a second set of content from the plurality of content with one or more labels that are associated with the one or more changes to the custom learning preferences and that correspond to content of the first type, the first format, or the first presentation; and
presenting the second set of customized content to the user through the virtual presenter (See claim 1 above for citations and motivations).
Re claim 2:
2. The method of claim 1, wherein tracking the real-time engagement comprises: analyzing facial expressions and voice sentiment when the user is presented with each content from the first set of customized content; and classifying the engagement with each content of the first set of customized content based on the facial expressions and the voice sentiment (Aslan, fig. 1, 110; [0021], “facial motion capture 112, eye tracking 114, speech recognition 116, and/or gesture/posture”; [0033], “Head pose and facial expression algorithms may be utilized and may be based, for example, on the data acquired by the 2D camera. Eye gaze and region of focus algorithms may be utilized and may be based, for example, on eye tracking hardware. Arousal and excitement data may be utilized and may be based, for example, on pupil dilation data, skin conductance data, and/or heart rate data”).
Re claim 3:
3. The method of claim 1, wherein customizing the second set of customized content comprises: presenting a different ratio of images and videos with the second set of customized content than with the first set of customized content (Aslan, fig. 2, 212 - “Multimedia (e.g., video)”; 210 – “Linear (e.g., text)”; [0044], “dynamically adapt the instructional content and change the content type from a current content type (e.g., linear content 210) to another content type (e.g., multimedia content 212)”; multimedia/video content presents a greater ratio of images and videos than linear/text content).
Re claim 6:
6. The method of claim 1, wherein customizing the second set of customized content comprises:
changing between a first format of evaluating user understanding of the particular subject in the first set of customized content to a second format of evaluating user understanding of the particular subject in the second set of customized content in response to the custom learning preferences indicating a negative engagement with the first format (Aslan, fig. 2, 252, 250; [0043], “The change in instructional content type may be to an instructional content type defined in the user profile of the respective user or may be determined based on the evolving user state model”; [0044], “If a user's level of engagement drops below the previously described threshold, then adaptation module 124 may cooperate with instruction module 128 to dynamically adapt the instructional content and change the content type from a current content type”; fig. 5).
Re claim 7:
7. The method of claim 1, wherein customizing the second set of customized content comprises: dynamically generating the second set of customized content by changing one or more of a type, format, or presentation style of the second set of content (Naufel, [0287], “adaptive learning algorithm module 275 may modify the difficulty level, format, or focus of the content to better suit current skill levels and learning styles of individual learners”; [0679]; Aslan, Abstract, “plurality of instructional content types”; fig. 2, 242 - “Initial State?”; [0029], “an initial state may be a state where an initial instructional content type”; Abstract, “dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined”; [0043], “the process may proceed to block 252 where the instructional content type may be changed in an effort to increase the user's level of engagement”; [0027], “adapt the instructional content by notifying instruction module 128 of the decrease in the user's engagement level and/or utilizing the programmable parameters to cause instruction module 128 to adapt the instructional content”).
Re claim 11:
11. The method of claim 1 further comprising: receiving a request to provide individualized instruction to a second user on the particular subject, wherein the second user is different than the user; and presenting a third set of customized content associated with the first part of the particular subject to the second user based on the second user being associated with different learning preferences than the user (Naufel, [0010], “dynamically adapting to the needs of individual learners, to the benefit of such individual learners within an educational context”; Aslan, [0051], “enable the administrator to access more in-depth instructional information associated with the individual learners”; [0153], “means for receiving an indicator that one or more students is in need of tutoring”).
Re claim 12:
12. The method of claim 1 further comprising: retrieving a personalized learning model of the user that stores the custom learning preferences in response to the request; and modifying the personalized learning model to include the one or more changes to the custom learning preferences (Naufel, [0012], “processing circuitry may return as output to the new student learner, the learning unit contextualized by the large language model”; [0075], “large language model (“LLM”) 210 may be trained to detect and write queries based on any input obtained in relation to user 300”; Aslan, Abstract, “dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined”; [0043], “the process may proceed to block 252 where the instructional content type may be changed in an effort to increase the user's level of engagement”; [0027], “adapt the instructional content by notifying instruction module 128 of the decrease in the user's engagement level and/or utilizing the programmable parameters to cause instruction module 128 to adapt the instructional content”).
Re claim 15:
15. The education platform of claim 14 further comprising: a user device that is communicably coupled to the system, the user device comprising: a speaker; a display; and the set of sensors (Naufel, fig. 14; [0562]; Aslan, [0044]; [0055]; [0024]).
Re claim 21:
21. The method of claim 1 further comprising: receiving the biomechanical feedback from one or more wearable devices worn by the user; and determining the positive engagement that the user has with the subset of content based on a first set of biomechanical feedback received during a presentation of the subset of content, and negative engagement that the user has with other content from the first set of customized content based on a different second set of biomechanical feedback received during a presentation of the other content (Aslan, [0021]; [0033]).
Re claim 22:
22. The method of claim 21, wherein receiving the biomechanical feedback comprises:
tracking one or more of a heart rate, blood pressure, and oxygen level of the user when presented with different content from the first set of customized content (Aslan, [0033]).
Re claim 24:
24. The method of claim 1 further comprising: monitoring environment conditions around the user during said presenting of the first set of customized content, wherein monitoring the environment conditions comprises measuring one or more of lighting conditions, noisiness, and temperature of an environment in which the user is located (Aslan, [0021], “one or more sensors”; fig. 2; fig. 5; Abstract, “dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined”; [0043], “the process may proceed to block 252 where the instructional content type may be changed in an effort to increase the user's level of engagement”); wherein tracking the real-time engagement comprises: defining a first set of the custom learning preferences to correspond to one or more of a first lighting condition, a first amount of noise, or a first temperature that is measured during the positive engagement that the user has with the subset of content (Aslan, [0004], “contextual variables affecting learners (e.g., environmental conditions-such as weather, light, and noise)”; [0087]; [0111]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Van Hickman (US 2023/0169268 A1).
Re claim 4:
Naufel does not explicitly disclose substituting words based on complexity. Van Hickman teaches an invention that helps a student learn how to read by providing text at a reading level suitable for learning (Van Hickman, Abstract). Van Hickman teaches 4. The method of claim 1, wherein customizing the second set of customized content comprises: modifying a complexity of words used to explain the particular subject with the second set of customized content relative to a complexity of words used to explain the particular subject with the first set of customized content (Van Hickman, [0031]; [0117], “The complexity level is used to select words to substitute with words in the input text 1212”; [0118], “Once the candidate words are selected based on the assigned complexity level, a candidate replacement text is generated by substituting one or more target words in the textual content with one or more candidate replacement word”). Therefore, in view of Van Hickman, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform described in Naufel, by substituting words as taught by Van Hickman, in order to provide candidate words appropriate to the reading level of the student (Van Hickman, pg. 15, claim 3).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Allon et al. (US 2016/0180731 A1).
Re claim 5:
Naufel does not explicitly disclose including a number of examples.
Allon teaches a system and method for predicting student engagement respective of a learning artifact including at least one question (Allon, Abstract). Allon teaches 5. The method of claim 1, wherein customizing the second set of customized content comprises: increasing a number of examples that are provided with the second set of customized content than with the first set of customized content based on the custom learning preferences indicating the positive engagement with examples (Allon, [0038], “a recommendation may be to provide additional textual information explaining the material to increase student engagement”). Therefore, in view of Allon, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform described in Naufel, by providing the additional learning material as taught by Allon, since a recommendation may be to provide additional textual information explaining the material to increase student engagement … when the recommendation is to provide increased textual information, textual information explaining a particular aspect of the question may be automatically retrieved and sent to devices utilized by students (Allon, [0038]).
Claims 8 - 9 are rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Publicover et al. (US 2021/0390876 A1).
Re claims 8 - 9:
Naufel does not explicitly disclose an animated human form. Publicover teaches systems and methods to enact machine-based, simultaneous classification of emotional and cognitive states of an individual in substantially real time (Publicover, Abstract). Publicover teaches 8. The method of claim 1,
generating the virtual presenter with the first visual appearance and the first voice that mirror a deepfake clone of a first person that is a subject of one or more of the first set of customized content; and changing the first virtual appearance and the first voice by generating a different deepfake clone of a second person that is a subject of one or more of the second set of customized content (Publicover, [0098], “The new knowledge may be presented to the young girl 60a via audiovisual exchanges with the grandparent 61a, a cartoon-like character 64a on a display 62a, or a combination of such human and machine-based interaction modes”). 9. The method of claim 8, wherein presenting the first set of customized content comprises presenting the one or more of the first set of content through the deepfake clone of the first person; and wherein presenting the second set of customized content comprises presenting the one or more of the second set of content through the different deepfake clone of the second person (Publicover, [0079], “methods of delivery (e.g., podcast, audiovisual, text, drawing”; [0075]; [0336]). Therefore, in view of Publicover, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform described in Naufel, by providing human interaction entities (HIEs) as taught by Publicover, since AI components may additionally express one or more AI "personalities" (AIPs), "characters," or "companions" to implement familiar, socially acceptable and more effective communication experiences. The HIE may manage, identify and/or perform communication experiences that maintain emotional engagement, assess knowledge understanding by a learner, and enhance methods for teaching (Publicover, [0015]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Bedor et al. (US 2020/0051460 A1).
Re claim 10:
Naufel does not explicitly disclose a textbook. Bedor teaches that an educational game (and learning management system and methods pertaining to the same) can be configured for the effective teaching of advanced educational curriculum (Bedor, Abstract). Bedor further teaches 10. The method of claim 1 further comprising:
extracting a plurality of source content on the particular subject from a digital copy of an approved textbook; supplementing the plurality of source content with supplemental content from a plurality of external sources, wherein the supplemental content comprises images, videos, and additional text for topics referenced in the plurality of source content; wherein presenting the first set of customized content comprises presenting a first amount of content from the plurality of source content and the supplemental content to the user (Bedor, [0095], “tutoring modules 158 also look for opportunities to interject secondary or supplementary content”; [0181], “The story supplements 528 are short in-game dialog elements and cues that build-upon or reiterate aspects of the educational curriculum 302”); and wherein customizing the second set of customized content comprises selecting a second amount of content from the plurality of source content and the supplemental content, wherein the second amount of customized content includes less content from the plurality of source content and more content from supplemental content than the first set of customized content (Bedor, [0015], “Most of the available educational materials are either mere digitization of traditional textbooks (per step 6, above) or virtual simulators”; [0118], “The educational curriculum for a given subject includes: course syllabi; textbooks; news articles; classroom presentations; laboratory experiments; simulations; videos and mixed-media presentations of the focal subject matter (e.g., videos from corporate and academic STEM professionals and experts); tests, quizzes, and end of the year exams ( e.g., Common Core, SAT and AP exams)”).
Therefore, in view of Bedor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/platform described in Naufel, by providing a source such as a textbook as taught by Bedor, since a textbook has been known to provide a reliable and proven source of material for a student.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Ryan (US 2023/0042641 A1).
Re claim 23:
Naufel does not explicitly disclose 23. The method of claim 1, wherein tracking the real-time engagement comprises: measuring a time it takes for the user to respond to different prompts included during said presenting of the first set of customized content; and determining a positive engagement based on the time it takes for the user to respond to a particular prompt being less than an average response time of the user. Ryan teaches an invention for creatively assisting learning, such as reading and learning a language (Ryan, Abstract). Ryan teaches the missing feature (Ryan, [0088]). Therefore, in view of Ryan, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Naufel, by providing the task duration as taught by Ryan, in order to measure the engagement of a child with a reading task (Ryan, [0088]).
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Naufel and Aslan as applied to claim 1 above, and further in view of Sha et al. (US 2023/0237922 A1).
Re claim 25:
Naufel teaches an expressive AI-powered avatar, but does not explicitly disclose changing, for presentation of the second set of customized content, the first virtual appearance of the virtual presenter to a second virtual appearance and the first voice of the virtual presenter to a second voice based on the one or more changes to the custom learning preferences. Naufel teaches 25. The method of claim 1 further comprising: generating, with the generative AI, a virtual presenter with a first visual appearance and with a first voice that appeal to and that are based on the custom learning preferences of the user for presentation of the first set of customized content (Naufel, [0454], “AI: Learning platform 200 may utilize an embodied, visual conversational AI (e.g., an expressive AI powered avatar) that leverages multiple factors, such as color, size, sound, and shape, to express its emotional state and facilitate meaningful engagements with learners”; [0542], “visual conversational AI (e.g., an expressive AI powered avatar) assists Alice throughout her learning journey, leveraging multiple factors such as color, size, sound, and shape to express an emotional state and engage with her effectively”; [0382], “Video Generation: Content generation 283 module may utilize AI algorithms to analyze the learning objectives and automatically generate video scripts, storyboards, and animations, ensuring that the content visually explains concepts and ideas to learners in an engaging manner”; [0383], “Audio Generation: Content generation 283 module may utilize AI to generate audio materials such as podcasts, audiobooks, or narrated presentations by analyzing the learning objectives, creating scripts, and converting them into natural-sounding speech using text-to-speech technology”; [0384]);
Sha teaches methods, apparatus, and processor-readable storage media for artificial intelligence-driven avatar-based personalized learning techniques (Sha, Abstract). Sha teaches
changing, for presentation of the second set of customized content, the first virtual appearance of the virtual presenter to a second virtual appearance and the first voice of the virtual presenter to a second voice based on the one or more changes to the custom learning preferences; and wherein presenting the second set of customized content comprising presenting the second set of customized content with the virtual presenter having the second virtual appearance and the the second voice (Sha, [0048], “an artificial intelligence-based instructor avatar can communicate with a given student in the student's native language and/or regional dialect (e.g., without a foreign accent)”; [0053], “based on the student's determined level of engagement, the artificial intelligence-based instructor avatar's presented facial expressions can change according to the inputs and/or needs of the student”; [0057], “one or more customized avatar responses (for example, changes in how the avatar looks or sounds (e.g., in reaction a student answering a question correctly, in response to a student who may not be paying close attention, etc.), based on factors such as cultural norms, student preferences, etc., can be produced in addition to and/or independent of the speech-based questions”; [0068], “one or more modifications to at least one instructor avatar includes configuring communication to the user through the at least one instructor avatar in a language preferred by the user, modifying at least one facial expression exhibited by the at least one instructor avatar to the user, and/or modifying a tone of communication output from the at least one instructor avatar to the user”).
Therefore, in view of Sha, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Naufel, by changing the avatar’s appearance and tone as taught by Sha, since Sha suggests that based on the student's determined level of engagement, the artificial intelligence-based instructor avatar's presented facial expressions can change according to the inputs and/or needs of the student. Additionally, the artificial intelligence-based instructor avatar's output expression and/or tone of response(s) can similarly be modified based at least in part on assessing which types of responses have historically been more helpful for similar scenarios (Sha, [0053]).
Response to Arguments
Applicant's arguments filed 12/12/2025 have been fully considered but they are not persuasive.
Applicant argues:
First, the cited prior art does not disclose or suggest "assigning different labels to different content from the plurality of content based on a classification of the different content". No content classification, tagging, or labeling is mentioned in Aslan, Sha, or the other cited references.
Second, the cited prior art does not disclose or suggest "generating a first set of customized content associated with a first part of the particular subject from a generative artificial intelligence (AI) that selects and customizes a first set of content from the plurality of content based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user, wherein generating the first set of customized content comprises generating new materials by modifying the first set of content".
The cited references do not disclose or suggest selecting the content to customize and present "based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user". As noted above, Aslan and the other cited references do not disclose or suggest assigning labels to the selectable content based on a classification of the content. Accordingly, the selection of content in Aslan and the other cited references cannot be "based on one or more labels assigned to the first set of content being associated with custom learning preferences of the user".
Third, the cited references do not disclose or suggest: "determining one or more changes to the custom learning preferences in response to tracking in real-time positive engagement that the user has with a subset of content in the first set of customized content of a first type, a first format, or a first presentation; customizing a second set of customized content associated with a second part of the particular subject to differ from the first set of customized content in real-time while presenting the first set of customized content in response to the one or more changes to the custom learning preferences resulting from the real-time positive engagement, wherein customizing the second set of customized content comprises selecting a second set of content from the plurality of content with one or more labels that are associated with the one or more changes to the custom learning preferences and that correspond to content of the first type, the first format, or the first presentation".
In response, the newly cited reference Naufel (US 2024/0379019 A1) teaches assigning different labels to different content (Naufel, [0464], “Metadata for various media types”; [0302], “Tag Generation: Media processing and tagging module 225 automatically generates a set of relevant tags based on the analysis of the media files. Media processing and tagging module 225 ensures that the generated tags accurately represent the key themes, topics, and concepts of such content to facilitate effective indexing and categorization”; [0303], “Content Indexing and Categorization: Media processing and tagging module 225 integrates the generated tags with graph database 230 of learning platform 200 to index and categorize the media files. Media processing and tagging module 225 enables educational content to be organized in a manner that makes it easy for users to search and access the information they need”; [0304] – [0306]).
Aslan teaches changing content based on engagement level of the student (Aslan, fig. 2, 252, 250; [0043], “The change in instructional content type may be to an instructional content type defined in the user profile of the respective user or may be determined based on the evolving user state model”; [0044], “If a user's level of engagement drops below the previously described threshold, then adaptation module 124 may cooperate with instruction module 128 to dynamically adapt the instructional content and change the content type from a current content type”; fig. 5; [0014], “The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. For example, the instruction module may present instructional content to the user in the form of a multimedia presentation … the adaptation module may monitor an engagement level of the user. If the adaptation module determines that the user's engagement level is decreasing, the adaptation module may cooperate with the instruction module to adapt the instructional content presentation to an interactive presentation, such as a game, in an effort to increase the engagement level of the use”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP whose telephone number is (571)270-5048. The examiner can normally be reached Monday through Friday, 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACK YIP/Primary Examiner, Art Unit 3715