Prosecution Insights
Last updated: April 19, 2026
Application No. 18/734,860

SYSTEMS AND METHODS FOR GENERATING ADAPTIVE ARTIFICIAL INTELLIGENCE-BASED COURSE TEMPLATES USING REAL-TIME FEEDBACK

Final Rejection (§101, §103)
Filed: Jun 05, 2024
Examiner: YONO, RAVEN E
Art Unit: 3694
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Pearson Education Inc.
OA Round: 2 (Final)
Grant Probability: 39% (At Risk)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 39% (69 granted / 175 resolved; -12.6% vs TC avg)
Interview Lift: +32.5% (resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 32 currently pending
Career History: 207 total applications across all art units
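The headline figures in the examiner panel are simple ratios of the raw counts. A minimal sketch reproducing them (the helper name `pct` is illustrative, and the implied Tech Center average is back-solved from the stated -12.6% delta rather than reported directly):

```python
# Reproduce the headline examiner statistics from the raw counts shown above.

def pct(numerator: int, denominator: int) -> float:
    """Percentage, rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

granted, resolved = 69, 175          # career totals for this examiner
career_allow_rate = pct(granted, resolved)

# The report states the rate is 12.6 points below the Tech Center average,
# so the implied TC average follows directly from the delta.
delta_vs_tc = -12.6
implied_tc_avg = round(career_allow_rate - delta_vs_tc, 1)

print(f"Career allow rate: {career_allow_rate}%")   # 39.4%, displayed as 39%
print(f"Implied TC average: {implied_tc_avg}%")     # 52.0%
```
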

Statute-Specific Performance

§101: 40.5% (+0.5% vs TC avg)
§103: 31.3% (-8.7% vs TC avg)
§102: 3.0% (-37.0% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 175 resolved cases
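The per-statute deltas are internally consistent: back-solving each row against its "vs TC avg" figure implies the same Tech Center baseline of roughly 40%. A quick check (the baseline is inferred here from the deltas, not stated in the report):

```python
# Each "vs TC avg" delta is the examiner's per-statute rate minus the
# Tech Center estimate. Back-solving every row yields the same ~40.0% baseline.
rates  = {"101": 40.5, "103": 31.3, "102": 3.0,   "112": 19.9}
deltas = {"101": +0.5, "103": -8.7, "102": -37.0, "112": -20.1}

implied_baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_baselines)  # every statute implies a 40.0% TC baseline
```
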

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

• This action is in reply to the amendments filed on December 8, 2025.
• Claims 1-2, 9-18, and 20 have been amended and are hereby entered.
• Claims 1-20 are currently pending and have been examined.
• This action is made FINAL.

Information Disclosure Statement

The Information Disclosure Statement(s) filed on 08/19/2025 have been considered. Initialed copies of the Form 1449 are enclosed herewith.

Response to Arguments

Applicant's arguments filed December 8, 2025 have been fully considered but they are not persuasive.

Applicant's arguments with respect to 35 USC § 101 have been fully considered and are not persuasive. Regarding Applicant's argument on pages 9-10 that the claims do not recite an abstract idea, the Examiner respectfully disagrees. As indicated in the 35 USC § 101 rejection below, the claimed invention allows an instructor user to create an education course for learner users. The Specification at [0013] states: "Existing course structures are typically static, meaning they don't evolve based on learner feedback or performance data. This rigidity can result in content becoming outdated or less effective over time. Updating course content has traditionally been a manual and time-consuming process, often requiring significant effort from educators and instructional designers. This process can be inefficient, and slow or unable to respond to emerging educational needs. While some level of personalization is possible in modern educational tools, the tools often lack depth and real-time adaptability, limiting their effectiveness in addressing individual learning styles and needs. Many educational platforms collect vast amounts of data on student engagement and performance, but this data is often underutilized in informing course design and adaptation of content to individual learning styles, preferences, or approaches. Accordingly, some technical challenges in the field of course authoring software and systems include implementation of one-size-fits-all course design, static course structures, manual course update processes, limited personalization ability (if at all), ineffective utilization of data, and the like."

The Specification and claims focus on an improvement to the process of instructors creating courses for students, which is managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. This falls within the category of Certain Methods of Organizing Human Activity and is therefore an abstract idea.

Regarding Applicant's arguments on pages 10-12 that the claims integrate a practical application, the Examiner respectfully disagrees. Under the Patent Subject Matter Eligibility analysis, Step 2A, Prong Two, integration into a practical application requires an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. Limitations that are not indicative of integration into a practical application are those that generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Here the claims recite a system for implementing adaptive artificial intelligence-based course template generation, the system comprising: a processing system including one or more electronic processors, the processing system configured to perform claim functions; a processing system including one or more electronic processors; a non-transitory, computer-readable medium storing instructions that, when executed by a processing system including one or more electronic processors, perform a set of functions, the set of functions comprising claim functions; a retriever-augmented generation (RAG) model; an artificial intelligence (AI) engine; one or more databases; a communication network; a client device; and display via a graphical user interface. These elements amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (e.g., a computer network) (see MPEP 2106.05(h)).

Furthermore, in response to Applicant's arguments on pages 11-12 that the claims reflect an improvement to artificial intelligence technology, in determining whether a claim integrates a judicial exception into a practical application, a determination is made of whether the claimed invention pertains to an improvement in the functioning of the computer itself or any other technology or technical field (i.e., a technological solution to a technological problem). Here, the claims recite generic computer components, i.e., a generic processor and a memory storing a computer program executable by the processor to perform the claimed method steps and system functions. The processor, memory, and system are recited at a high level of generality and are recited as performing generic computer functions customarily used in computer applications.
Furthermore, the Specification describes a problem and improvement to a business or commercial process at least at [0013], describing underutilized data for informing course design and adaptation to individual learning styles, and addressing the challenges of one-size-fits-all course design, static course structures, manual course update processes, limited personalization ability (if at all), and ineffective utilization of data.

Regarding Applicant's arguments on page 12 that the claims recite significantly more, the Examiner respectfully disagrees. The limitations are directed to an abstract idea, and when determining whether the claims are directed to significantly more, the additional limitations of the claims beyond the abstract idea are analyzed. In the instant application, the additional elements of the claims include a system for implementing adaptive artificial intelligence-based course template generation, the system comprising: a processing system including one or more electronic processors, the processing system configured to perform claim functions; a processing system including one or more electronic processors; a non-transitory, computer-readable medium storing instructions that, when executed by a processing system including one or more electronic processors, perform a set of functions, the set of functions comprising claim functions; a retriever-augmented generation (RAG) model; an artificial intelligence (AI) engine; one or more databases; a communication network; a client device; and display via a graphical user interface. The additional limitations, when considered both individually and in combination, do not effect an improvement to another technology or technological field; the claims do not amount to an improvement to the functioning of the computer itself; and the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment.
Therefore, the claims merely amount to generally linking the use of the abstract idea to a particular technological environment or field of use (e.g., a computer network), and are considered to amount to nothing more than requiring a generic computer network to carry out the abstract idea itself. The specifics of the abstract idea do not overcome the rejection.

Regarding Applicant's arguments on page 12 that the claims do not attempt to preempt all ways of performing the abstract idea, the argument has been considered and is not persuasive. In response to this argument, it is noted that "while preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility." Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379 (Fed. Cir. 2015). The instant application is reviewed within the framework of the Revised Guidance, which specifies and particularizes the Mayo/Alice framework.

Regarding Applicant's arguments on page 12 that a finding of significantly more is supported by the novelty and non-obviousness of the claims, the argument has been considered and is not persuasive. As an initial matter, the claims are not found to be non-obvious, in view of the § 103 rejection below. Furthermore, in response to this argument, it is noted that the inventiveness inquiry of § 101 should not be confused with the separate novelty inquiry of § 102 or the obviousness inquiry of § 103. A novel and non-obvious claim directed to a purely abstract idea is, nonetheless, patent ineligible. See Mayo, 566 U.S. at 79. "Even assuming that is true, it does not avoid the problem of abstractness." Affinity Labs, 838 F.3d at 1263; Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716 (Fed. Cir. 2014) ("That some of [these] steps were not previously employed in this art is not enough—standing alone—to confer patent eligibility upon the claims."). Indeed, "a claim for a new abstract idea is still an abstract idea." Synopsys, Inc. v.
Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (explaining that the search for an inventive concept under § 101 is distinct from demonstrating novelty under § 102). The claims are not patent eligible.

Applicant's arguments with respect to 35 USC § 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

For the reasons above, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea without significantly more. Independent claims 1, 14, and 17 are directed to a system (claim 1), a method (claim 14), and an apparatus (claim 17). Therefore, on its face, each of independent claims 1, 14, and 17 is directed to a statutory category of invention under Step 1 of the Patent Subject Matter Eligibility analysis (see MPEP 2106.03).

Under Step 2A, Prong One of the Patent Subject Matter Eligibility analysis (see MPEP 2106.04), claims 1, 14, and 17 recite, in part, a system, a method, and an apparatus of organizing human activity.
Claim 1 recites a system for implementing adaptive artificial intelligence-based course template generation; receive a request to generate a first course template for a course; retrieve, with a model, user data that is contextually relevant to the request, wherein the retrieved user data comprises one or more of instructor user profiles and one or more learner user profiles associated with one or more learner users of the course; synthesize the user data to determine a set of patterns for the user data; generate a set of recommendations based on the set of patterns; generate, based on the set of recommendations, a first course template for the course, wherein the first course template is personalized for the one or more learner users; generate a first set of learning course content that adheres to the first course template for the course; and transmit the first set of learning course content for display as a learning course content.

Claim 14 recites a method of implementing adaptive artificial intelligence-based course template generation, the method comprising: retrieving, using a model, user data that is contextually relevant to a course, wherein the retrieved user data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course; receiving, while the course is in progress, feedback data associated with a first set of learning course content for the course, the first set of learning course content adhering to the first course template for the course; providing the feedback data in order to determine a recommended course template modification; generating a second course template for the course based on the recommended course template modification; generating a second set of learning course content that adheres to the second course template for the course; and transmitting the second set of learning course content for display as a learning course content.
Claim 17 recites receiving a request to generate a first course template for a course; retrieving, with a model, user data that is contextually relevant to the request, wherein the retrieved user data comprises one or more instructor profiles and one or more learner user profiles associated with one or more learner users of the course; generating the first course template for the course, the first course template identifying a first set of learning course content that adheres to the first course template for the course, wherein the first course template is personalized for the one or more learner users; transmitting the first set of learning course content for display as a learning course content; receiving feedback data associated with the first set of learning course content; generating a second course template for the course based on the feedback data, the second course template identifying a second set of learning course content that adheres to the second course template for the course; and transmitting the second set of learning course content for display.

The limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people (certain methods of organizing human activity), but for the recitation of generic computer components. The claims as a whole recite a method of organizing human activity. The claimed invention allows an instructor user to create an education course for learner users, which is managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. The mere nominal recitation of an AI engine does not take the claims out of the methods of organizing human activity grouping. Thus, the claims recite an abstract idea.
Under Step 2A, Prong Two of the Patent Subject Matter Eligibility analysis (see MPEP 2106.04), the judicial exception is not integrated into a practical application. In particular, the additional elements of a system for implementing adaptive artificial intelligence-based course template generation, the system comprising: a processing system including one or more electronic processors, the processing system configured to perform claim functions; a processing system including one or more electronic processors; a non-transitory, computer-readable medium storing instructions that, when executed by a processing system including one or more electronic processors, perform a set of functions, the set of functions comprising claim functions; a retriever-augmented generation (RAG) model; an artificial intelligence (AI) engine; one or more databases; a communication network; a client device; and display via a graphical user interface are recited at a high level of generality (i.e., as generic computer components performing generic computer functions of receiving a request to generate a course, generating recommendations and a course template based on the request and user data, and providing the course to a user), such that they amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (e.g., a computer network) (see MPEP 2106.05(h)). Accordingly, the combination of the additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Under Step 2B of the Patent Subject Matter Eligibility analysis (see MPEP 2106.05), the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the claims amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Generally linking the use of the judicial exception to a particular technological environment or field of use using generic computer components cannot provide an inventive concept. The claims are not patent eligible.

The dependent claims have been given the full two-part analysis, including analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 for the same reasoning as above, and because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Dependent claims 2-13, 15-16, and 18-20 simply help to define the abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea. Viewing the claim limitations as an ordered combination does not add anything further than looking at the claim limitations individually. When viewed either individually or as an ordered combination, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea. Accordingly, claims 1-20 are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 10-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240370804 A1 ("Wolochow") in view of US 20240379019 A1 ("Naufel").

Regarding claim 1, Wolochow discloses a system for implementing adaptive artificial intelligence-based course template generation, the system comprising: a processing system including one or more electronic processors, the processing system configured to (see at least [0093]-[0095].): receive a request to generate a first course template for a course (Receiving course input data, see at least [0075] and FIG. 5A, step 501.
Prompting to generate a course, see at least [0014].); retrieve, with a model of an artificial intelligence (AI) engine, user data that is contextually relevant to the request (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025].), synthesize, with the AI engine, the user data to determine a set of patterns for the user data (Prompting an LLM to perform a generative AI process for creating a refined list of course-level learning objectives. As described herein, the course-level learning objectives can be based on input received from the author, for example, information about the course topic, learning objectives input by the author, a target audience for the course, a target duration for the course, and content items provided by the author. See at least [0077]. See also FIG. 5B, step 510.); generate, with the AI engine, a set of recommendations based on the set of patterns (An LLM can be prompted to perform a generative AI process for creating a module-level outline for the course that includes a list of module titles and descriptions of the modules based on the learning objectives generated in stage 510 and based on content items provided by the author. See at least [0078]. See also FIG. 5B, step 512.); generate, based on the set of recommendations, the first course template for the course (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.), wherein the first course template is personalized for the one or more learner users (Customizing the course, see at least [0074]. See also [0023] and [0028]. 
Customized courses presented for the students, see at least [0031].); generate a first set of learning course content that adheres to the first course template for the course (A backend process can kickoff a process for searching for existing course content within database repositories of an online learning platform based on information generated in upstream steps of the workflow. An LLM can be prompted to perform a generative AI process for creating item-level learning objectives based on course-level and module-level learning objectives generated in upstream steps of the workflow. See at least [0080]-[0081]. See also FIG. 5B, step 516-518.); and transmit, via a communication network, the first set of learning course content to a client device for display as a learning course content rendering via a graphical user interface (the user interface can provide a summary of the content, the name of the source course and institution or educator for the content, the length of the content, and other such descriptive metadata. In addition, the user interface can enable the opportunity to review the content itself, for example, by clicking a link to play video content within the user interface or to display text content within the user interface. See at least [0070]. See also [0071]-[0072]. See also [0096]-[0097].).

While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG) model. Furthermore, while Wolochow discloses retrieving data, Wolochow does not expressly disclose retrieving from one or more databases. Furthermore, while Wolochow discloses retrieved user data, Wolochow does not expressly disclose the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course.
However, Naufel discloses a retriever-augmented generation (RAG) model (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].); retrieving from one or more databases (extracting data from the database, see at least [0579].); the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course (Computing device may obtain new student learner data. For example, processing circuitry of computing device may obtain, by the processing circuitry, new student learner data about a new student learner subscribed to the educational content provided by the learning platform. See at least [0573]. Educator profiles, see at least [0362]. User profiles, see at least [0296].). 
From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a retriever-augmented generation model, as taught by Naufel, and to modify the retrieving of data of Wolochow to retrieve from a database, as taught by Naufel, and to modify the data of Wolochow to comprise the instructor profile and learner profile data as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]).

Regarding claim 2, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses the AI engine includes: a model configured to identify the user data that is contextually relevant to the request and synthesize the user data to determine the set of patterns for the user data (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025]. Prompting an LLM to perform a generative AI process for creating a refined list of course-level learning objectives. As described herein, the course-level learning objectives can be based on input received from the author, for example, information about the course topic, learning objectives input by the author, a target audience for the course, a target duration for the course, and content items provided by the author. See at least [0077]. See also FIG.
5B, step 510.); and a recommendation model configured to generate the set of recommendations based on the set of patterns (An LLM can be prompted to perform a generative AI process for creating a module-level outline for the course that includes a list of module titles and descriptions of the modules based on the learning objectives generated in stage 510 and based on content items provided by the author. See at least [0078]. See also FIG. 5B, step 512.). While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG). However, Naufel discloses a retriever-augmented generation (RAG) (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].). From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a RAG model, as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). 
Regarding claim 3, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses receive feedback data associated with the first set of learning course content (A backend process can organize the best content items for modules of the course into an overall course structure draft, and an LLM can be prompted to edit the course structure draft, for example, to flag the occurrence of duplicate or missing content items in the draft for a particular item-level learning objective or to flag the existence of any required content items uploaded by the author that are missing from the course structure draft. Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089] and FIG. 5C, step 538-540. A front-end process can elicit author feedback on the course draft. For example, the author may edit, add, or delete course content items that have been suggested for the course, and can edit, add, or delete module-level and item-level learning objectives for the course. Based on the feedback from the author, upstream steps of the workflow can be repeated to prepare a modified draft course structure. See at least [0090] and see also FIG. 5C, step 542.); generate, with the AI engine, a second course template for the course based on the feedback data; generate a second set of learning course content that adheres to the second course template for the course (Once the author is satisfied with the draft course structure and the content items associated with the draft course, an LLM can be prompted to generate metadata and/or additional content items for the course.
For example, the LLM can be prompted to generate a course title, an engaging description of the course and its modules and learning objectives, and keywords that can be associated with the course that may be used to respond to search queries by learners looking for a course about a particular topic or addressing one or more particular learning objectives. In addition, an LLM can be prompted to generate additional course materials, such as, for example, quizzes, coding exercises, problem sets, surveys, etc., based on the content items for the course. A front-end process can elicit author feedback on the final course draft, and once the author is satisfied with the final course, the workflow and the course content can be processed to create a course that fits the technical standards of the online learning platform. See at least [0091]-[0092].); transmit the second set of learning course content for display (The user interface can provide a summary of the content, the name of the source course and institution or educator for the content, the length of the content, and other such descriptive metadata. In addition, the user interface can enable the opportunity to review the content itself, for example, by clicking a link to play video content within the user interface or to display text content within the user interface. See at least [0070]. See also [0071]-[0072]. See also [0096]-[0097].).
Regarding claim 4, the combination of Wolochow and Naufel discloses the limitations of claim 3, as discussed above, and Wolochow further discloses the second course template is different from the first course template, and the first course template and the second course template comply with the same learning objective of the course (Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089].). Regarding claim 5, the combination of Wolochow and Naufel discloses the limitations of claim 3, as discussed above, and Wolochow further discloses the feedback data includes user data for a user of the course, the user data including at least one of data describing an interaction of the user with the first set of learning course content, a performance metric of the user, or qualitative feedback provided by the user (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.). While Wolochow discloses data of a user of the course, and while Wolochow discloses learner users (see [0031] of Wolochow, describing students using the modules), Wolochow does not expressly disclose the user is a learner user. However, Naufel discloses the user is a learner user (student learner using the learning application via a computing device, see at least [0375].).
From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the user of Wolochow to be a learner user, as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). Regarding claim 6, the combination of Wolochow and Naufel discloses the limitations of claim 3, as discussed above, and Wolochow further discloses the feedback data includes instructor user data for an instructor user of the course, the instructor user data including a preference of the instructor user (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.). 
Regarding claim 7, the combination of Wolochow and Naufel discloses the limitations of claim 3, as discussed above, and Wolochow further discloses the second course template includes additional course content not included in the first course template (Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089]. Adding additional material, see at least [0027].). Regarding claim 10, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses the user data includes a course criterion established by an instructor user of the course (Receiving course input data, see at least [0075] and FIG. 5A, step 501. Inputted data includes course objectives, see at least [0075] and [0077].), and wherein the processing system is configured to determine an impact of the course criterion on the one or more learner users of the course (A backend process can kick off a process for searching for existing course content within database repositories of an online learning platform based on information generated in upstream steps of the workflow. An LLM can be prompted to perform a generative AI process for creating item-level learning objectives based on course-level and module-level learning objectives generated in upstream steps of the workflow. See at least [0080]-[0081]. See also FIG. 5B, step 516-518.).
Regarding claim 11, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses the user data includes a recording of an instructor user of the course (Collecting input from an author, the input may include video uploads, see at least [0033]. Videos may be recordings of previous lectures of an author, see at least [0026].), and wherein the processing system is configured to generate the course template based on the recording and generate, on a personalized basis for the one or more learner users, the first set of learning course content based on the recording to emulate a teaching style of the instructor user (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025]. Prompting an LLM to perform a generative AI process for creating a refined list of course-level learning objectives. As described herein, the course-level learning objectives can be based on input received from the author, for example, information about the course topic, learning objectives input by the author, a target audience for the course, a target duration for the course, and content items provided by the author. See at least [0077]. See also FIG. 5B, step 510.). Regarding claim 12, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses when the request identifies a first learner user, the user data includes user data included in a learner user profile of the first learner user and the first course template is generated for the first learner user such that the first course template is personalized for the first learner user (Collecting input from an author, the input may include information about a target audience, see at least [0024] and [0046]-[0047], disclosing creating a course for a target audience, such as College Freshmen. 
Searching for items to add to the course based on the target audience, see at least [0061]. The Examiner interprets the target audience of a college freshman as a learner profile for a first user.). Regarding claim 13, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above, and Wolochow further discloses when the request identifies a group of learner users, the user data includes user data included in a plurality of learner user profiles for the group of learner users and the first course template is generated for the group of learner users such that the first course template is personalized for the group of learner users. (Collecting input from an author, the input may include information about a target audience, see at least [0024] and [0046]-[0047], disclosing creating a course for a target audience, such as College Freshmen. Searching for items to add to the course based on the target audience, see at least [0061]. The Examiner interprets the target audience of a college freshman as a learner profile for a group of learner users.). Regarding claim 14, Wolochow discloses a method of implementing adaptive artificial intelligence-based course template generation, the method comprising: retrieving, with a processing system including one or more electronic processors, using a model of an artificial intelligence (AI) engine, user data that is contextually relevant to a course (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025].), generating, using the AI engine, a first course template for the course based on the user data (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. 
Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.), wherein the first course template is personalized for the one or more learner users (Customizing the course, see at least [0074]. See also [0023] and [0028]. Customized courses presented for the students, see at least [0031].); receiving, with the processing system, while the course is in progress, feedback data associated with a first set of learning course content for the course (a backend process for kicking off (i.e., launching) a course builder job. A course builder progress store can store data about the progress of the course builder job, and a frontend tool can poll job progress data and process that data to provide information to the author through a user interface about progress of the workflow for creating the educational course. See at least [0076] and FIG. 5A. A front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. The Examiner interprets receiving the feedback data while the course builder is building a course as receiving while a course is in progress.), the first set of learning course content adhering to the first course template for the course (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules.
Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.), providing, with the processing system, the feedback data to the AI engine in order to determine a recommended course template modification (A backend process can organize the best content items for modules of the course into an overall course structure draft, an LLM can be prompted to edit the course structure draft, for example, to flag the occurrence of duplicate or missing content items in the draft for a particular item-level learning objective or to flag the existence of any required content items uploaded by the author that are missing from the course structure draft. Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089] and FIG. 5C, step 538-540.); generating, with the processing system, using the AI engine, a second course template for the course based on the recommended course template modification (a front-end process can elicit author feedback on the course draft. For example, the author may edit, add, or delete course content items that have been suggested for the course, and can edit, add, or delete module-level and item-level learning objectives for the course. Based on the feedback from the author, upstream steps of the workflow can be repeated to prepare a modified draft course structure. See at least [0090] and see also FIG.
5C, step 542.); generating, with the processing system, a second set of learning course content that adheres to the second course template for the course (Once the author is satisfied with the draft course structure and the content items associated with the draft course, an LLM can be prompted to generate metadata and/or additional content items for the course. For example, the LLM can be prompted to generate a course title, an engaging description of the course and its modules and learning objectives, and keywords that can be associated with the course that may be used to respond to search queries by learners looking for a course about a particular topic or addressing one or more particular learning objectives. In addition, an LLM can be prompted to generate additional course materials, such as, for example, quizzes, coding exercises, problem sets, surveys, etc., based on the content items for the course. A front-end process can elicit author feedback on the final course draft, and once the author is satisfied with the final course, the workflow and the course content can be processed to create a course that fits the technical standards of the online learning platform. See at least [0091]-[0092].); transmitting, with the processing system via a communication network, the second set of learning course content to a client device for display as a learning course content rendering via a graphical user interface (the user interface can provide a summary of the content, the name of the source course and institution or educator for the content, the length of the content, and other such descriptive metadata. In addition, the user interface can enable the opportunity to review the content itself, for example, by clicking a link to play video content within the user interface or to display text content within the user interface. See at least [0070]. See also [0071]-[0072]. See also [0096]-[0097].).
While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG) model. Furthermore, while Wolochow discloses retrieving data, Wolochow does not expressly disclose retrieving from one or more databases. Furthermore, while Wolochow discloses retrieved user data, Wolochow does not expressly disclose the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course. However, Naufel discloses a retriever-augmented generation (RAG) model (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].); retrieving from one or more databases (extracting data from the database, see at least [0579].); the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course (Computing device may obtain new student learner data. For example, processing circuitry of computing device may obtain, by the processing circuitry, new student learner data about a new student learner subscribed to the educational content provided by the learning platform. See at least [0573]. Educator profiles, see at least [0362]. User profiles, see at least [0296].). 
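For context only, the retrieval-augmented generation (RAG) concept at issue reduces to two steps: retrieve contextually relevant records (e.g., instructor and learner profiles) from a store, then augment the generation prompt with what was retrieved. The following minimal sketch uses a toy keyword-overlap retriever; every identifier and the profile data are hypothetical illustrations, not drawn from Wolochow, Naufel, or the application.

```python
# Illustrative sketch only: RAG reduced to retrieve-then-augment. A real
# system would use embeddings and a vector database; naive keyword overlap
# is a deliberate stand-in to keep the example self-contained.
def retrieve(query: str, records: list[dict], k: int = 2) -> list[dict]:
    """Rank stored profile records by keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        records,
        key=lambda r: len(terms & set(r["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, retrieved: list[dict]) -> str:
    """Augment the generation prompt with the retrieved profile data."""
    context = "\n".join(r["text"] for r in retrieved)
    return f"Context:\n{context}\n\nTask: {query}"


# Hypothetical instructor and learner profiles standing in for a database.
profiles = [
    {"id": "instructor-1", "text": "instructor prefers project-based modules"},
    {"id": "learner-1", "text": "learner struggles with statistics modules"},
    {"id": "learner-2", "text": "learner completed calculus prerequisites"},
]
prompt = build_prompt(
    "draft a statistics course template",
    retrieve("statistics modules", profiles),
)
```

The retrieved profiles become part of the prompt, so the downstream generation step is conditioned on stored user data rather than on the query alone, which is the distinction the rejection turns on.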
From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a retriever-augmented generation model, as taught by Naufel, and to modify the retrieving of data of Wolochow to retrieve from a database, as taught by Naufel, and to modify the data of Wolochow to comprise the instructor profile and learner profile data as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). Regarding claim 15, the combination of Wolochow and Naufel discloses the limitations of claim 14, as discussed above, and Wolochow further discloses synthesizing, with the model of the AI engine, the user data (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025]. Prompting an LLM to perform a generative AI process for creating a refined list of course-level learning objectives. As described herein, the course-level learning objectives can be based on input received from the author, for example, information about the course topic, learning objectives input by the author, a target audience for the course, a target duration for the course, and content items provided by the author. See at least [0077]. See also FIG. 
5B, step 510.); determining, with the model of the AI engine, a set of patterns for the user data (An LLM can be prompted to perform a generative AI process for creating a module-level outline for the course that includes a list of module titles and descriptions of the modules based on the learning objectives generated in stage 510 and based on content items provided by the author. See at least [0078]. See also FIG. 5B, step 512.); and generating, with a recommendation model, a set of recommendations based on the set of patterns, the set of recommendations including the recommended course template modification (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.). While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG) model. However, Naufel discloses a retriever-augmented generation (RAG) model (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].).
From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a RAG model, as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). Regarding claim 16, the combination of Wolochow and Naufel discloses the limitations of claim 14, as discussed above, and Wolochow further discloses the second course template is different from the first course template, wherein the first course template and the second course template comply with a course criterion established by an instructor user of the course (Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089].). Regarding claim 17, Wolochow discloses a non-transitory, computer-readable medium storing instructions that, when executed by a processing system including one or more electronic processors, perform a set of functions, the set of functions comprising (see at least [0093]-[0095]): receiving a request to generate a first course template for a course (Receiving course input data, see at least [0075] and FIG. 5A, step 501. 
Prompting to generate a course, see at least [0014].); retrieving, with a model of an artificial intelligence (AI) engine, user data that is contextually relevant to the request (Identifying inputted data entered by the user, see at least [0075]-[0076]. AI engine, see at least [0023] and [0025].), generating, using the AI engine, the first course template for the course (a backend process for kicking off (i.e., launching) a course builder job. A course builder progress store can store data about the progress of the course builder job, and a frontend tool can poll job progress data and process that data to provide information to the author through a user interface about progress of the workflow for creating the educational course. See at least [0076] and FIG. 5A. A front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079].), the first course template identifying a first set of learning course content that adheres to the first course template for the course (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.), wherein the first course template is personalized for the one or more learner users (Customizing the course, see at least [0074]. See also [0023] and [0028].
Customized courses presented for the students, see at least [0031].); transmitting the first set of learning course content for display as a learning course content rendering via a graphical user interface (the user interface can provide a summary of the content, the name of the source course and institution or educator for the content, the length of the content, and other such descriptive metadata. In addition, the user interface can enable the opportunity to review the content itself, for example, by clicking a link to play video content within the user interface or to display text content within the user interface. See at least [0070]. See also [0071]-[0072]. See also [0096]-[0097].); receiving feedback data associated with the first set of learning course content (A backend process can organize the best content items for modules of the course into an overall course structure draft, an LLM can be prompted to edit the course structure draft, for example, to flag the occurrence of duplicate or missing content items in the draft for a particular item-level learning objective or to flag the existence of any required content items uploaded by the author that are missing from the course structure draft. Thus, based on the availability of course content items responsive to the initial course structure draft, the LLM can be prompted to modify the course structure draft to best achieve the goals of the author, given the course content items that are available. Then, based on the modified course structure draft, other upstream steps of the workflow can be rerun. See at least [0089] and FIG. 5C, step 538-540. A front-end process can elicit author feedback on the course draft. For example, the author may edit, add, or delete course content items that have been suggested for the course, and can edit, add, or delete module-level and item-level learning objectives for the course.
Based on the feedback from the author, upstream steps of the workflow can be repeated to prepare a modified draft course structure. See at least [0090] and see also FIG. 5C, step 542.); generating, with the AI engine, a second course template for the course based on the feedback data, the second course template identifying a second set of learning course content that adheres to the second course template for the course (Once the author is satisfied with the draft course structure and the content items associated with the draft course, an LLM can be prompted to generate metadata and/or additional content items for the course. For example, the LLM can be prompted to generate a course title, an engaging description of the course and its modules and learning objectives, and keywords that can be associated with the course that may be used to respond to search queries by learners looking for a course about a particular topic or addressing one or more particular learning objectives. In addition, an LLM can be prompted to generate additional course materials, such as, for example, quizzes, coding exercises, problem sets, surveys, etc., based on the content items for the course. A front-end process can elicit author feedback on the final course draft, and once the author is satisfied with the final course, the workflow and the course content can be processed to create a course that fits the technical standards of the online learning platform. See at least [0091]-[0092].); transmitting the second set of learning course content for display (the user interface can provide a summary of the content, the name of the source course and institution or educator for the content, the length of the content, and other such descriptive metadata. In addition, the user interface can enable the opportunity to review the content itself, for example, by clicking a link to play video content within the user interface or to display text content within the user interface. See at least [0070].
See also [0071]-[0072]. See also [0096]-[0097].). While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG) model. Furthermore, while Wolochow discloses retrieving data, Wolochow does not expressly disclose retrieving from one or more databases. Furthermore, while Wolochow discloses retrieved user data, Wolochow does not expressly disclose the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course. However, Naufel discloses a retriever-augmented generation (RAG) model (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].); retrieving from one or more databases (extracting data from the database, see at least [0579].); the data comprises one or more instructor user profiles and one or more learner user profiles associated with one or more learner users of the course (Computing device may obtain new student learner data. For example, processing circuitry of computing device may obtain, by the processing circuitry, new student learner data about a new student learner subscribed to the educational content provided by the learning platform. See at least [0573]. Educator profiles, see at least [0362]. User profiles, see at least [0296].). 
From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a retriever-augmented generation model, as taught by Naufel, and to modify the retrieving of data of Wolochow to retrieve from a database, as taught by Naufel, and to modify the data of Wolochow to comprise the instructor profile and learner profile data as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). Regarding claim 18, the combination of Wolochow and Naufel discloses the limitations of claim 17, as discussed above, and Wolochow further discloses generating the first course template for the course is based on: synthesizing, with the model, the user data to determine a set of patterns for the user data (An LLM can be prompted to perform a generative AI process for creating a module-level outline for the course that includes a list of module titles and descriptions of the modules based on the learning objectives generated in stage 510 and based on content items provided by the author. See at least [0078]. See also FIG. 5B, step 512.); and generating, with a recommendation model of the AI engine, a set of recommendations based on the set of patterns (a front-end process of the content generation module can elicit feedback from the author on the module-level outline for the course. 
For example, the author may edit the titles and descriptions of the modules and/or the learning objectives for the modules. Based on the feedback from the author, upstream steps of the workflow can be repeated to refine the module-level outline for the course. See at least [0079]. See also FIG. 5B, step 514.). While Wolochow discloses a model, Wolochow does not expressly disclose a retriever-augmented generation (RAG) model. However, Naufel discloses a retriever-augmented generation (RAG) model (model system may leverage an enhanced Retrieval-Augmented Generation (RAG) process to dynamically query, retrieve, and visualize data, facilitating both breadth and depth in data exploration and insights generation, with the capability to address complex user queries that benefit from understanding the interplay between different data points and their attributes. See at least [0077].). From the teaching of Naufel, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the model of Wolochow to be a RAG model, as taught by Naufel, in order to address complex user queries that benefit from understanding the interplay between different data points and their attributes (see Naufel at least at [0077]), and in order to provide a platform having functionality for dynamically tailoring learning content, pathways, and experiences for individual learners, thus providing a more effective and engaging educational experience (see Naufel at least at [0035]), and in order to improve comprehensive personalization, accessibility, and scalability of addressing learning styles of students (see Naufel at least at [0036]-[0038]). Claim 20 recites limitations similar to those found in claim 16 above, and is therefore rejected under the same art and rationale. Claims 8-9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wolochow in view of Naufel, and in further view of US 20230306859 A1 (“Foias”).
Regarding claim 8, the combination of Wolochow and Naufel discloses the limitations of claim 3, as discussed above. Wolochow does not expressly disclose transmit the first set of learning course content to a first client device of a first learner user and a second client device of a second learner user, and, when the feedback data indicates that the second learner user achieved a performance metric below a performance threshold, transmit the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content.

However, Foias discloses transmit the first set of learning course content to a first client device of a first learner user and a second client device of a second learner user (multiple students using a computer to access online course, see at least [0018]-[0022] and [0067].), and, when the feedback data indicates that the second learner user achieved a performance metric below a performance threshold, transmit the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content (Receiving data indicating that user has failed quiz, and if user has failed quiz then require user to watch video. See at least [0021]-[0023]. See also [0055] and [0069].).
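The pass/fail routing that Foias is cited for can be sketched as a simple conditional check. This is an illustrative sketch only: the function name, the content-set labels, and the 0.70 threshold are hypothetical and do not appear in the record.

```python
# Hypothetical sketch of the threshold-based routing Foias is cited for:
# every learner receives the first content set; a learner whose performance
# metric falls below the threshold additionally receives the second set,
# which includes supplemental course content.

PERFORMANCE_THRESHOLD = 0.70  # assumed passing score; not from the record

def select_content(performance_metric: float) -> list[str]:
    """Return the content sets to transmit to a learner's client device."""
    content = ["first_set"]  # baseline course content sent to every learner
    if performance_metric < PERFORMANCE_THRESHOLD:
        # feedback data indicates a below-threshold metric (e.g., a failed
        # quiz), so the supplemental second set is also transmitted
        content.append("second_set_with_supplemental")
    return content
```

A first learner scoring above the threshold would receive only the first set, while a below-threshold second learner would also receive the supplemental set, mirroring the failed-quiz-then-required-video behavior cited at Foias [0021]-[0023].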
From the teaching of Foias, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wolochow to transmit the learning course content to the client devices of learners, as taught by Foias, and to modify the feedback data of Wolochow to, when the data indicates that the second learner user achieved a performance metric below a performance threshold, transmit the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content, as taught by Foias, in order to improve the learning experience for students (see Foias at least at [0003] and [0014]).

Regarding claim 9, the combination of Wolochow and Naufel discloses the limitations of claim 1, as discussed above. Wolochow does not expressly disclose develop and maintain the one or more learner user profiles, each learner user profile of the one or more learner user profiles being specific to a specific learner user and including at least one of learner user interaction data, performance metric data, or qualitative feedback data.

However, Foias discloses develop and maintain the one or more learner user profiles, each learner user profile of the one or more learner user profiles being specific to a specific learner user and including at least one of learner user interaction data, performance metric data, or qualitative feedback data (storing student data in a database for each student, see at least [0074]-[0075]; see also [0021]-[0023] and [0071]-[0077], generally describing each student with specific coursework depending on that student's quiz grade.).
From the teaching of Foias, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wolochow to develop and maintain a plurality of learner user profiles, each learner user profile of the plurality of learner user profiles being specific to a specific learner user and including at least one of learner user interaction data, performance metric data, or qualitative feedback data, as taught by Foias, in order to improve the learning experience for students (see Foias at least at [0003] and [0014]).

Regarding claim 19, the combination of Wolochow and Naufel discloses the limitations of claim 17, as discussed above. Wolochow does not expressly disclose transmitting the first set of learning course content includes transmitting the first set of learning course content to a first client device of a first learner user of the course and a second client device of a second learner user of the course, and wherein transmitting the second set of learning course content includes, when the feedback data indicates that the second learner user achieved a performance metric below a performance threshold, transmitting the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content.
However, Foias discloses transmitting the first set of learning course content includes transmitting the first set of learning course content to a first client device of a first learner user of the course and a second client device of a second learner user of the course (multiple students using a computer to access online course, see at least [0018]-[0022] and [0067].), and wherein transmitting the second set of learning course content includes, when the feedback data indicates that the second learner user achieved a performance metric below a performance threshold, transmitting the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content (Receiving data indicating that user has failed quiz, and if user has failed quiz then require user to watch video. See at least [0021]-[0023]. See also [0055] and [0069].).

From the teaching of Foias, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wolochow to transmit the learning course content to the client devices of learners, as taught by Foias, and to modify the feedback data of Wolochow to, when the data indicates that the second learner user achieved a performance metric below a performance threshold, transmit the second set of learning course content to the second client device of the second learner user, wherein the second set of learning course content includes supplemental course content, as taught by Foias, in order to improve the learning experience for students (see Foias at least at [0003] and [0014]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAVEN E YONO, whose telephone number is (313) 446-6606. The examiner can normally be reached Monday - Friday, 8-5 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bennett M Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAVEN E YONO/
Primary Examiner, Art Unit 3694

Prosecution Timeline

Jun 05, 2024
Application Filed
Jul 21, 2025
Non-Final Rejection — §101, §103
Dec 08, 2025
Response Filed
Jan 06, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548022: SYSTEMS AND METHODS FOR EXECUTING REAL-TIME ELECTRONIC TRANSACTIONS USING API CALLS
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12518276: SYSTEMS AND METHODS FOR SECURE TRANSACTION REVERSAL
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12511637: METHOD, APPARATUS, AND DEVICE FOR ACCESSING AGGREGATION CODE PAYMENT PAGE, AND MEDIUM
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12489647: SECURELY PROCESSING A CONTINGENT ACTION TOKEN
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12481992: AUTHENTICATING A TRANSACTION
Granted Nov 25, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
39%
Grant Probability
72%
With Interview (+32.5%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
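The headline projection figures can be reproduced from the stated career data with simple arithmetic. The sketch below is illustrative; the assumption that the interview lift is additive in percentage points is inferred from the displayed figures, not stated by the tool.

```python
# Reproduce the projection figures from the examiner's stated career data:
# 69 granted out of 175 resolved cases, with a +32.5 point interview lift.
granted, resolved = 69, 175
career_allow_rate = granted / resolved        # ~0.394 -> "39% Grant Probability"
interview_lift = 0.325                        # +32.5 points (assumed additive)
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 39
print(round(with_interview * 100))     # 72
```

Both rounded values match the "39% Grant Probability" and "72% With Interview" figures shown above, which supports reading the lift as additive rather than multiplicative.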
