Prosecution Insights
Last updated: April 19, 2026
Application No. 18/989,360

METHOD AND SYSTEM FOR COURSE ASSESSMENT IN A LEARNING MANAGEMENT SYSTEM

Final Rejection (§101, §103)
Filed
Dec 20, 2024
Examiner
SIMPSON, DIONE N
Art Unit
3628
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
D2L Corporation
OA Round
2 (Final)
Grant Probability: 34% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
Grant Probability With Interview: 68%

Examiner Intelligence

Career Allow Rate: 34% (81 granted / 242 resolved; -18.5% vs TC avg)
Interview Lift: +35.0% among resolved cases with interview
Avg Prosecution (typical timeline): 3y 4m
Total Applications: 302 across all art units (60 currently pending)

Statute-Specific Performance

§101: 40.9% (+0.9% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Comparison baseline is a Tech Center average estimate. Based on career data from 242 resolved cases.
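The per-statute deltas above imply a single Tech Center baseline. A quick sanity check (a hypothetical sketch; the figures are taken from the table, and the reconstruction formula `TC average = examiner rate - delta` is an assumption about how the deltas were computed):

```python
# Statute-specific allowance rates and deltas vs TC average, from the table above.
stats = {
    "101": (40.9, +0.9),
    "103": (33.0, -7.0),
    "102": (9.8, -30.2),
    "112": (15.2, -24.8),
}

# Assumption: delta = examiner rate - TC average, so the implied
# TC average for each statute is rate - delta.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

Every row implies the same 40.0% baseline, consistent with a single Tech Center average estimate underlying all four comparisons.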

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1, 5, and 8 are amended. Claim 2 is canceled. Claims 9-13 are added as new claims. Claims 1 and 3-13 are pending.

Response to Arguments

Applicant's arguments, see pg. 7, filed 02/02/2026, with respect to the objection to the Specification have been fully considered and are persuasive. The objection to the Specification has been withdrawn. Applicant's arguments, see pg. 7, filed 02/02/2026, with respect to the claim objection have been fully considered and are persuasive. The claim objection has been withdrawn. Applicant's arguments, see pg. 8, filed 02/02/2026, with respect to 35 U.S.C. 112(a) and subsequent 35 U.S.C. 112(b) have been fully considered and are persuasive. The 35 U.S.C. 112(a) and 35 U.S.C. 112(b) rejections have been withdrawn.

Applicant's arguments filed 02/02/2026 regarding 35 U.S.C. 101 have been fully considered, but they are not persuasive. Applicant argues that the claims are not directed to an abstract idea because the claims go beyond the abstract idea and do not only recite matter that falls within the enumerated groupings of abstract ideas. Examiner disagrees. It appears that applicant concedes that the claims recite an abstract idea, as the argument indicates only that the claims do not solely recite an abstract idea. Under Step 2A Prong One, the evaluation is whether an abstract idea is set forth or described in the claim.
Regarding applicant's independent claims, for example, the invention and claim limitations are drawn towards course assessment by multiple assessors in a learning management system, and the claim limitations directly correspond to certain methods of organizing human activity (managing personal behavior, interactions, relationships; following rules or instructions), as evidenced by limitations relating to assessing learners in a learning and/or educational environment: reviewing assessments for issues, returning assessments for additional review when issues are found, and providing the assessment to each learner. The claims also correspond to mental processes (observation, evaluation, judgment, opinion), as evidenced by limitations detailing the evaluation or observation of learners and making a judgment/opinion based on the evaluation or observation: reviewing assessments for issues and, when issues are found, returning the assessments for additional review; aggregating the assessments for each learner. The claims recite an abstract idea.

Further, the Federal Circuit has explained that "the 'directed to' inquiry applies a stage-one filter to claims, considered in light of the specification, based on whether 'their character as a whole is directed to excluded subject matter.'" Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016) (quoting Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1346 (Fed. Cir. 2015)). It asks whether the focus of the claims is on a specific improvement in relevant technology or on a process that itself qualifies as an "abstract idea" for which computers are invoked merely as a tool. Here, it is clear from the Specification (including the claim language) that claims 1 and 8 focus on an abstract idea, and not on an improvement to technology and/or a technical field.
In addition to the claim limitations identified that correspond to the judicial exception, applicant's specification recites:

[0002] Learning management systems ("LMS") are becoming more popular for delivery of educational material in many different situations, whether in conventional areas like public/private educational institutions all the way through to corporations providing internal training to their employees. Some LMSs merely track student registration and progress while others deliver course content and materials directly to students.

[0003] With the rapid increase of LMSs and the organizations that use them and provide educational content, there is also an increase in the number of learners/students that may be taking a particular course. In some cases, the number of learners may be in the tens of thousands. In such large classes, it can be difficult for an instructor to provide an assessment for each learner in an efficient, fair and effective manner. While instructors have traditionally used teaching assistants, multiple choice (computer graded) testing, and the like, these techniques can have problems in relation to consistency, true assessment of capability, and the like.

[0004] In other situations, generally when a class is smaller, there may be multiple assessors assigned so that differing viewpoints or perspectives can be provided to an individual learner. In this situation, there can sometimes be a conflict between/among any feedback that each assessor is providing. As such, there is a need for an improved system and method for course assessment by multiple assessors in a learning management system.

The specification makes it clear that the alleged improvement is an improvement in the judicial exception itself (certain methods of organizing human activity and mental processes) and not an improvement in computers or technology.
It is important to keep in mind that an improvement in the judicial exception itself (e.g., a recited fundamental economic concept) is not an improvement in technology (emphasis added). For example, in Trading Technologies Int'l v. IBG LLC, the court determined that the claim simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology. Similarly, the Applicant's claim recitations are an improvement in the judicial exception, not an improvement in technology.

Examiner rejects applicant's assertion that the claim limitations cannot be performed in the human mind when the entirety of the claims describes the observation and evaluation of data (assessments). Additionally, claims can recite a mental process even if they are claimed as being performed on a computer. If the claimed invention is described as a concept that is performed in the human mind, and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) merely using a computer as a tool to perform the concept, the claim is considered to recite a mental process. This is the case in the applicant's invention. For instance, claim 8 recites generic computing components used to perform the analyzing and observation of the data or assessments.

Applicant's argument that implementing the elements in a conventional system would not be possible in any practical way without the teachings of the present invention is unpersuasive. Applicant's specification, along with the prior art of record, indicates that the limitations could be implemented, and the use of a computer system to automate the steps merely results in user convenience or efficiency.
"Claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept”. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015); see also MPEP ¶2106.05(f). Applicant further argues under Step 2B that the claims amount to significantly more than the judicial exception because their claims are a technology rooted solution addressed by a novel approach. Examiner disagrees. Examiner notes that The court in DDR Holdings observed that the “claimed solution [was] necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks.” DDR Holdings, 773 F.3d at 1259. The claims in DDR Holdings addressed the problem of retaining website visitors, that if adhering to the routine, conventional functioning of the Internet hyperlink protocol, would be instantly transported away from a host’s website after “clicking” on an advertisement and activating a hyperlink. The invention and claims of DDR Holdings were deemed patent eligible because, regardless of what abstract idea it may have been directed towards, it nonetheless represented a solution “necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks” – it was not deemed patent eligible merely because it recited a computer-based solution in a particular field of industry. Applicant’s claims are not similar to that of DDR Holdings. Applicant’s claims at best recites a computer-based solution in a particular field of industry. Applicant’s argument relating to novelty is unpersuasive and does not fit in this analysis. The search for an inventive concept should not be confused with a novelty or non-obviousness determination. 
As made clear by the courts, the "'novelty' of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter." Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1315, 120 USPQ2d 1353, 1358 (Fed. Cir. 2016). Specifically, lack of novelty under 35 U.S.C. 102 or obviousness under 35 U.S.C. 103 of a claimed invention does not necessarily indicate that additional elements are well-understood, routine, conventional elements. Because they are separate and distinct requirements from eligibility, patentability of the claimed invention under 35 U.S.C. 102 and 103 with respect to the prior art is neither required for, nor a guarantee of, patent eligibility under 35 U.S.C. 101 (see also MPEP § 2106.05). The 35 U.S.C. 101 rejection is maintained.

Applicant's arguments with respect to 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Examiner also notes that applicant's argument with respect to the Luca reference is unpersuasive. The claims are broadly written, and under broadest reasonable interpretation, Luca reads on the claim set. Further, Luca's consistency checks are also conducted to ensure alignments among learning goals, learning expectations, learning assessment forms or rubrics, learning input or delivery, assignments, assessments, learning indexes, and the like, as indicated in the cited portions of Luca. Luca also discloses monitoring learning to identify issues as they occur.
In response to applicant's argument that Miller is nonanalogous art, it has been held that a prior art reference must either be in the field of the inventor's endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). In this case, Miller is in the field of the inventor's endeavor and reasonably pertinent to the particular problem with which the inventor is concerned, since the system of Miller also analyzes and converts said performance assessment data. Further, Miller reads on the claims as drafted.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 3-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. Claims 1 and 3-7 recite a method (i.e., process), and claims 8-13 recite a system (i.e., machine). Therefore, claims 1 and 3-13 fall within one of the four statutory categories of invention.
Independent claims 1 and 8 recite the limitations of: receiving assessments from a plurality of assessors for a plurality of learners; reviewing the assessments for any issues in the assessments themselves; when issues are found, returning the assessments for additional review prior to continuing the method; aggregating the assessments to provide an aggregated assessment for each learner of the plurality of learners; reviewing the aggregated assessment for any aggregated issues in the aggregated assessment itself; when aggregated issues are found, returning the aggregated assessments for additional review, prior to continuing the method; and providing the aggregated assessment to each learner of the plurality of learners.

The invention and claim limitations are drawn towards course assessment by multiple assessors in a learning management system, and the claim limitations directly correspond to certain methods of organizing human activity (managing personal behavior, interactions, relationships; following rules or instructions), as evidenced by limitations relating to assessing learners in a learning and/or educational environment: reviewing assessments for issues, returning assessments for additional review when issues are found, and providing the assessment to each learner. The claims also correspond to mental processes (observation, evaluation, judgment, opinion), as evidenced by limitations detailing the evaluation or observation of learners and making a judgment/opinion based on the evaluation or observation: reviewing assessments for issues and, when issues are found, returning the assessments for additional review; aggregating the assessments for each learner. The claims recite an abstract idea. The judicial exception is not integrated into a practical application simply because the claims recite the additional elements of: a learning management system, a processor (claim 8), and memory (claim 8).
The additional elements are computer components recited at a high level of generality performing the above-mentioned limitations. The combination of the additional elements is no more than mere instructions to apply the judicial exception using a generic computer. Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Also note that claim 1 presents no additional elements to consider. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer. Mere instructions to apply an exception using a generic computer cannot provide an inventive concept. Thus, when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible.

Dependent claims 6 and 12 recite the limitation that the automatically combining comprises submitting the document to an [AI agent] for automatic processing into the single assessment. The claims are further directed to the abstract idea analyzed above. The claims also recite the additional element of an AI agent. The additional element amounts to "apply it," or merely using a computer as a tool to implement the judicial exception, and generally linking the judicial exception to a particular field of use (course assessment). Further, when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible.
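For orientation only, the claim 1 workflow recited above (receive, review and return while issues remain, aggregate per learner, review the aggregate the same way, then provide) can be sketched as a pair of review loops. This is a hypothetical illustration; every function and variable name here is invented for the sketch, not drawn from the application:

```python
from statistics import mean

def run_assessment(assessments, find_issues, fix, combine):
    # Review the individual assessments; return them for additional
    # review (here: fix) while issues remain, as the claim recites.
    while (issues := find_issues(assessments)):
        assessments = fix(assessments, issues)
    # Aggregate the reviewed assessments per learner.
    aggregated = combine(assessments)
    # Review the aggregated assessment the same way before providing it.
    while (issues := find_issues(aggregated)):
        aggregated = fix(aggregated, issues)
    return aggregated

# Toy data: scores per learner from three assessors; None marks an issue.
scores = {"alice": [80, None, 90], "bob": [70, 75, 72]}

def find_missing(d):
    # Flag any missing (None) entries as issues.
    return [(k, i) for k, v in d.items()
            for i, s in enumerate(v if isinstance(v, list) else [v])
            if s is None]

def fill_zero(d, issues):
    # Stand-in for "additional review": score missing entries as 0.
    for k, i in issues:
        d[k][i] = 0
    return d

combine = lambda d: {k: round(mean(v), 1) for k, v in d.items()}

final = run_assessment(scores, find_missing, fill_zero, combine)
```

The sketch only illustrates the control flow the limitations describe; the actual review criteria ("completeness, conflicting results, and unusual wording" per claim 3) would replace the toy `find_missing`/`fill_zero` stand-ins.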
Dependent claims 3-5, 7, 9-11, and 13 recite additional limitations that are further directed to the abstract idea analyzed in the rejected claims above. The claims also recite additional elements that have been analyzed in the rejected claims above. Thus, claims 3-5, 7, 9-11, and 13 are also rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-5, and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Luca (2018/0130154) in view of Sirhani (2023/0344852).

Claim 1: Luca discloses: A method for course assessment in a learning management system, the method comprising:

receiving assessments from a plurality of assessors for a plurality of learners; (Luca ¶0008 disclosing receiving learning assessment data from a plurality of learning assessors; ¶0010 disclosing learning indexes are generated for a plurality of individual learners)

aggregating the assessments to provide an aggregated assessment for each learner of the plurality of learners; (Luca ¶0055 disclosing learning indexes are first calculated at the individual level of the learning output unit; they can be calculated at all configurations afterwards by "rolling up" or aggregating learning index data; ¶0092 one or more objective learning assessment results may be combined into a plurality of learning indexes; ¶0094 disclosing providing and using objective learning assessment criteria, assessing learning outcomes based on learning goals and/or learning expectations and aggregating the results; ¶0124 learning indexes may be aggregated and compounded at any desired configurations, using weights, formulas and/or algorithms, and may be calculated per grading unit, per multiple unit of learner across multiple levels and units of learning, or per multiple units of learner across multiple levels and units of learning (or for any combination of these); learning indexes may comprise totals (absolute amount) of learning achieved or accomplished, or percentages achieved, and as grand totals, as well as measures of missed learning (gaps); ¶0128 the data obtained including assessor assessment inputs at the level of individual learning output, and then aggregate learning indexes may be computed and added)

providing the aggregated assessment to each learner of the plurality of learners. (Luca ¶0092 disclosing the individual output learning indexes per established learning goals categories are calculated by the system; the system performs calculations based on formulae to compound, aggregate, and weight learning indexes; one or more learning outcome reports may be generated; learning outcomes may be processed automatically in order to provide feedback to one or more learning stakeholders, e.g., reports might be sent to students; ¶0127 disclosing learning assessment reports may be delivered to learners and/or to groups of learners)

Luca in view of Sirhani discloses: reviewing the assessments for any issues in the assessments themselves; when issues are found, returning the assessments for additional review, prior to continuing the method;

Luca discloses reviewing the assessments for any issues in the assessments themselves, and when issues are found, taking corrective actions prior to continuing the method: (Luca ¶0085 carrying out assessments or evaluations of learning output, using assessment forms, records, rubrics, and the like, calculating individual output level learning indexes, etc.; monitoring learning to identify issues as they occur; performing consistency checks to ensure that goals and expectations are in alignment; ¶0121 disclosing analysis engine may perform textual analysis of a learner's output to identify spelling and grammar errors, etc.; consistency checks may be performed in step 1040; consistency checks among learning assessment forms or rubrics and learning goals and learning expectations may be automatically conducted by or at the request of learning stakeholders, or learning agencies and agents; ¶0119 also discloses that when consistency checks fail, corrective steps may be taken as in step 760, and the process may loop back to step 810 or another step, depending on the nature and extent of consistency check failure; ¶0124 disclosing consistency checks may be performed in step 1150, and corrective actions may be taken as required by returning to affected prior steps to correct deficiencies in data consistency; consistency checks can be conducted to ensure alignments among learning goals, learning expectations, learning assessment forms or rubrics, learning input or delivery, assignments, assessments, learning indexes, and the like, by learning stakeholders, learning agencies and agents).

While it is strongly implied that Luca returns the assessments for additional review, since Luca discloses the process may loop back to previous steps when corrective steps are taken, the limitation of returning the assessments for additional review is not explicitly disclosed.
Sirhani suggests or discloses this limitation/concept: (Sirhani ¶0048 assessment platform is further configured (e.g., by code) to receive the completed questionnaires and corresponding uploaded artifacts from the SME and send them to corresponding members of the core team for review; core team validates and downloads the submitted artifacts and completed questionnaires from the SMEs; invalid questionnaires (e.g., incomplete, unsubstantiated, and the like) and invalid artifacts are returned to the corresponding SMEs for further processing; the assessment platform is further configured to receive the invalid questionnaire responses and artifacts from their corresponding core team members, and send them back to their corresponding SMEs for further processing; this process is repeated until the core team receives and validates all of the completed questionnaires and corresponding artifacts).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Luca to include reviewing the assessments for any issues in the assessments themselves and, when issues are found, returning the assessments for additional review, prior to continuing the method, as taught by Sirhani, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately; one of ordinary skill in the art would have recognized that the results of the combination were predictable.
reviewing the aggregated assessment for any aggregated issues in the aggregated assessment itself; when aggregated issues are found, returning the aggregated assessments for additional review, prior to continuing the method; and

Luca discloses reviewing the aggregated assessment for any aggregated issues in the aggregated assessment, and when issues are found, taking corrective actions prior to continuing the method: (Luca ¶0085 carrying out assessments or evaluations of learning output, using assessment forms, records, rubrics, … output level learning indexes; monitoring learning to identify issues as they occur; performing consistency checks to ensure that goals and expectations are in alignment; ¶0121 disclosing analysis engine may perform textual analysis of a learner's output to identify spelling and grammar errors, etc.; consistency checks may be performed in step 1040; consistency checks among learning assessment forms or rubrics and learning goals and learning expectations may be automatically conducted by or at the request of learning stakeholders, or learning agencies and agents; ¶0124 disclosing consistency checks may be performed in step 1150, and corrective actions may be taken as required by returning to affected prior steps to correct deficiencies in data consistency; consistency checks can be conducted to ensure alignments among learning goals, learning expectations, learning assessment forms or rubrics, learning input or delivery, assignments, assessments, learning indexes, and the like, by learning stakeholders, learning agencies and agents; ¶0119 also discloses that when consistency checks fail, corrective steps may be taken as in step 760, and the process may loop back to step 810 or another step, depending on the nature and extent of consistency check failure; ¶0094 providing and using objective learning assessment criteria, assessing learning outcomes based on learning goals and/or learning expectations, aggregating the results, and then reporting on and analyzing the results).

While it is strongly implied that Luca returns the assessments for additional review, since Luca discloses the process may loop back to previous steps when corrective steps are taken, the limitation of returning the aggregated assessments for additional review is not explicitly disclosed.

Sirhani suggests or discloses this limitation/concept: (Sirhani ¶0048 assessment platform is further configured (e.g., by code) to receive the completed questionnaires and corresponding uploaded artifacts from the SME and send them to corresponding members of the core team for review; core team validates and downloads the submitted artifacts and completed questionnaires from the SMEs; invalid questionnaires (e.g., incomplete, unsubstantiated, and the like) and invalid artifacts are returned to the corresponding SMEs for further processing; the assessment platform is further configured to receive the invalid questionnaire responses and artifacts from their corresponding core team members, and send them back to their corresponding SMEs for further processing; this process is repeated until the core team receives and validates all of the completed questionnaires and corresponding artifacts).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Luca to include reviewing the aggregated assessment for any aggregated issues in the aggregated assessment itself and, when aggregated issues are found, returning the aggregated assessments for additional review, prior to continuing the method, as taught by Sirhani, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately; one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 8: Claim 8 is directed to a system.
Claim 8 recites limitations that are parallel in nature to those addressed above for claim 1, which is directed towards a method. Claim 8 is therefore rejected for the same reasons as set forth above for claim 1. Furthermore, claim 8 recites:

(Claim 8): A system for course assessment in a learning management system, the learning management system including a processor and a memory containing computer readable instructions that when executed on the processor cause the learning management system to implement the system for course assessment to: (Luca ¶0070 disclosing CPU may include one or more processors such as, for example, a processor from one of the Intel, ARM, Qualcomm, and AMD families of microprocessors; electrically erasable programmable read-only memories (EEPROMs), field-programmable gate arrays (FPGAs), and so forth, for controlling operations of computing device 100; in a specific embodiment, a local memory 101 (such as non-volatile random access memory (RAM) and/or read-only memory (ROM), including for example one or more levels of cached memory) may also form part of the CPU; memory may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, and the like)

Claim 3: A method according to claim 1, wherein the issues or aggregated issues comprise one or more of: completeness, conflicting results, and unusual wording.
(Luca ¶0085 disclosing identifying issues including gaps in learning and missed learning at all levels; ¶0117 disclosing analyzing goals satisfied based on completion; ¶0121 analysis engine 631 may perform textual analysis of a learner's output to identify spelling and grammar errors and to quantitatively assess certain aspects of the selected output, e.g., deviation of writing style or substance from statistical patterns, etc.; ¶0139 analyzing written learning output for spelling, grammar, factual, and/or stylistic errors; quantitative assessment of textual learning output to determine text-specific indexes, e.g., repetitive use of one or more words in close proximity to each other, etc.)

Claim 9: Claim 9 is directed to a system. Claim 9 recites limitations that are parallel in nature to those addressed above for claim 3, which is directed towards a method. Claim 9 is therefore rejected for the same reasons as set forth above for claim 3.

Claim 4: A method according to claim 1, wherein the aggregating the assessments comprises: for language-based assessments: copying each of the language-based assessments into a document; and combining the assessments into a single assessment, and (Luca ¶0121 disclosing analysis engine may perform textual analysis of a learner's output to identify spelling and grammar errors and to quantitatively assess certain aspects of the selected output, e.g., deviation of writing style or substance from statistical patterns, etc.; once an assessment has been conducted with automated support, in step 1022 assessment forms (records, templates, rubrics) at the output level are made available; see also ¶0127 disclosing the assessment reports) for grade-based assessments: applying a formula to the grade-based assessments to obtain a single assessment.
(Luca ¶0010 the learning indexes are used to generate grade reports; ¶0092 once all these individual output learning indexes per established learning goals categories are calculated by the system (after one or more assessors selects values and enters them in the system), the system performs calculations based on formulae to compound, aggregate, and weight learning indexes at all configurations, showing achieved learning and/or missed learning at those configurations (or adds up and weighs learning indexes at other configurations); calculations may readily obtain learning indexes of all learning goal categories as well as overall ones per unit; one or more objective learning assessment results may be combined into a plurality of learning indexes; an example of a learning index is an overall grade for a class, which would be generated by some mathematical combination of particular grades achieved on specific assignments, tests, and projects; ¶0055 disclosing the learning indexes being numeric measures of learning that quantify learning outcomes and can be calculated at all configurations afterwards by “rolling up” or aggregating learning index data starting with raw data at the level of learning outputs and then working up one or more hierarchies, using weighting factors or other formulae that define how aggregation is to be carried out; ¶0087 disclosing conducting the objective learning assessment; various learning goals and their components, such as subgoals, are assigned one or more weights that are used in turn when assessing overall learning achievement; goal units and subunits are assigned weights; criteria show requirements for learners to demonstrate learning; criteria include items and scenarios of learning, numerical values (such as percentages, weights, whole numbers, etc.); scenarios of learning (for example, “identify 3 theories 100%, 2 theories 75%, translating into a B+ per category”), of meeting categories of learning goals are developed (for example, only
2 theories identified, meaning 70% of breadth/general knowledge), which can be expressed in various units or ways (for example, “all or nothing”, “% of all”, X % of analytical, and so forth; numeric values are assigned to goals at levels and units of learning, to goal categories, and scenarios of learning; numeric values may include any of ideal totals, absolute values, and percentages; weights of goal category may vary, for example 10% for “research”, 60% for “breadth”, and so forth); see also ¶0117, ¶0119, ¶0123, ¶0137)

Claim 10: Claim 10 is directed to a system. Claim 10 recites limitations that are parallel in nature to those addressed above for claim 4, which is directed towards a method. Claim 10 is therefore rejected for the same reasons as set forth above for claim 4.

Claim 5: A method according to claim 4, wherein the formula comprises a weighted formula wherein the weighting is based on contact with the learner.

(Luca ¶0092 once all these individual output learning indexes per established learning goals categories are calculated by the system (after one or more assessors selects values and enters them in the system), the system performs calculations based on formulae to compound, aggregate, and weight learning indexes at all configurations, showing achieved learning and/or missed learning at those configurations (or adds up and weighs learning indexes at other configurations); calculations may readily obtain learning indexes of all learning goal categories as well as overall ones per unit; one or more objective learning assessment results may be combined into a plurality of learning indexes; an example of a learning index is an overall grade for a class, which would be generated by some mathematical combination of particular grades achieved on specific assignments, tests, and projects; ¶0055 disclosing the learning indexes being numeric measures of learning that quantify learning outcomes and can be calculated at all configurations afterwards by “rolling up” or
aggregating learning index data starting with raw data at the level of learning outputs and then working up one or more hierarchies, using weighting factors or other formulae that define how aggregation is to be carried out; ¶0087 disclosing conducting the objective learning assessment; various learning goals and their components, such as subgoals, are assigned one or more weights that are used in turn when assessing overall learning achievement; goal units and subunits are assigned weights; criteria show requirements for learners to demonstrate learning; criteria include items and scenarios of learning, numerical values (such as percentages, weights, whole numbers, etc.); scenarios of learning (for example, “identify 3 theories 100%, 2 theories 75%, translating into a B+ per category”), of meeting categories of learning goals are developed (for example, only 2 theories identified, meaning 70% of breadth/general knowledge), which can be expressed in various units or ways (for example, “all or nothing”, “% of all”, X % of analytical, and so forth; numeric values are assigned to goals at levels and units of learning, to goal categories, and scenarios of learning; numeric values may include any of ideal totals, absolute values, and percentages; weights of goal category may vary, for example 10% for “research”, 60% for “breadth”, and so forth); ¶0117 disclosing a formula might combine various assignment completion data points, exam and quiz scores, and class participation scores to arrive at a quantitative level that characterizes whether a certain goal is met or not; see also ¶0119, ¶0123, ¶0137)

Claim 11: Claim 11 is directed to a system. Claim 11 recites limitations that are parallel in nature to those addressed above for claim 5, which is directed towards a method. Claim 11 is therefore rejected for the same reasons as set forth above for claim 5.

Claims 6, 7, 12, and 13 are rejected under 35 U.S.C.
103 as being unpatentable over Luca (2018/0130154) in view of Sirhani (2023/0344852), further in view of Miller (2020/0302296).

Claim 6: A method according to claim 4, wherein the automatically combining comprises submitting the document to an AI agent for automatically processing into the single assessment.

Luca discloses combining the assessments into a single assessment but does not explicitly disclose that the automatically combining comprises submitting the document to an AI agent for automatically processing into the single assessment. Miller suggests or discloses this limitation/concept: (Miller ¶0068 disclosing medical school milestones and checkpoints (e.g. course load, assessments & evaluations, graduation pre-requisites); such standardized data can be used in connection with AI and machine learning (ML) techniques to provide actionable data relating to learners' near- and long-term success; ¶0069 disclosing that aggregated large data sets of student assessments (e.g. tests) and professionalism evaluations (e.g. recommendations or evaluations) are transformed into easily comprehensible visualizations (e.g. clusters or heatmaps) that highlight the predictive outcome of learners based on historical data; ¶0113 the evaluation module identifies one or more potential or possible combinations of source knowledge and knowledge evaluation (e.g. exams plus homework vs quizzes and open learning) likely to result in individual proficiency; the described evaluation module configures the processor to identify within evaluative sets (e.g. exams) which questions are highly correlated with proficiency in a given subject area, e.g., a machine learning algorithm is implemented by one or more submodules to extract data from the data set or to classify the data of the dataset into one or more categories).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Luca in view of Sirhani to include that the automatically combining comprises submitting the document to an AI agent for automatically processing into the single assessment, as taught by Miller. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Luca in view of Sirhani in order to process large data sets faster and more efficiently (see ¶0010 of Miller).

Claim 12: Claim 12 is directed to a system. Claim 12 recites limitations that are parallel in nature to those addressed above for claim 6, which is directed towards a method. Claim 12 is therefore rejected for the same reasons as set forth above for claim 6.

Claim 7: A method according to claim 1, wherein the additional review of issues or aggregated issues comprises: review of one or more assignments that lead to the assessments by an AI agent; assessment by the AI agent; and aggregation of the AI agent assessment with the assessments.

Luca discloses combining the assessments into a single assessment but does not explicitly disclose that the additional review of issues or aggregated issues comprises: review of one or more assignments that lead to the assessments by an AI agent; assessment by the AI agent; and aggregation of the AI agent assessment with the assessments. Miller suggests or discloses this limitation/concept: (Miller ¶0066 utilize access to a large cohort of skilled, validated medical and AI-based evaluation models to provide customized training and feedback to the students and educators. For instance, the AI-based evaluation modules are used to generate classifiers that receive structured and unstructured data relating to a specific learner and classify the probability that the learner fits into one or more educational cohorts.
Additionally, the AI-based evaluation modules can be used to review and interpret the likelihood that a given selection of evaluative materials (e.g. tests) are accurate predictors of future academic or career success; ¶0068 disclosing medical school milestones and checkpoints (e.g. course load, assessments & evaluations, graduation pre-requisites); such standardized data can be used in connection with AI and machine learning (ML) techniques to provide actionable data relating to learners' near- and long-term success; ¶0069 disclosing that aggregated large data sets of student assessments (e.g. tests) and professionalism evaluations (e.g. recommendations or evaluations) are transformed into easily comprehensible visualizations (e.g. clusters or heatmaps) that highlight the predictive outcome of learners based on historical data; ¶0113 the evaluation module identifies one or more potential or possible combinations of source knowledge and knowledge evaluation (e.g. exams plus homework vs quizzes and open learning) likely to result in individual proficiency; the described evaluation module configures the processor to identify within evaluative sets (e.g. exams) which questions are highly correlated with proficiency in a given subject area, e.g., a machine learning algorithm is implemented by one or more submodules to extract data from the data set or to classify the data of the dataset into one or more categories; see also ¶0119, ¶0127 disclosing that a student's performance and test response psychometrics are computed and transformed by AI predictive analytics into a ‘Pre-test Confidence Index’).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Luca in view of Sirhani to include that the additional review of issues or aggregated issues comprises: review of one or more assignments that lead to the assessments by an AI agent; assessment by the AI agent; and aggregation of the AI agent assessment with the assessments, as taught by Miller. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Luca in view of Sirhani in order to process large data sets faster and more efficiently (see ¶0010 of Miller).

Claim 13: Claim 13 is directed to a system. Claim 13 recites limitations that are parallel in nature to those addressed above for claim 7, which is directed towards a method. Claim 13 is therefore rejected for the same reasons as set forth above for claim 7.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIONE N SIMPSON whose telephone number is (571) 272-5513. The examiner can normally be reached M-F, 7:30 a.m.-4:30 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shannon Campbell, can be reached at 571-272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DIONE N. SIMPSON
Primary Examiner
Art Unit 3628

/DIONE N. SIMPSON/Primary Examiner, Art Unit 3628
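The weighted "roll-up" aggregation that the examiner cites from Luca (¶0055, ¶0092) against the claimed grade-based formula can be sketched in a few lines. This is a minimal illustration only: the `roll_up` function, the category names, and the weight values are hypothetical (loosely modeled on Luca ¶0087's "10% for research, 60% for breadth" example) and appear in neither the cited references nor the application's claims.

```python
# Sketch of a weighted roll-up: per-category grade-based assessments
# (0-100) are combined into a single overall learning index using
# category weights, as Luca ¶0055/¶0092 describe in prose.

def roll_up(scores, weights):
    """Aggregate per-category scores into one weighted index."""
    total_weight = sum(weights.values())
    # Guard against mis-specified weights (should sum to 1).
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(scores[category] * w for category, w in weights.items())

# Illustrative categories and weights (cf. Luca ¶0087's example).
weights = {"research": 0.10, "breadth": 0.60, "analysis": 0.30}
scores = {"research": 80, "breadth": 70, "analysis": 90}

overall = roll_up(scores, weights)
print(round(overall, 1))  # 8 + 42 + 27 -> 77.0
```

Luca's "working up one or more hierarchies" (¶0055) would simply apply the same step repeatedly, feeding each level's rolled-up index into the next level's weighted sum.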

Prosecution Timeline

Dec 20, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §101, §103
Feb 02, 2026
Response Filed
Mar 23, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596987
Connected Logistics Receptacle Apparatus, Systems, and Methods with Proactive Unlocking Functionality Related to a Dispatched Logistics Operation by a Mobile Logistics Asset Having an Associated Mobile Transceiver
2y 5m to grant Granted Apr 07, 2026
Patent 12579484
INTELLIGENTLY CUSTOMIZING A CANCELLATION NOTICE FOR CANCELLATION OF A TRANSPORTATION REQUEST BASED ON TRANSPORTATION FEATURES
2y 5m to grant Granted Mar 17, 2026
Patent 12561692
UPDATING ACCOUNT INFORMATION USING VIRTUAL IDENTIFICATION
2y 5m to grant Granted Feb 24, 2026
Patent 12391138
ELECTRIC VEHICLE, AND CHARGING AND DISCHARGING FACILITY, AND SYSTEM
2y 5m to grant Granted Aug 19, 2025
Patent 12387163
Logistical Management System
2y 5m to grant Granted Aug 12, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
34%
Grant Probability
68%
With Interview (+35.0%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 242 resolved cases by this examiner. Grant probability derived from career allow rate.
