Prosecution Insights
Last updated: April 19, 2026
Application No. 18/463,145

APPARATUS AND METHOD OF ASSESSING INSTRUCTOR RATINGS ON A DEFINED RATING SCALE FOR SKEWNESS

Final Rejection (§101, §103)
Filed: Sep 07, 2023
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The Boeing Company
OA Round: 3 (Final)

Grant Probability: 35% (At Risk)
OA Rounds: 4-5
To Grant: 4y 8m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% for resolved cases with interview
Avg Prosecution: 4y 8m (57 currently pending)
Total Applications: 422 across all art units
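The panel's headline figures are simple ratios of the career counts shown above; a minimal sketch of the arithmetic, assuming the 72% with-interview figure is the comparable grant rate for this examiner:

```python
# Recompute the examiner panel's headline figures from its raw counts.
# Counts come from the panel above; the 72% with-interview rate is the
# dashboard's estimate, used here as an illustrative assumption.

granted, resolved = 128, 365

career_allow_rate = granted / resolved               # ~0.351 -> "35%"
tc_average = career_allow_rate + 0.169               # panel says -16.9% vs TC avg

with_interview = 0.72
interview_lift = with_interview - career_allow_rate  # ~+0.369 -> "+36.9%"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Implied TC average: {tc_average:.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
```

This reproduces the rounded panel values: 128/365 is 35.1%, and 72% minus that baseline gives the +36.9% interview lift.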

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 365 resolved cases
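Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the TC baseline the dashboard compares against can be recovered by subtraction; a quick consistency check on the values transcribed from this panel:

```python
# For each statute the panel gives (examiner rate, delta vs TC avg).
# Since delta = rate - TC avg, the implied TC average is rate - delta.
# Values are transcribed from the panel above.

panel = {
    "101": (0.277, -0.123),
    "103": (0.404, +0.004),
    "102": (0.034, -0.366),
    "112": (0.207, -0.193),
}

for statute, (rate, delta) in panel.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```

Every row implies the same ~40% Tech Center average, consistent with a single estimated baseline behind all four deltas.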

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant
The following is a Final Office action. In response to Examiner’s Non-Final Rejection of 10/7/25, Applicant, on 1/7/25, amended claims. Claims 1-20 are pending in this application and have been rejected below.

Response to Amendment
Applicant’s amendments are acknowledged.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Step One - First, pursuant to step 1 in MPEP 2106.03, claim 1 is directed to an apparatus, which is a statutory category.
Step 2A, Prong One - MPEP 2106.04 - The claim 1 recites– “An apparatus for assessing instructor ratings, comprising: … receive a raw instructor rating data set, the raw instructor rating data set comprising a plurality of ratings from an individual instructor, each one of the plurality of rating corresponding to a numerical value on a defined rating scale; determine a rating skewness of the plurality of ratings by subtracting a second central tendency measure of the plurality of ratings from a first central tendency measure of the defined rating scale to generate a result, and dividing the result by a standard deviation of the plurality of ratings; compare the skewness of the plurality of ratings to at least one comparative rating skewness to generate a skewness comparison report; determine if the individual instructor requires instructor training based on the skewness comparison report; if determined that the individual instructor requires instructor training, identify a required instructor training to provide to the individual instructor; and initiate at least one … action selected from: notifying the individual instructor that instructor training is required; scheduling the individual instructor for the required instructor training; notifying the individual instructor that the required instructor training was scheduled; or transmitting the rating skewness, the skewness comparison report, and the required instructor training information to one or more end users; and following verification … that the individual instructor has completed a required duration of the required instructor training determined based on the rating skewness, … update instructor training status information stored … to indicate completion of the required instructor training” As drafted, this is, under its broadest reasonable interpretation, within the Abstract idea grouping of “certain methods of organizing human activity” (e.g. 
managing interactions between people (social activities, teaching) and “mathematical relationships,” as here we have ratings from instructors/teachers, and performing a series of mathematical operations – ratings corresponding to a numerical scale, determining a rating skewness by subtracting one tendency measure (average/mean/median/mode) from another, then dividing by standard deviation, comparing skewness [presumably from one instructor to a threshold/level of “comparative rating skewness”, as the specification indicates it can be a “baseline… reference point… of optimal rating skewness”], if the instructor is below the baseline level, “require” (i.e., recommend) that they get further educational instruction, either notify the instructor of required training OR schedule the instructor for the training OR notify the instructor that the training was scheduled, OR make the mathematical rating skewness, the report, and the training available to the users; then, tracking when the instructor has completed a required duration of training, update their training status information to indicate completion. Accordingly, claim 1 is directed to an abstract idea because it is doing a series of mathematical calculations and analysis steps to determine a rating taking into account skewness, and comparing instructors’ ratings to see who needs more training.

Step 2A, Prong Two - MPEP 2106.04 - This judicial exception is not integrated into a practical application.
In particular, claim 1 recites additional elements that are: “An apparatus for assessing instructor ratings, comprising: a processor; and a memory that stores code executable by the processor to: … Initiate at least one automated action selected from: Notifying OR Scheduling required training OR Notifying instructor required training was scheduled; OR transmitting the rating skewness, the skewness comparison report, and the required instructor training information to one or more end users; following verification by the apparatus that the individual instructor has completed a required duration of the required instructor training determined based on the rating skewness, automatically update instructor training status information stored in the memory to indicate completion of the required instructor training” (MPEP 2106.05f applies – “apply it [the abstract idea – certain methods of organizing human activity and math relationships] on a computer”; merely uses a computer, memory as a tool to perform an abstract idea; even with the second-to-last step of “automating” either scheduling, notifying, or transmitting information to users, this is still viewed under MPEP 2106.05f, where the computer is automating the abstract idea – here the abstract portion is “notifying” a user; the final step is just verifying “by a computer” that a person has completed the required training; the combination of “stored in memory” at the end and “automated” sending of notifications to a user and computer is also viewed as MPEP 2106.05h (Field of use)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.

Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is treated as MPEP 2106.05(f) (Mere Instructions to Apply an Exception – “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible.
Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. In addition, at Step 2B, “transmitting” reports is also considered a conventional computer function (See MPEP 2106.05d(II) - “Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321”); and the final step of “training status stored in memory” is also considered a conventional computer function (See MPEP 2106.05d(II) - Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334.)

Independent claim 17 is a statutory category at step one (method). The remaining limitations are similar to claim 1, and are rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Examiner further notes that method claim 17 has contingent limitations; the fourth limitation is “if” the instructor requires training; thus, many of the limitations are not required for the method claim.

Independent claim 20 is a statutory category at step one (article of manufacture). The beginning of claim 20 recites: “A program product for assessing instructor ratings comprising a non-transitory computer readable storage medium storing code, the code being configured to be executable by a processor to perform operations comprising”. These are additional elements and are treated at Step 2A, Prong Two and Step 2B similar to the computer, memory and code in claim 1. The remaining limitations are similar to claim 1 and are rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B.
Claims 2-9, 18-19 narrow the abstract idea by having further mathematical relationships including specifying mean/median/mode, using a reference value, a period for evaluating an instructor, looking at magnitudes when comparing, and what kind of scale is used. Claim 10 narrows the abstract idea by stating the title of the instructor and what they teach a person (naming the data), and indicating the ratings relate to “flight” instructions. Claim 11 narrows the abstract idea by stating the training “made accessible” in claim 1 is “aviation-specific teaching methodologies and flight simulation techniques.” This is viewed as explaining to a person how to use the flight simulator and further narrows the abstract idea of “certain methods of organizing human activity” (e.g., managing interactions between people (social activities, teaching)). Claim 12 has additional elements of “memory further stores code executable by the processor to receive additional data sources; and the additional data sources are made accessible to one or more end user.” These are viewed as “apply it [abstract idea] on a computer” (MPEP 2106.05f) at Step 2A, Prong Two and Step 2B. At Step 2B, this is also a conventional computer function – See MPEP 2106.05d(II) - iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334. Claim 13 narrows the abstract idea by plotting the mathematical ratings into a graph or visual representation. Claim 14 narrows the abstract idea – but is explicitly for at least two instructors, similar to claim 1. Claims 15-16 narrow the abstract idea by stating there are a plurality of evaluation periods, and then further tracking changes between periods, and adjusting recommended training for instructors based on changes, and also identifying trends and then recommendations based on the trends. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
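The calculation the §101 analysis characterizes as a mathematical concept (claim 1's skewness formula) can be sketched in a few lines. The use of the mean for both central tendency measures and the 1-5 sample ratings are illustrative assumptions, not taken from the record:

```python
import statistics

def rating_skewness(ratings, scale_min, scale_max):
    """Skewness as recited in claim 1: subtract a central tendency measure
    of the ratings from one of the defined rating scale, then divide by the
    standard deviation of the ratings. The mean is used for both measures
    here (the dependent claims also contemplate median or mode)."""
    scale_center = (scale_min + scale_max) / 2      # mean of a 1..5 scale is 3
    ratings_center = statistics.mean(ratings)
    return (scale_center - ratings_center) / statistics.stdev(ratings)

# Hypothetical instructor who grades leniently on a 1-5 scale:
lenient = [4, 5, 5, 4, 5, 4, 5, 5]
print(rating_skewness(lenient, 1, 5))   # negative: ratings sit above scale center
```

A lenient grader's ratings cluster above the scale midpoint and yield a negative value; a harsh grader's ratings yield a positive one, which is the asymmetry the claimed comparison step operates on.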
For more information on 101 rejections, see MPEP 2106.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 7-14, 17-18, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Delisle (WO 2024/031182 – as also evidenced by priority document 63/370,671) in view of Banditwattanawong et al., "Norm-referenced achievement grading of normal, skewed, and imperfectly normal distributions based on machine learning versus statistical techniques," 2020 IEEE Conference on Computer Applications (ICCA), pages 1-8, and Nemeth (US 2008/0286727). Concerning claim 1, Delisle discloses (Examiner notes the same paragraph numbers in publication of Delisle correspond to the citations in the provisional ‘671): An apparatus for assessing instructor ratings (Delisle – see par 27 (in ‘182 or provisional ‘671) - the AI module communicates with the instructor operating station (IOS) 1600 to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model; In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks.), comprising: a processor (Delisle – see par 30 (in ‘182 or provisional ‘671) - Each server 141 has a server processor or CPU 142, a memory 144, a data communication device 146 and may also include an input/output device 148.); and a memory that stores code executable by the processor (Delisle – see par 75 (in ‘182 or provisional ‘671) - These methods can be implemented in hardware, software, firmware or as any suitable combination thereof. That is, if implemented as software, the computer-readable medium comprises instructions in code which when loaded into memory and executed on a processor of a computing device causes the computing device to perform any of the foregoing method steps.)
to: receive a raw instructor rating data set, the raw instructor rating data set comprising a plurality of ratings from an individual instructor, each one of the plurality of rating corresponding to a numerical value on a defined rating scale (Delisle – see par 22 (in ‘182 or provisional ‘671) - In the embodiment depicted in FIG. 1, the system 100 includes an instructor operating station (IOS) 1600 communicatively connected to the interactive computer simulation station 1100 to receive instructor assessment data from an instructor at the IOS 1600, which are stored in an instructor assessment data storage 126. The instructor data may be input manually by the instructor via an instructor computing device during the simulation. The instructor may grade performance using any suitable grading, marking or evaluation scheme or methodology.); determine a rating skewness of the plurality of ratings by … a first central tendency measure of the defined rating scale to … a standard deviation of the plurality of ratings (Delisle – see par 27 (in ‘182 or provisional ‘671) - display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks. For example, the grading feedback to the instructor may indicate if the instructor is an outlier in grading a particular flight maneuver and therefore should recalibrate the subjective evaluation of that particular flight maneuver to better align with other instructor evaluations of that same maneuver and/or the automatic assessments of that same flight maneuver.).
Delisle discloses assessing an instructor as being too lax or too strict, and that the instructor should recalibrate to better align with other instructor evaluations (See par 27), and having performance assessment module 162 that includes performance history data for students and metrics of an “average population of students” (See par 55 in ‘182 or provisional ‘671). Banditwattanawong discloses: determine a “rating skewness of the plurality of ratings by subtracting a second central tendency measure of the plurality of ratings from a first central tendency measure of the defined rating scale to generate a result, and dividing the result by a standard deviation of the plurality of ratings” ([0051] as published states “As used herein, the central tendency measure is a statistical measurement that describes or represents the central or typical value of a dataset as a single value that summarizes the distribution of data points. Typically, the central tendency measure will be represented by the mean, median, or mode of the data set.”; [0052] “In other words, by shifting the first central tendency measure to the mean (or other central tendency measure) of the defined rating scale, the degree of asymmetry in the distribution of the ratings as a function of the defined rating scale can be determined.” Banditwattanawong discloses the limitations based on broadest reasonable interpretation in light of the specification – see page 2, Section 4 - Z score is a measure of how many standard deviations below or above the population mean a raw score is. Z score (z) is technically defined in (1) as the signed fractional number of standard deviation σ by which the value of an observation or a data point x is above the mean value µ of what is being observed or measured.
[Equation (1): z = (x − µ) / σ] See page 3, section 6.1 – Evaluation, Normal Distribution - The ND data set has normal curve distribution, which renders a symmetric bell shape according to a [normal distribution density equation] (Examiner notes both equations above are “divided by standard deviation”). see page 4, section 6.2 – Positively Skewed Distribution - Positively skewed distribution is an asymmetric bell shape skewed to the left probably caused by overly difficult exam questions from the viewpoint of learners. Figure 2 depicts the normal distribution of the SD+ set. The skewness equals 1.006. See page 5, Section 6.3 – Negatively skewed distribution - Negatively skewed distribution is an asymmetric bell shape skewed to the right probably caused by overly easy exam questions from the viewpoint of learners. Figure 3 depicts the normal distribution of the SD- set. The skewness equals -1.078.). Delisle and Banditwattanawong with the Z score disclose: compare the rating skewness of the plurality of ratings to at least one comparative rating skewness to generate a skewness comparison report (Banditwattanawong - see page 1, col. 2, 2nd paragraph - contribution of this paper is the novel insight into heuristic, statistical, and machine learning methods comparatively applied to the unconditionally norm-referenced grading of various data distribution characteristics; see page 2, Section 4 - z scores are further converted to t scores to simplify interpretation because t scores normally range from 0 to 100 unlike z scores that can be negative real numbers.
The t scores are then sorted and a range between maximum and minimum t scores is divided by the desired number of grades to obtain an identical score interval); determine if the individual instructor requires instructor training based on the skewness comparison report (Delisle - see par 27 - In one embodiment, the AI module communicates with the instructor operating station (IOS) 1600 to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks; see also Banditwattanawong – see page 1, col. 2, 2nd paragraph - contribution of this paper is the novel insight into heuristic, statistical, and machine learning methods comparatively applied to the unconditionally norm-referenced grading of various data distribution characteristics. The findings of this paper would help worldwide graders with the selection of right grading methods to meet their objectives well. see page 8, Section 7 - When grading the imperfectly-normal-distribution data sets, our heuristic method produces the best DBIs, K-means method has the moderate DBIs, and z score gives the worst DBIs. In overall, heuristic method outperforms the other methods. K-means method is ranked second.
Z score is the worst.); if determined that the individual instructor requires instructor training, identify a required instructor training to provide to the individual instructor (Delisle – See par 27 - For example, the grading feedback to the instructor may indicate if the instructor is an outlier in grading a particular flight maneuver and therefore should recalibrate the subjective evaluation of that particular flight maneuver to better align with other instructor evaluations of that same maneuver and/or the automatic assessments of that same flight maneuver); initiate at least one automated action (Delisle - see par 27 (in ‘182 or provisional ‘671) - In one embodiment, the AI module communicates with the instructor operating station (IOS) 1600 to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks.; see par 72 - the method involves the AI module communicating with the instructor operating station to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model.) selected from: notifying the individual instructor that instructor training is required (Delisle – see par 27 - For example, the grading feedback to the instructor may indicate if the instructor is an outlier in grading a particular flight maneuver and therefore should recalibrate the subjective evaluation of that particular flight maneuver to better align with other instructor evaluations of that same maneuver and/or the automatic assessments of that same flight maneuver. Banditwattanawong - see page 1, col.
2, 2nd paragraph - contribution of this paper is the novel insight into heuristic, statistical, and machine learning methods comparatively applied to the unconditionally norm-referenced grading of various data distribution characteristics. The findings of this paper would help worldwide graders with the selection of right grading methods to meet their objectives well); scheduling the individual instructor for the required instructor training; notifying the individual instructor that the required instructor training was scheduled; or transmitting the rating skewness, the skewness comparison report, and the required instructor training information to one or more end users (Delisle - see par 27 - In one embodiment, the AI module communicates with the instructor operating station (IOS) 1600 to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks; see also Banditwattanawong – see page 1, col. 2, 2nd paragraph - contribution of this paper is the novel insight into heuristic, statistical, and machine learning methods comparatively applied to the unconditionally norm-referenced grading of various data distribution characteristics. The findings of this paper would help worldwide graders with the selection of right grading methods to meet their objectives well). Delisle discloses providing feedback for instructor to calibrate grading (See par 27 (in ‘182 or provisional ‘671)).
Banditwattanawong discloses calculating a Z score for how far data are above or below the mean value and having positive or negative skew on exam questions (See page 2, 4, 5) where norm-referenced grading is used to help worldwide graders with selection of right grading methods to meet their objectives (See page 1, col. 2, 2nd paragraph). Nemeth discloses: following verification by the apparatus that the individual instructor has completed a required duration of the required instructor training determined based on the rating skewness, automatically update instructor training status information stored in the memory to indicate completion of the required instructor training (Nemeth – par 8 - a training manager could review one or more evaluations by several evaluators and conclude that … that an individual instructor must receive further training so that his/her evaluations are consistent with other evaluations. par 10, FIG. 2 - Evaluation instructions can be provided to the evaluators indicating how to complete a correct assessment. These evaluation instructions can help managers of the instructors achieve accountability because instructors can be given information which helps them complete a fair, complete, and consistent assessment. Not only can the trainees be evaluated, but the instructors can also be evaluated to measure how well each of them individually, as well as collectively, perform against the expectations indicated in the written directives.; see par 15 - In 233, information representing the determined best way to implement the suggested change is completed. For example, it could be done by creating new courseware, memos, video, CBT, and/or revised manuals.
In 234, the information representing the best way to implement the suggested change is sent to the training manager for review. In 236, the evaluators access and learn the new information through evaluation application 155.; see par 16 - My Learning Plan 650 could be accessed to know what trainers/evaluators need to accomplish. Training Coordinator 690 can be used by training coordinators to coordinate appropriate training. FIG. 7 is a screen shot illustrating a capability to rate evaluation reliability, according to one embodiment. In one embodiment, this screen shot can appear automatically based on a database that tracks which lessons should be made available to a unique user. ). Delisle, Banditwattanawong, and Nemeth are analogous art as they are directed to analyzing student grade distributions from teachers (see Delisle Abstract, par 27; Banditwattanawong Abstract; Nemeth – par 8, 11-12). 1) Delisle discloses detecting instructors who are too lax or too strict in evaluating student performance (See par 27). Banditwattanawong improves upon Delisle by disclosing calculating a Z-score by subtracting scores relative to the “mean” (a central tendency measure) and then dividing by the “standard deviation” of the ratings (See page 2) or (see page 3 – Normal Distribution formula divides by standard deviation) and assessing positive or negative skew in grade distributions. One of ordinary skill in the art would be motivated to further include using a Z score or Normal distribution to efficiently improve upon the assessment of an instructor being too lax or too strict in Delisle. 2) Delisle discloses providing feedback for instructor to calibrate grading (See par 27 (in ‘182 or provisional ‘671)).
Banditwattanawong discloses calculating a Z score for how far data are above or below the mean value and having positive or negative skew on exam questions (See page 2, 4, 5) where norm-referenced grading is used to help worldwide graders with selection of right grading methods to meet their objectives (See page 1, col. 2, 2nd paragraph). Nemeth improves upon Delisle and Banditwattanawong by disclosing individuals must receive training so their evaluations are consistent, based on evaluators being outside the norm (based on standard deviation) (See par 8, 10) and where a database tracks lessons that trainers/evaluators need to accomplish (See par 10, 16). One of ordinary skill in the art would be motivated to further include giving additional instructions to evaluators who are less consistent or outside the norm and tracking lessons in a database to efficiently improve upon the assessment of an instructor being too lax or too strict in Delisle and to help worldwide graders as in Banditwattanawong. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructor evaluation as being too lax or too strict in Delisle (See par 27), to further assess grade distributions considering standard deviation and skewness as disclosed in Banditwattanawong, to further require training for instructors/evaluators who are outside the norm or not consistent in grading and tracking in a database lessons evaluators need as disclosed in Nemeth, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success.
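The z-score mechanics the rejection draws from Banditwattanawong can be sketched as follows. The t = 50 + 10z rescaling is an assumption for illustration (the Office Action says only that t scores "normally range from 0 to 100"), and the exam scores are hypothetical:

```python
import statistics

def z_scores(xs):
    """z = (x - mu) / sigma: the signed number of standard deviations by
    which each observation lies above the population mean."""
    mu = statistics.mean(xs)
    sigma = statistics.pstdev(xs)   # population standard deviation
    return [(x - mu) / sigma for x in xs]

def t_scores(xs):
    """Conventional t-score rescaling (assumed here): t = 50 + 10z, which
    maps typical z values into a roughly 0-100 range with no negatives."""
    return [50 + 10 * z for z in z_scores(xs)]

exam = [55, 62, 70, 70, 74, 81, 90]   # hypothetical raw scores
print([round(t, 1) for t in sorted(t_scores(exam))])
```

Sorting the t scores and splitting the max-min range into equal intervals then yields the identical score intervals per grade that the cited passage describes.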
Concerning independent claim 17, Delisle and Banditwattanawong and Nemeth disclose: A method for assessing instructor ratings (Delisle – see par 27 (in '182 or provisional '671) - the AI module communicates with the instructor operating station (IOS) 1600 to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks), comprising: The remaining limitations are similar to claim 1 above and are rejected for the same reasons over Delisle and Banditwattanawong and Nemeth. Concerning independent claim 20, Delisle and Banditwattanawong and Nemeth disclose: A program product for assessing instructor ratings comprising a non-transitory computer readable storage medium storing code, the code being configured to be executable by a processor to perform operations comprising (Delisle – see par 75 (in '182 or provisional '671) - These methods can be implemented in hardware, software, firmware or as any suitable combination thereof. That is, if implemented as software, the computer-readable medium comprises instructions in code which when loaded into memory and executed on a processor of a computing device causes the computing device to perform any of the foregoing method steps) to: The remaining limitations are similar to claim 1 above and are rejected for the same reasons over Delisle and Banditwattanawong and Nemeth.
Concerning claims 2 and 18, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the first central tendency measure is the mean of the defined rating scale (Banditwattanawong discloses the limitations based on broadest reasonable interpretation in light of the specification – see page 2, Section 4 - Z score is a measure of how many standard deviations below or above the population mean a raw score is. Z score (z) is technically defined in (1) as the signed fractional number of standard deviation σ by which the value of an observation or a data point x is above the mean value µ of what is being observed or measured: z = (x − µ)/σ. See page 3, section 6.1 – Evaluation, Normal Distribution - The ND data set has normal curve distribution, which renders a symmetric bell shape according to f(x) = (1/(σ√(2π)))·e^(−(x−µ)²/(2σ²)). (Examiner notes both equations above consider the "mean" or average). It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Concerning claim 7, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the plurality of ratings corresponds to an evaluation period (See Delisle – see par 19 - In this specification, the expression "student" is used in an expansive sense to also encompass any person who is training to improve or hone knowledge, skills or aptitude in the operation of the actual machine such as, for example, a licensed pilot who is doing periodic training for certification purposes. See also Banditwattanawong – see page 3, col. 1, section 6 - The fourth data set, RD-, was collected from a group of real learners taking the same undergrad course in 2019 academic year.).
Concerning claim 8, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the required instructor training is determined based on a magnitude of the rating skewness in the skewness comparison report (Banditwattanawong – see page 3, col. 2 - The results of each method had their quality assessed based on DBI as if the grades represented distinct clusters. The underlying reason of using DBI as the quality indicator in norm-referenced grading is intuitive that is learners with highly similar achievement should receive the same grade, and different grades must [be] able to discriminate achievements between the groups of learners as much clearly as possible. Recall that a DBI value becomes low if clusters are small and far from one another. see page 4, col. 1, 2nd paragraph - We can see from Table 2 that heuristic method delivered exactly the same results as K-means method. DBI equaled 0.330. Z score method yielded the equivalent DBI of 0.443. It might be questionable from student viewpoint why graders using z score gave learners who scored 78 and 79 the same grades A as that of 84, and 47 mark holder the same grade F as that of 42. Technically answering, because 78 and 79 fell in the same z-score interval of A while 47 fell in the z-score interval of F.). It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Concerning claim 9, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the defined rating scale is a numerical scale from 1 to 5 (Delisle – see par 22 - The instructor may grade performance using any suitable grading, marking or evaluation scheme or methodology; see par 26 - For example, as a simple illustration, if the airspeed falls below the stall speed, the automatic assessment module may assign a failing grade (F). If the airspeed comes to within 5% of the stall speed, the automatic assessment module may assign a poor grade (D).
If the airspeed comes to within 5%-10% of the stall speed, the automatic assessment module may assign a mediocre grade (C). If the airspeed is within the acceptable range, the automatic assessment module may assign a good grade (B). If the airspeed is perfectly within the acceptable range, the automatic assessment module may assign an excellent grade (A). See also Banditwattanawong – see page 3, col. 2 - We engaged a grading system that evaluated the scores into 5 grades: A, B, C, D, and F without any class GPA constraint. We made an assumption that there was no skipped grade. We implemented the grading system in 3 ways by using heuristic, z score, and K-means methods separately one by one. We set k to 5 grades for K-means.). It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Delisle discloses using any "evaluation scheme" for grading (See par 22) and having grades from excellent to failing (A to F) (par 26). Banditwattanawong improves upon Delisle by explicitly using 5 different kinds of grades. Concerning claim 10, Examiner notes at this time that the topic of what is taught is just "nonfunctional descriptive material" not entitled to patentable weight in the current claim, as a human having the title of "flight" instructor has no functional relationship with the limitations, nor does the time of receipt of the performance ratings "after a flight" session.
Nonetheless, art is still applied: The apparatus of claim 1, wherein: the individual instructor is a flight instructor (Delisle – see par 27 (in '182 or provisional '671) - instructor "grading a particular flight"); and the raw instructor rating data set comprises performance ratings provided by the flight instructor after a flight training session (Delisle par 58 (in '182 or provisional '671) - The data received by the learner profile module 164 may include student-specific learning data in the form of performance and telemetries related to training sessions, performance and behavior related to learning sessions). Concerning claim 11, Delisle discloses: The apparatus of claim 10, wherein the required instructor training identified for the flight instructor includes aviation-specific teaching methodologies and flight simulation techniques (Delisle par 27 - In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks; For example, the grading feedback to the instructor may indicate if the instructor is an outlier in grading a particular flight maneuver and therefore should recalibrate the subjective evaluation of that particular flight maneuver to better align with other instructor evaluations of that same maneuver and/or the automatic assessments of that same flight maneuver. par 30 - Both the instructor assessment data 126 and the automatic assessment data 124 are provided to the data lake 130 to be accessed by a cloud-based artificial intelligence (AI) module 140. The artificial intelligence module 140 develops an AI assessment model 150 (see FIG.
1, 6) using training sets of instructor assessment data 126 and automatic assessment data 124; see par 57 - The AI student performance assessment module 162, in one embodiment, takes into account automated performance assessments generated by the Virtual Instructor Module 120, which is configured to provide real-time assistance to instructors during simulation training based on the flight telemetries, which assistance can be in the form of audio recommendations based on flight status and performance. See par 66 - A plurality of graphs may also be used in another embodiment. A clustering technique may be used to identify a group of learning behaviors that can be used by the adaptive learning AI module 160 to predict probabilistic outcomes. In one implementation, the adaptive learning AI module 160 can also adapt the current lesson in real-time by increasing or decreasing its difficulty, complexity, etc. The adaptive learning AI module 160 may make these adaptations automatically. In a variant, the adaptive learning AI module 160 may notify the instructor of the adaptation being made and/or request approval from the instructor before implementing the adaptation. See par 70 - The instructor assessment data 126 is received from grading input received from the instructor 110 at the IOS 1600 while the student is training on the simulator in the simulation station 1100. The instructor 110 is a human instructor in this embodiment. Training metadata 125 is also received from the IOS 1600. The virtual instructor 120 is an expert computer system or computer-readable medium that automatically assesses the performance of the student by applying rules to compare flight telemetry 127 with prescribed norms or benchmarks to generate the automatic assessment data 124.
See par 71-72 - the method involves the AI module communicating with the automatic rules-based assessment module to adjust one or more of the rules of the automatic rules-based assessment module in response to detecting a grading discrepancy with the AI assessment model. [0072] In one embodiment, the method involves the AI module communicating with the instructor operating station to display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model.) Concerning claim 12, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein: the memory further stores code executable by the processor to receive additional data sources (Delisle – see FIG. 1 – see par 58 - The learner profile module 164 receives its data from the data lake 130. The data received by the learner profile module 164 may include student-specific learning data in the form of performance and telemetries related to training sessions, performance and behavior related to learning sessions, overall flight history, personality traits, and demographics. The learner profile module 164 provides a complete portrait of the student.); and the additional data sources are made accessible to one or more end users (Applicant's [0060] examples include "For example, additional data sources may include trainee feedback surveys, historical skewness comparison reports, records of completed training, etc") (Delisle discloses the limitations based on broadest reasonable interpretation in light of the specification – see par 49 - an AI Pilot Performance Assessment module 162 may provide to the explainability and pedagogical intervention module 174 data on learning trends and progress metrics broken down by cohort, student, and competency (e.g. ICAO competencies) in absolute numbers or in relation to training curricula and/or metrics of an average population.
See par 55 - The AI student performance assessment module 162 outputs data to all modules of the adaptive learning AI module 160 and to the student and instructor dashboards 182, 184. The data output by the AI student performance assessment module 162 may include learning trends and progress metrics broken down by cohort, student, and competency (e.g. ICAO competencies in the specific context of flight training) in raw or absolute numbers and also in relation to training curricula and metrics of an average population of students of which the student being assessed is a member. see FIG. 1, see par 58 - The learner profile module 164 receives its data from the data lake 130. The data received by the learner profile module 164 may include student-specific learning data in the form of performance and telemetries related to training sessions, performance and behavior related to learning sessions, overall flight history, personality traits, and demographics. The learner profile module 164 provides a complete portrait of the student). It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Concerning claim 13, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the skewness comparison report comprises a visual representation of the rating skewness of the raw instructors rating data set (Delisle – see par 27 (in '182 or provisional '671) - display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks. Banditwattanawong – see FIG. 1 – distribution of normal data; FIG. 2 – distribution of SD+ (positively skewed); FIG. 3 – distribution of SD- data set).
It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Concerning claim 14, Delisle and Banditwattanawong disclose: The apparatus of claim 1, wherein the memory further stores code executable by the processor to: receive a raw instructor rating data set corresponding to each one of a plurality of individual instructors (Delisle – see par 27 - This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks.); generate a skewness comparison report for each individual instructor of the plurality of individual instructors (Banditwattanawong - see page 4, section 6.2 – Positively Skewed Distribution - Positively skewed distribution is an asymmetric bell shape skewed to the left probably caused by overly difficult exam questions from the viewpoint of learners. Figure 2 depicts the normal distribution of the SD+ set. The skewness equals 1.006. See page 5, Section 6.3 – Negatively skewed distribution - Negatively skewed distribution is an asymmetric bell shape skewed to the right probably caused by overly easy exam questions from the viewpoint of learners. Figure 3 depicts the normal distribution of the SD- set. The skewness equals -1.078); and analyze the skewness comparison reports generated for each individual instructor of the plurality of individual instructors (Banditwattanawong - see page 1, col. 2, 2nd paragraph - contribution of this paper is the novel insight into heuristic, statistical, and machine learning methods comparatively applied to the unconditionally norm-referenced grading of various data distribution characteristics; see page 4, section 6.2 – Positively Skewed Distribution - Positively skewed distribution is an asymmetric bell shape skewed to the left probably caused by overly difficult exam questions from the viewpoint of learners. Figure 2 depicts the normal distribution of the SD+ set. The skewness equals 1.006.
See page 5, Section 6.3 – Negatively skewed distribution - Negatively skewed distribution is an asymmetric bell shape skewed to the right probably caused by overly easy exam questions from the viewpoint of learners. Figure 3 depicts the normal distribution of the SD- set. The skewness equals -1.078); and compare performance of each individual instructor to others of the plurality of individual instructors (Delisle – see par 27 (in '182 or provisional '671) - display grading feedback to the instructor in response to detecting a grading discrepancy with the AI assessment model. In so doing, the AI module provides feedback to the instructor to enable the instructor to calibrate his or her grading. This feedback enables the instructor to recognize if he or she is being too lax or too strict in evaluating student performance in various tasks. For example, the grading feedback to the instructor may indicate if the instructor is an outlier in grading a particular flight maneuver and therefore should recalibrate the subjective evaluation of that particular flight maneuver to better align with other instructor evaluations of that same maneuver and/or the automatic assessments of that same flight maneuver). It would have been obvious to combine Delisle and Banditwattanawong for the same reasons as claim 1 above. Claims 3-6, 15-16, and 19 are rejected under 35 U.S.C.
103 as being unpatentable over Delisle (WO 2024/031182 – as also evidenced by priority document 63/370,671) and Banditwattanawong, et al, "Norm-referenced achievement grading of normal, skewed, and imperfectly normal distributions based on machine learning versus statistical techniques," 2020 IEEE Conference on Computer Applications (ICCA), pages 1-8 and Nemeth (US 2008/0286727), as applied above to claims 1-2, 7-14, 17-18, and 20, and further in view of Leckie et al., "Rater Effects on Essay Scoring: A Multilevel Analysis of Severity Drift, Central Tendency, and Rater Experience," 2011, Journal of Educational Measurement, Vol. 48, No. 4, pages 399-418. Concerning claims 3 and 19, Delisle discloses analyzing if the instructor is too lax or too strict in evaluating student performance (See par 27). Banditwattanawong discloses calculating a Z-score by subtracting scores from the "mean" (a central tendency measure) and then dividing by the "standard deviation" of the ratings (See page 2; see also page 3 – the Normal Distribution formula divides by standard deviation), and assessing positive or negative skew in grade distributions. Nemeth discloses analyzing evaluators relative to statistical deviation from the mean or being outside the norm (e.g. beyond a certain amount of standard deviation) (See par 8-9, 11-12). Leckie discloses: The apparatus of claim 1, wherein the first central tendency measure is the mode of the defined rating scale (Leckie – see page 400, "Central Tendency" - Central tendency is the propensity to award a restricted range of scores around the mean (or mode or median) and to avoid awarding extreme scores (Saal, Downey, & Lahey, 1980)). Delisle, Banditwattanawong, Nemeth, and Leckie are analogous art as they are directed to analyzing scoring/grades from instructors/raters (see Delisle Abstract, par 27; Banditwattanawong Abstract; Leckie Abstract). Delisle discloses detecting instructors who are too lax or too strict in evaluating student performance (See par 27).
Nemeth discloses analyzing evaluators relative to statistical deviation from the mean or being outside the norm (e.g. beyond a certain amount of standard deviation) (See par 8-9). Leckie improves upon Delisle and Banditwattanawong and Nemeth by disclosing the central tendency measure can be either of mean, median, or mode. One of ordinary skill in the art would be motivated to further include using median or mode to efficiently improve upon analyzing instructors who are too lax or strict in Delisle and the assessment of grade distributions considering standard deviation and skewness as disclosed in Banditwattanawong, and the assessment of statistical deviation from the mean/norm in Nemeth (See par 8-9). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructor evaluation as being too lax or too strict in Delisle (See par 27), to further assess grade distributions considering standard deviation and skewness as disclosed in Banditwattanawong, to assess evaluators relative to standard deviations from a mean in Nemeth, and to further use either median/mode as the central tendency measure as disclosed in Leckie, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning claim 4, Delisle, Banditwattanawong, and Leckie disclose: The apparatus of claim 1, wherein the second central tendency measure is the mode of the plurality of ratings from the individual instructor (Leckie – see page 400, "Central Tendency" - Central tendency is the propensity to award a restricted range of scores around the mean (or mode or median) and to avoid awarding extreme scores (Saal, Downey, & Lahey, 1980)).
It would have been obvious to combine Delisle, Banditwattanawong, Nemeth, and Leckie for the same reasons as claim 3 above. Concerning claim 5, Delisle, Banditwattanawong, and Leckie disclose: The apparatus of claim 1, wherein the second central tendency measure is the median of the plurality of ratings from the individual instructor (Leckie – see page 400, "Central Tendency" - Central tendency is the propensity to award a restricted range of scores around the mean (or mode or median) and to avoid awarding extreme scores (Saal, Downey, & Lahey, 1980)). It would have been obvious to combine Delisle, Banditwattanawong, Nemeth, and Leckie for the same reasons as claim 3 above. Concerning claim 6, Delisle discloses analyzing if the instructor is too lax or too strict in evaluating student performance (See par 27). Banditwattanawong discloses calculating a Z-score by subtracting scores from the "mean" (a central tendency measure) and then dividing by the "standard deviation" of the ratings (See page 2; see also page 3 – the Normal Distribution formula divides by standard deviation), and assessing positive or negative skew in grade distributions. Nemeth discloses having reports indicating evaluators that are different statistical deviations (1, 1.5, 2, etc.) from the norm (See par 14, FIG. 2, #229). Leckie discloses: The apparatus of claim 1, wherein the first central tendency measure is a predetermined reference value (Leckie see page 409, last paragraph – If there were a central tendency bias, the data points in each scatter plot would show a negative slope; this would indicate that essays that were scored low by the expert committee were overscored by the raters and vice versa (essays that were scored high by the expert committee were underscored by the raters)). It would have been obvious to combine Delisle, Banditwattanawong, Nemeth, and Leckie for the same reasons as claim 3 above.
In addition, one of ordinary skill in the art would be motivated to further include using a reference value (a negative scatter plot) to efficiently improve upon analyzing instructors who are too lax or strict in Delisle and the assessment of grade distributions considering standard deviation and skewness as disclosed in Banditwattanawong, or a report on evaluators who are different levels of deviation away from the norm in Nemeth. Concerning claim 15, Delisle and Banditwattanawong and Nemeth disclose: The apparatus of claim 1, wherein: receiving a raw instructor rating data set comprises receiving a raw instructor rating data set comprising a plurality of ratings corresponding to a plurality of evaluation periods (Delisle – see par 22 (in '182 or provisional '671) - In the embodiment depicted in FIG. 1, the system 100 includes an instructor operating station (IOS) 1600 communicatively connected to the interactive computer simulation station 1100 to receive instructor assessment data from an instructor at the IOS 1600, which are stored in an instructor assessment data storage 126. The instructor data may be input manually by the instructor via an instructor computing device during the simulation. The instructor may grade performance using any suitable grading, marking or evaluation scheme or methodology); and the memory further stores code executable by the processor to: track changes in the rating skewness of the plurality of ratings from each one of the plurality of evaluation periods (See Delisle – see par 19 - In this specification, the expression "student" is used in an expansive sense to also encompass any person who is training to improve or hone knowledge, skills or aptitude in the operation of the actual machine such as, for example, a licensed pilot who is doing periodic training for certification purposes. See also Banditwattanawong – see page 3, col.
1, section 6 - The fourth data set, RD-, was collected from a group of real learners taking the same undergrad course in 2019 academic year; Nemeth – see par 14 - as time increases and training gets more and more consistent, reports could be generated indicating evaluators that were 0.5 statistical deviations from the norm instead of 1 statistical deviation). Delisle discloses giving grading feedback to the instructor to recalibrate the subjective evaluation of the instructor relative to other instructor evaluations (See par 27). Banditwattanawong discloses looking at the distribution of grades from instructors. Leckie discloses: receiving a raw instructor rating data set comprises receiving a raw instructor rating data set comprising a plurality of ratings corresponding to a plurality of evaluation periods (Leckie – see page 400, 1st paragraph - Their finding suggests that, even within a testing and scoring environment, different raters may have different trends in their scoring over time); adjust the required instructor training based on observed changes in the rating skewness over time (Leckie – see page 414, last paragraph - In terms of raters' individual severity trends, we found that these trends significantly fanned in over time (Table 2 and Figure 3). This result indicated that raters became more homogenous the more essays they scored. Hoskens & Wilson (2001) argued that a drift toward the mean was caused in their study by feedback to raters on their performances. This causal explanation also is possible for our results, as feedback was given to raters following poor performance.). It would have been obvious to combine Delisle and Banditwattanawong and Nemeth for the same reasons as claim 1 above; and with Leckie as in claim 3. In addition, Delisle discloses giving grading feedback to the instructor to recalibrate the subjective evaluation of the instructor relative to other instructor evaluations (See par 27).
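The Nemeth-style consistency check cited above (par 8-14) can be sketched as follows. This is a hypothetical illustration only, not code from any reference: evaluators whose average rating falls more than a chosen number of standard deviations from the group norm are flagged for training, and the band can be tightened over time (e.g., from 1.0 down to 0.5 standard deviations, per Nemeth par 14). The function and variable names are assumptions for illustration.

```python
# Illustrative sketch; names and sample data are hypothetical assumptions.
# Flags evaluators outside k standard deviations of the group norm, with k
# tightened over time as training takes hold (cf. Nemeth par 14).
from statistics import mean, pstdev

def flag_for_training(evaluator_means: dict[str, float], k: float) -> list[str]:
    """Return evaluators whose mean rating is more than k std devs from the norm."""
    values = list(evaluator_means.values())
    norm, sigma = mean(values), pstdev(values)
    return [name for name, m in evaluator_means.items() if abs(m - norm) > k * sigma]

means = {"A": 3.0, "B": 3.1, "C": 4.6, "D": 2.9}  # hypothetical per-evaluator means
print(flag_for_training(means, 1.0))  # wider band early in training
print(flag_for_training(means, 0.5))  # tightened band later on
```

Tightening k over successive evaluation periods flags progressively milder outliers, which mirrors Nemeth's point that as training makes evaluators more consistent, reports can be generated at 0.5 rather than 1 statistical deviation from the norm.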
Leckie improves upon Delisle and Banditwattanawong and Nemeth by disclosing having rater trends over time and giving different feedback to raters on their performances (See page 400, 404, 414). One of ordinary skill in the art would be motivated to further include selecting feedback for an instructor based on how severely they are scoring or how their scoring is drifting to efficiently improve upon general feedback to instructors who are too lax or strict in Delisle and the assessment of grade distributions considering standard deviation and skewness, and helping graders with selection of the right grading methods (See page 1) as disclosed in Banditwattanawong. Concerning claim 16, Delisle, Banditwattanawong, Nemeth, and Leckie disclose: The apparatus of claim 15, wherein the memory further stores code executable by the processor to: analyze the rating skewness of the plurality of ratings from each one of the plurality of evaluation periods to identify trends in the rating skewness over the plurality of evaluation periods (Leckie – see page 400, 1st paragraph - Their finding suggests that, even within a testing and scoring environment, different raters may have different trends in their scoring over time. Indeed, Myford and Wolfe's (2009) study of 101 raters and 28 check essays (representative samples of students' work) found significant positive and negative drift in rater accuracy over time for a small proportion of their raters. See page 406, 3rd paragraph - The advantage of this specification was that the model directly gives us an overall average linear time trend.
The slope of this overall average linear time trend allowed a simple test of whether raters, on average, became significantly more (or less) severe across the five checks.); and generate a training recommendation based on the identified trends (Leckie – see page 403, last paragraph – page 404, 1st paragraph - However, team leaders additionally acted as mentors to the experienced and new raters and communicated feedback to them about the quality of their scoring. To fulfill this role, team leaders needed to observe the scores that the raters in their teams assigned to essays; see page 414, last paragraph - In terms of raters’ individual severity trends, we found that these trends significantly fanned in over time (Table 2 and Figure 3). This result indicated that raters became more homogenous the more essays they scored. Hoskens & Wilson (2001) argued that a drift toward the mean was caused in their study by feedback to raters on their performances. This causal explanation also is possible for our results, as feedback was given to raters following poor performance.). It would have been obvious to combine Delisle and Banditwattanawong and Nemeth and Leckie for the same reasons as claim 15 above. Response to Arguments Applicant's arguments filed 1/7/26 have been fully considered but they are not persuasive and/or are moot in view of the new rejections. With regards to 101, Applicant argues the claim is not directed to an abstract idea, in part because the claims are directed to a technological process of “control automated system behavior, including verification of training completion and updating of instructor training status information stored in memory” as the math is for a “control input for system state changes.” Remarks, page 10. In response, Examiner respectfully disagrees. First, the arguments that this becomes “control” of automated system behavior are not persuasive. There is no “control” in the claims. 
Examiner’s best guess is that Applicant is referring to computer performing “automated” action of either 1) notifying instructor training is required, 2) scheduling instructor for required training, 3) notifying instructor required training scheduled, or 4) transmitting rating skewness, report, and instructor training information to one or more end users. The additional element added here involves a computer to help further conduct the abstract idea – a certain method of organizing human activity to help with teaching people, and ensuring instructors get required training. See MPEP 2106.05f “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253.” Examiner further notes that method claim 17 has contingent limitations; the fourth limitation is “if” instructor requires training; thus, many of the “automated” action limitations are not even required for the method claim. With regards to step 2a, prong two, Applicant argues the claims have a “sequence of automated operations” for “controlling” system behavior and update training status information “in memory.” Remarks, page 11. In response, Examiner respectfully disagrees. The contingent operations here are ordinary computer operations of notify/transmit/store. This is not viewed as a practical application. 
See MPEP 2106.05(f): “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253.”

Applicant further argues this is a “control” of system behavior, as if this is a “control” case that amounts to a practical application. This is not persuasive here. This is not similar to MPEP 2106.05(e) (“Other Meaningful Limitations”), where it discusses how “opening and closing a mold” led to eligibility: “In Diehr, the claim was directed to the use of the Arrhenius equation (an abstract idea or law of nature) in an automated process for operating a rubber-molding press. 450 U.S. at 177-78, 209 USPQ at 4. The Court evaluated additional elements such as the steps of installing rubber in a press, closing the mold, constantly measuring the temperature in the mold, and automatically opening the press at the proper time, and found them to be meaningful because they sufficiently limited the use of the mathematical equation to the practical application of molding rubber products. 450 U.S. at 184.” This is also not persuasive under MPEP 2106.05(a) (“Improvements to Computer Functionality”), as this is more similar to the examples “not” sufficient to show an improvement in computer functionality: ii. accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089; and vii. providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018). Examiner further notes that method claim 17 has contingent limitations; the fourth limitation applies only “if” the instructor requires training; thus, many of the “automated” action limitations are not even required for the method claim.

Applicant then argues that the claimed “notification” or “scheduling” are not insignificant post-solution activity, as they are similar to an “integration requirement,” and instead amount to a practical application. Remarks, page 11. In response, Examiner respectfully disagrees. First, in every claim, “scheduling” and “notifying” are alternatives that are not even required. Second, even if they were required, mere notification to a user to undertake additional educational learning to help them be a better grader/teacher, and the scheduling of that learning, do not integrate the abstract idea into a practical application. Merely giving a “notification” of a scheduled instructor class is not an additional element beyond “using a computer to display the information.” Computing technology is not improved the way the claim is constructed. See e.g. MPEP 2106.04(a)(2)(II)(C): “An example of a claim reciting managing personal behavior is Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 115 USPQ2d 1636 (Fed. Cir. 2015). The patentee in this case claimed methods comprising storing user-selected pre-set limits on spending in a database, and when one of the limits is reached, communicating a notification to the user via a device. 792 F.3d. at 1367, 115 USPQ2d at 1639-40.
The Federal Circuit determined that the claims were directed to the abstract idea of "tracking financial transactions to determine whether they exceed a pre-set spending limit (i.e., budgeting)", which "is not meaningfully different from the ideas found to be abstract in other cases before the Supreme Court and our court involving methods of organizing human activity." 792 F.3d. at 1367-68, 115 USPQ2d at 1640.”

Applicant then argues that the claims are not conventional, that evidence is required, and that the claims are “significantly more” than any alleged judicial exception. Remarks, page 12. In response, Examiner respectfully disagrees. With regards to step 2B, only those additional elements (analyzed under step 2B) that are deemed “conventional” need to comply with Berkheimer. When elements are just part of “apply it” [the abstract idea] on a computer under MPEP 2106.05(f), or a “field of use” under MPEP 2106.05(h), no evidence is needed. In addition, citations were already provided to MPEP 2106.05(d)(II) for how conventional computer functions include “Receiving or transmitting data over a network, e.g., using the Internet to gather data,” and “Storing and retrieving information in memory”.

With regards to 103, Applicant argues that Banditwattanawong does not address “rating data constrained by a defined rating scale.” Remarks, page 13. In response, Examiner respectfully disagrees. First, Delisle was applied for having a grade performance according to a suitable grading, marking, or evaluation scheme or methodology (see par 22). Second, Banditwattanawong also discloses the claimed “rating scale” – see e.g. page 3, Col. 2: “We engaged a grading system that evaluated the scores into 5 grades: A, B, C, D, and F.” This was also cited for claim 9. Applicant argues that “the application instead discloses anchoring skewness calculation to a central tendency of the defined rating scale, rather than relying exclusively on data-derived measures (see [0003], [0052]).” Remarks, page 14.
In response, Examiner respectfully disagrees. First, in response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., “not” data-derived measures) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Second, it is unclear what Applicant may be referring to relative to the claims, as paragraph [0052] uses a “defined rating scale” of 1 to 5, and the examples of a “central tendency measure” applied in the prior art (e.g., instructor evaluations’ alignment with other instructor evaluations in Delisle; the “average”, Z-score equation, and norm-referenced grade distribution characteristics in Banditwattanawong) are the same as the ones that Applicant uses in the specification. Applicant’s remaining arguments are moot in view of the new rejections citing new art.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG, whose telephone number is (571) 270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe, can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IVAN R GOLDBERG/
Primary Examiner, Art Unit 3619
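For readers outside the statistics of rater calibration, the disputed distinction above can be sketched in a few lines. This is an illustrative sketch only, not the claimed method or the prior art's computation: it assumes a 1-to-5 rating scale (as paragraph [0052] is characterized in the action) and uses an ordinary third-moment skewness formula, swapping the anchor point between the scale's midpoint and the data-derived sample mean.

```python
import statistics

def scale_anchored_skewness(ratings, scale_min=1, scale_max=5):
    """Third-moment skewness computed about the midpoint of a DEFINED
    rating scale (e.g., 3 on a 1-5 scale). Hypothetical illustration:
    the anchor is fixed by the scale, not derived from the data."""
    midpoint = (scale_min + scale_max) / 2
    sd = statistics.pstdev(ratings)  # spread is still data-derived here
    if sd == 0:
        return 0.0
    return sum(((r - midpoint) / sd) ** 3 for r in ratings) / len(ratings)

def sample_skewness(ratings):
    """Conventional, purely data-derived skewness about the sample mean."""
    mean = statistics.fmean(ratings)
    sd = statistics.pstdev(ratings)
    if sd == 0:
        return 0.0
    return sum(((r - mean) / sd) ** 3 for r in ratings) / len(ratings)

# A lenient instructor on a 1-5 scale: ratings are symmetric about
# their own mean, so data-derived skewness is ~0, while the
# scale-anchored measure flags the lean toward the top of the scale.
lenient = [4, 5, 4, 5, 4, 5, 4, 5]
print(scale_anchored_skewness(lenient))  # positive: sits above midpoint 3
print(sample_skewness(lenient))          # ~0: symmetric about its own mean
```

The example shows why the two measures can disagree: a rater whose scores cluster symmetrically at the high end looks unskewed to a data-centered statistic but skewed relative to the scale's own central tendency.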

Prosecution Timeline

Sep 07, 2023
Application Filed
May 02, 2025
Non-Final Rejection — §101, §103
Aug 07, 2025
Response Filed
Oct 03, 2025
Non-Final Rejection — §101, §103
Jan 07, 2026
Response Filed
Mar 02, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 35%
With Interview: 72% (+36.9%)
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
