Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,006

METHOD FOR ONLINE EVALUATION AND ONLINE SERVER FOR EVALUATION

Status: Final Rejection (§101)
Filed: Jun 16, 2023
Examiner: GURSKI, AMANDA KAREN
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: VISITS Technologies, Inc.
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 66%

Examiner Intelligence

Career Allow Rate: 32% (129 granted / 398 resolved; -19.6% vs TC avg)
Interview Lift: +33.3% for resolved cases with an interview
Avg Prosecution: 3y 7m (30 currently pending)
Total Applications: 428 across all art units

Statute-Specific Performance

§101: 39.4% (-0.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 11.6% (-28.4% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 398 resolved cases

Office Action

§101
DETAILED ACTION

This office action is in response to communication filed on 11 December 2025. Claims 1, 4, 7 – 15, and 18 – 22 are presented for examination. The following is a FINAL office action upon examination of application number 18/211006. Claims 1, 4, 7 – 15, and 18 – 22 are pending in the application and have been examined on the merits discussed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In the response filed 11 December 2025, Applicant amended claims 1, 4, 12, and 15. Applicant cancelled claims 2, 3, 5, 6, 16, and 17. The amendments to claims 1, 4, 12, and 15 are insufficient to overcome the 35 USC § 101 rejection. Therefore, the 35 USC § 101 rejection of claims 1, 4, 7 – 15, and 18 – 22 is maintained.

Response to Arguments

Applicant's arguments filed 11 December 2025 have been fully considered, but they are not persuasive. In the remarks regarding the 35 USC 101 rejection, Applicant argues that the claims now recite an improvement to a technical field or technology. Examiner respectfully disagrees. Applicant describes improvements to the scoring technique: that it is more reliable, objective, stable, efficiently processed, and reproducible, and that the score calculation is more explainable and transparent. These improvements, while not currently claimed, still describe an improvement to the business method and not to any technology on which it is implemented. While improvements to the scoring technique are desirable in the real world, with respect to subject matter eligibility they do not amount to significantly more than the abstract ideas claimed throughout. The breakdown of these problems and improvements is appreciated; however, the argued technical improvement is either unclaimed or it is an improvement to the abstract idea itself. 
Applicant describes how the amended limitations in the independent claims “provide a more balanced and reliable evaluation.” That may be true, but again, that is an improvement to evaluating, which is an abstract function and not a technical field or technology. The claims remain properly rejected under 35 USC 101 as being directed to abstract ideas without significantly more.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4, 7 – 15, and 18 – 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the judicial exception of abstract ideas without significantly more. The claims recite allocating evaluators who should evaluate the plurality of evaluation targets that are stored in the evaluation target data storage part and are assigned identifiers, from among a plurality of evaluators who are assigned identifiers as evaluators of a current evaluation session, extracting the data related to the plurality of evaluation targets from the evaluation target data storage part according to a result, extracting question data related to a predetermined theme from a question data storage part, extracting the first format data from the first format data storage part, and transmitting the data related to the plurality of evaluation targets, the question data, and the first format data, receiving the evaluation result data including evaluations of the evaluation targets input by each evaluator in the selective evaluation input section, assigning an identifier to each of the evaluation result data that have been received, and storing the evaluation result data in the evaluation result data storage part in association with the identifier of each evaluator who has transmitted the evaluation result data and the 
identifier of each evaluation target, analyzing a degree of strictness of the evaluation of each evaluator for each evaluation axis based on the evaluation input in the selective evaluation input section by each evaluator in the evaluation result data stored in the evaluation result data storage part, and calculating a corrected evaluation by correcting the evaluation such that the evaluation by the evaluator who gives a strict evaluation rises relatively and the evaluation by the evaluator who gives a lax evaluation decreases relatively, and storing the corrected evaluation in the evaluation result data storage part in association with the identifier of each evaluator and the identifier of each evaluation target, aggregating the evaluations of each evaluation target based on the corrected evaluation and the identifier of the evaluation target stored in the evaluation result data storage part to calculate the provisional score of each evaluation target for each evaluation axis, and storing the provisional score in the evaluation target score data storage part in association with the identifier of each evaluation target, comparing for each evaluation axis the corrected evaluation of each evaluation target associated with the identifier of the evaluator stored in the evaluation result data storage part with the provisional score of each evaluation target stored in the evaluation target score data storage part, aggregating closeness between them for each evaluator to calculate the evaluation ability score of each evaluator, and storing the evaluation ability score in the evaluator score data storage part in association with the identifier of each evaluator, aggregating the evaluations for each evaluation target based on the corrected evaluation, the identifier of the evaluators and the identifier of the evaluation target stored in the evaluation result data storage part, and the evaluation ability score of each evaluator stored in the evaluator score data storage 
part, to calculate the corrected score of each evaluation target for each evaluation axis, on condition that a greater weighting is given to the evaluation by the evaluator with a higher evaluation ability score, and storing the corrected score in the evaluation target score data storage part in association with the identifier of each evaluation target, repeating previous 2 steps in order for the evaluation ability score and the corrected score of each evaluation target to converge, wherein repetition of comparison step is terminated when either or both of the following conditions (a) and (b) are satisfied: each time comparison step is repeated, calculating a difference or rate of change for each evaluation axis between a latest evaluation ability score and a previous evaluation ability score of each evaluator, and when judging whether or not the difference or rate of change satisfies a preset condition for each evaluator, the preset condition is satisfied for all the evaluators, extracting either or both of the following data (1) and (2), and transmitting them: (1) data related to the evaluation targets, including the corrected score itself of each evaluation target for each evaluation axis and/or a statistic calculated based on the corrected score, stored in the evaluation target score data storage part. (2) data related to the evaluators, including the evaluation ability score itself of each evaluator and/or a statistic calculated based on the evaluation ability score, stored in the evaluator score data storage part.

This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with section 2106 of the MPEP (hereinafter, MPEP 2106). 
With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is noted that the methods and servers (apparatus) are directed to eligible categories of subject matter. Step 1 is satisfied.

With respect to Step 2A Prong One of MPEP 2106, it is next noted that the claims recite an abstract idea by reciting concepts of personal evaluation of an employee, which falls into the “certain methods of organizing human activity” group within the enumerated groupings of abstract ideas set forth in the MPEP 2106, as this pertains to business relations and managing interactions between people. The claimed invention also recites an abstract idea that falls within the mental processes grouping, as the claims describe calculating scores and comparison steps which do not require technology to be performed and can be done in the human mind through decision making.

The limitations reciting the abstract idea in the independent claims are allocating evaluators who should evaluate the plurality of evaluation targets that are stored in the evaluation target data storage part and are assigned identifiers, from among a plurality of evaluators who are assigned identifiers as evaluators of a current evaluation session, extracting the data related to the plurality of evaluation targets from the evaluation target data storage part according to a result, extracting question data related to a predetermined theme from a question data storage part, extracting the first format data from the first format data storage part, and transmitting the data related to the plurality of evaluation targets, the question data, and the first format data, receiving the evaluation result data including evaluations of the evaluation targets input by each evaluator in the selective evaluation input section, assigning an identifier to each of the evaluation result data that have been received, and storing the evaluation result data in the evaluation result data storage part in association with the identifier 
of each evaluator who has transmitted the evaluation result data and the identifier of each evaluation target, analyzing a degree of strictness of the evaluation of each evaluator for each evaluation axis based on the evaluation input in the selective evaluation input section by each evaluator in the evaluation result data stored in the evaluation result data storage part, and calculating a corrected evaluation by correcting the evaluation such that the evaluation by the evaluator who gives a strict evaluation rises relatively and the evaluation by the evaluator who gives a lax evaluation decreases relatively, and storing the corrected evaluation in the evaluation result data storage part in association with the identifier of each evaluator and the identifier of each evaluation target, aggregating the evaluations of each evaluation target based on the corrected evaluation and the identifier of the evaluation target stored in the evaluation result data storage part to calculate the provisional score of each evaluation target for each evaluation axis, and storing the provisional score in the evaluation target score data storage part in association with the identifier of each evaluation target, comparing for each evaluation axis the corrected evaluation of each evaluation target associated with the identifier of the evaluator stored in the evaluation result data storage part with the provisional score of each evaluation target stored in the evaluation target score data storage part, aggregating closeness between them for each evaluator to calculate the evaluation ability score of each evaluator, and storing the evaluation ability score in the evaluator score data storage part in association with the identifier of each evaluator, aggregating the evaluations for each evaluation target based on the corrected evaluation, the identifier of the evaluators and the identifier of the evaluation target stored in the evaluation result data storage part, and the evaluation 
ability score of each evaluator stored in the evaluator score data storage part, to calculate the corrected score of each evaluation target for each evaluation axis, on condition that a greater weighting is given to the evaluation by the evaluator with a higher evaluation ability score, and storing the corrected score in the evaluation target score data storage part in association with the identifier of each evaluation target, repeating previous 2 steps in order for the evaluation ability score and the corrected score of each evaluation target to converge, wherein repetition of comparison step is terminated when either or both of the following conditions (a) and (b) are satisfied: each time comparison step is repeated, calculating a difference or rate of change for each evaluation axis between a latest evaluation ability score and a previous evaluation ability score of each evaluator, and when judging whether or not the difference or rate of change satisfies a preset condition for each evaluator, the preset condition is satisfied for all the evaluators, extracting either or both of the following data (1) and (2), and transmitting them: (1) data related to the evaluation targets, including the corrected score itself of each evaluation target for each evaluation axis and/or a statistic calculated based on the corrected score, stored in the evaluation target score data storage part. (2) data related to the evaluators, including the evaluation ability score itself of each evaluator and/or a statistic calculated based on the evaluation ability score, stored in the evaluator score data storage part.

With respect to Step 2A Prong Two of the MPEP 2106, the judicial exception is not integrated into a practical application. The additional elements in the independent claims are directed to servers, terminals, network, transceiver, control unit, and storage unit, to implement the abstract idea. 
However, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Furthermore, these elements have been fully considered; however, they are directed to the use of generic computing elements to perform the abstract idea, which is not sufficient to amount to a practical application (as noted in MPEP 2106). This is tantamount to simply saying “apply it” using a general purpose computer, which merely serves to tie the abstract idea to a particular technological environment by using the computer as a tool to perform the abstract idea, and that is not sufficient to amount to a practical application.

Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to: servers, terminals, network, transceiver, control unit, and storage unit. These elements have been considered, but they merely serve to tie the invention to a particular operating environment, at a very high level of generality and without imposing meaningful limitation on the scope of the claim. 
This does not amount to significantly more than the abstract idea, and it is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application, nor does the ordered combination amount to significantly more than the abstract idea itself.

The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the abstract idea, reciting, by way of example, concepts of repetition of the independent-claim steps, calculating similarity between targets, and calculating other scores, without integrating it into a practical application and with, at most, a general purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. 
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA GURSKI, whose telephone number is (571) 270-5961. The examiner can normally be reached Monday to Thursday, 7am to 5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Epstein, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMANDA GURSKI/
Primary Examiner, Art Unit 3625
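The scoring procedure recited in the claims above amounts to an iterative reweighting loop: correct each evaluator's ratings for strictness, compute provisional per-target scores, score each evaluator by how close their corrected ratings sit to those scores, reweight the target scores by evaluator ability, and repeat until the ability scores converge. The sketch below is hypothetical: the claims do not fix the concrete formulas, so the mean-centering strictness correction, the 1 / (1 + mean absolute deviation) closeness metric, and the additive weighting are all illustrative assumptions, as are the function and variable names.

```python
# Hypothetical sketch of the claimed iterative scoring loop (single
# evaluation axis). Concrete formulas are illustrative assumptions;
# the claims leave them open. Assumes every evaluator rates every target.

def iterate_scores(ratings, tol=1e-6, max_rounds=100):
    """ratings: dict mapping (evaluator, target) -> raw evaluation."""
    evaluators = sorted({e for e, _ in ratings})
    targets = sorted({t for _, t in ratings})

    # Strictness correction: remove each evaluator's offset from the
    # global mean, so strict evaluators' scores rise relatively and
    # lax evaluators' scores fall relatively.
    overall = sum(ratings.values()) / len(ratings)
    corrected = {}
    for e in evaluators:
        mine = [v for (ev, _), v in ratings.items() if ev == e]
        bias = sum(mine) / len(mine) - overall
        for t in targets:
            corrected[(e, t)] = ratings[(e, t)] - bias

    # Provisional score: plain mean of corrected evaluations per target.
    score = {t: sum(corrected[(e, t)] for e in evaluators) / len(evaluators)
             for t in targets}

    ability = {e: 1.0 for e in evaluators}
    for _ in range(max_rounds):
        # Evaluation ability: closeness of an evaluator's corrected
        # evaluations to the current per-target scores.
        new_ability = {}
        for e in evaluators:
            dev = sum(abs(corrected[(e, t)] - score[t])
                      for t in targets) / len(targets)
            new_ability[e] = 1.0 / (1.0 + dev)

        # Corrected score: ability-weighted mean per target, so higher-
        # ability evaluators carry greater weight.
        total = sum(new_ability.values())
        score = {t: sum(new_ability[e] * corrected[(e, t)]
                        for e in evaluators) / total
                 for t in targets}

        # Termination condition (a) from the claims: ability scores have
        # stopped changing for all evaluators. max_rounds is a safety cap.
        if all(abs(new_ability[e] - ability[e]) < tol for e in evaluators):
            ability = new_ability
            break
        ability = new_ability

    return score, ability
```

For example, if a strict evaluator rates every target two points below a lax one, the strictness correction aligns their corrected evaluations before any ability weighting occurs, so both end up with equal ability scores.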

Prosecution Timeline

Jun 16, 2023: Application Filed
Aug 09, 2025: Non-Final Rejection (§101)
Nov 06, 2025: Applicant Interview (Telephonic)
Nov 06, 2025: Examiner Interview Summary
Dec 11, 2025: Response Filed
Jan 09, 2026: Final Rejection (§101) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596982
SUSTAINABILITY RECOMMENDATIONS FOR HYDROCARBON OPERATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12572865
Automatic and Dynamic Adaptation of Hierarchical Reconciliation for Time Series Forecasting
2y 5m to grant Granted Mar 10, 2026
Patent 12541734
SYSTEMS AND METHODS FOR BOOTSTRAP SCHEDULING
2y 5m to grant Granted Feb 03, 2026
Patent 12481963
PROACTIVE SCHEDULING OF SHARED RESOURCES OR RESPONSIBILITIES
2y 5m to grant Granted Nov 25, 2025
Patent 12387284
UTILIZING DIGITAL SIGNALS TO INTELLIGENTLY MONITOR CLIENT DEVICE TRANSIT PROGRESS AND GENERATE DYNAMIC PUBLIC TRANSIT INTERFACES
2y 5m to grant Granted Aug 12, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 32%
With Interview: 66% (+33.3%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 398 resolved cases by this examiner. Grant probability derived from career allow rate.
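The displayed figures appear to follow from the examiner's career counts: 129 grants out of 398 resolved cases gives the 32% grant probability, and adding the 33.3-point interview lift reproduces the 66% with-interview figure. Treating the lift as additive percentage points is an assumption, made here only because it matches the numbers shown.

```python
# Reproducing the dashboard's derived figures from the career counts.
# The additive interpretation of the interview lift is an assumption.
granted, resolved = 129, 398
allow_rate = granted / resolved           # career allow rate
with_interview = allow_rate + 0.333       # +33.3 percentage points

print(f"{allow_rate:.1%}")                # 32.4%, shown as 32%
print(f"{with_interview:.0%}")            # 66%
```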
