Prosecution Insights
Last updated: April 19, 2026
Application No. 18/897,055

METHODS AND SYSTEMS FOR OPTIMIZING PERFORMANCE OF ENTERPRISE OPERATIONS USING MATURITY ASSESSMENT

Status: Final Rejection (§101)
Filed: Sep 26, 2024
Examiner: CRANDALL, RICHARD W.
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Tata Consultancy Services Limited
OA Round: 2 (Final)
Grant Probability: 30% (At Risk)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability With Interview: 64%

Examiner Intelligence

Career Allow Rate: 30% (90 granted / 301 resolved; -22.1% vs. Tech Center average)
Interview Lift: +33.8% (based on resolved cases with an interview)
Typical Timeline: 3y 1m average prosecution
Career History: 343 total applications across all art units; 42 currently pending

Statute-Specific Performance

§101: 34.6% (-5.4% vs. TC avg)
§103: 37.1% (-2.9% vs. TC avg)
§102: 8.3% (-31.7% vs. TC avg)
§112: 15.4% (-24.6% vs. TC avg)
Compared against Tech Center average estimates • Based on career data from 301 resolved cases

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to correspondence received December 3, 2025. Claims 1, 6, 8, 13, 15, and 19 are amended. Claims 2, 9, and 16 are canceled. Claims 1, 3-8, 10-15, and 17-20 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 8, and 15, which are similar in scope, recite the following abstract idea:

A method, comprising the steps of:

selecting (i) an industry type from a list of industry types including banking and financial services, consumer packaged goods, energy and resources, healthcare, high-tech and professional services, insurance, life sciences, manufacturing, media and information services, retail, telecom, travel, transportation and hospitality, and utilities, (ii) an enterprise operation of one or more enterprise operations associated with each industry type, and (iii) an enterprise persona of one or more enterprise personas associated with each enterprise operation of the one or more enterprise operations, whose maturity is to be assessed and a performance is to be optimized, and the one or more enterprise operations comprises one or more defined business operations and one or more defined information technology (IT) operations, wherein the industry type, the enterprise operation and the enterprise persona are selected by a user;

populating dynamically, one after the other, one or more performance evaluators, based on an industry type, the enterprise operation, and the enterprise persona selected, whose maturity is to be assessed and the performance is to be optimized;

providing an individual performance value for each performance evaluator of the one or more performance evaluators, wherein the individual performance value for each performance evaluator is an actual performance level and is provided from a predefined performance range defined for each performance evaluator;

computing an individual maturity score for each performance evaluator of the one or more performance evaluators, based on (i) the individual performance value for an associated performance evaluator, (ii) a plurality of maturity levels defined for the associated performance evaluator, and (iii) one or more maturity ratings defined for the associated performance evaluator, wherein the plurality of maturity levels pertains to initial (L1), resilient (L2), adaptive (L3), and cognitive (L4), and wherein the initial (L1) is a lowest maturity level and the cognitive (L4) is a highest maturity level;

calculating an enterprise operation maturity score for each enterprise operation selected, based on the individual maturity score for each performance evaluator of the one or more performance evaluators;

dynamically providing one or more recommended solutions for each performance evaluator, based on the individual maturity score for the associated performance evaluator, to improve the maturity score for each enterprise operation selected, wherein the one or more recommended solutions for each performance evaluator are defined for either an incremental improvement or a best-in-class level improvement in the individual maturity score of each enterprise operation selected, wherein when the current maturity level of the enterprise operation through each performance evaluator stands at initial (L1), then the one or more recommended solutions for each performance evaluator are defined for either the incremental improvement as the resilient (L2), or the best-in-class level improvement as the cognitive (L4), in the individual maturity score of each enterprise operation selected, wherein the one or more recommended solutions for each performance evaluator, the enterprise operation maturity score of each enterprise operation, and the individual maturity score for each performance evaluator of the one or more performance evaluators are stored for further use;

simulating the one or more recommended solutions provided for each performance evaluator, to validate the one or more recommended solutions provided for each performance evaluator, wherein the simulation comprises: choosing one or more relevant recommended solutions for each performance evaluator out of the one or more recommended solutions provided for the associated performance evaluator, based on a relevance; computing an individual simulated maturity score for each performance evaluator, based on (i) a current individual performance metric for the associated performance evaluator, (ii) the plurality of maturity levels defined for the associated performance evaluator, (iii) the one or more relevant recommended solutions provided for the associated performance evaluator, and (iv) an impact weight value defined for each of the one or more relevant recommended solutions provided for the associated performance evaluator; and calculating an enterprise operation simulated maturity score for each enterprise operation, based on the individual simulated maturity score for each performance evaluator of the one or more performance evaluators, to allow for optimizing performance of each enterprise operation; and

[presenting] the one or more recommended solutions in an enterprise operation maturity assessment report, wherein the enterprise operation maturity assessment report includes the enterprise operation maturity score of each enterprise operation and laggards and best-in-class benchmarks for the individual maturity score for each performance evaluator of the one or more performance evaluators.

These steps are a mental process, as they can be performed, one after another, mentally or with pen and paper. This includes the dynamic steps, for mental processes are inherently dynamic: one formulates thoughts and then expresses them, which is dynamic ("dynamically, one after the other"). The simulation is a mental process because, as presented above, it is a series of evaluation and judgment steps that one could carry out dynamically in one's mind or with pen and paper. These steps are also a certain method of organizing human activity (business relations), as they are steps to evaluate performance of various aspects of an enterprise. Moreover, the steps above cannot be conceived as anything else: this is not a technical process, nor anything except steps for business evaluation. Therefore, the steps can only be understood as a patent-ineligible abstract idea.

This judicial exception is not integrated into a practical application. The additional elements, both alone and in combination, amount to no more than applying generic computing components and machinery, used in their ordinary capacity, to the abstract idea. In combination, the elements of generic computing ("processor implemented"; system; CRM), plus storing in a knowledge engine that has cognitive intelligence for performing a plurality of features and dynamic decision making, and displaying in a GUI, amount to a computer with a GUI display and a knowledge engine with various capabilities. In other words, a computer with a screen and an engine element carrying software that is defined only in terms of its functional result, together with the ability to store data (for example, accessing stored data on a drive). This is tantamount to reciting a computer with software and data stored on it. See MPEP 2106.05(f)(1-2).
The additional elements of claim 1 are:
- processor-implemented via one or more hardware processors;
- displaying through a graphical user interface;
- storing in a knowledge engine for further use, wherein the knowledge engine includes cognitive intelligence for performing a plurality of features and dynamic decision making.

The additional elements of claim 8 are:
- a system comprising: a memory storing instructions; one or more input/output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to [perform the recited steps];
- storing in a knowledge engine for further use, wherein the knowledge engine includes cognitive intelligence;
- displaying through a graphical interface.

The additional elements of claim 15 are:
- one or more non-transitory machine-readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause [the recited steps];
- storing in a knowledge engine for further use, wherein the knowledge engine includes cognitive intelligence;
- displaying through a graphical interface.

As these amount to no more than instructions to perform the abstract idea on a computer, they are not a practical application of the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the reasoning in Prong 2 (practical application) carries over here: for the same reasons that the combination of elements, amounting to applying an abstract idea on a computer, is not a practical application, it is not significantly more. Therefore, the independent claims recite patent-ineligible subject matter.

Claims 3-6, 10-13, and 17-19 further define the abstract idea. A simulation is merely stepping through rules, under a broadest reasonable interpretation.
See, for example, how the simulation is defined at the end of the independent claims: merely a series of steps one could follow mentally or on paper, or alternatively a commercial interaction as a certain method of organizing human activity (evaluating a business).

Claims 7, 14, and 20 further define the abstract idea but also recite using a benefit computation algorithm. Using a benefit computation algorithm is applying an algorithm, which may itself be an abstract idea or alternatively is an "apply it" limitation. As the benefit computation algorithm is broadly defined, either interpretation is appropriate.

Therefore, claims 1, 3-8, 10-15, and 17-20 are rejected under 35 USC 101.

Response to Remarks - 35 USC 101

Examiner has carefully considered Applicant's arguments in full; the responses follow.

Applicant asserts that integration of the judicial exception into a practical application is achieved in terms of an improvement to computing technology and/or improving the functionality of the computer (MPEP §§ 2106.04(d)(1) and 2106.05(a)) with the capability of computing a maturity score using actual performance values drawn from predefined ranges and at four maturity levels, as stated in the claim limitation "computing, via the one or more hardware processors, ... highest maturity level." Applicant's reasoning is: "improving the functionality of the computer (MPEP §§ 2106.04(d)(1) and 2106.05(a)) with the capability of transitioning of the one or more recommended solutions for each performance evaluator as either an incremental improvement as the resilient (L2), or a best-in-class level improvement as the cognitive (L4) in the individual maturity score of each enterprise operation selected, when a current maturity level of the enterprise operation through each performance evaluator stands at the initial (L1)."

This initial reasoning is unpersuasive. The improvement, if any, is to determining performance evaluators. This is using a computer to perform that step, which is something found in business consulting; but its field is unimportant, because no one of ordinary skill in the art would recognize this as a technical improvement. There is no specification support for a technical improvement, nor is one clear on its face, nor has Applicant asserted one. These limitations are the abstract idea and are not technical in nature. See MPEP 2106.05(a), under which one of ordinary skill would have to recognize the technical improvement.

Applicant then remarks: "Further, using knowledge engine with cognitive intelligence for dynamic decision making to store recommended solutions and regularly updating the knowledge engine for future use. Therefore ... dynamic decision making." This is also unpersuasive, as this is storing "for further use," which is not claimed. Storing data, without more, is not a technical improvement of a computer or computer-related component.

Applicant then remarks: "Applicant asserts that integration of judicial exception into the practical application is achieved in terms of an improvement to computing technology and/or improving the functionality of the computer (MPEP §§ 2106.04(d)(1) and 2106.05(a)) with the capability of improving the value of the maturity score by simulation and optimizing the performance of such enterprise operation efficiently." This is not persuasive, as the only thing made more efficient is the value of the maturity score. This is similar to "[a] process for monitoring audit log data that is executed on a general-purpose computer where the increased speed in the process comes solely from the capabilities of the general-purpose computer," FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); MPEP 2106.05(f)(2).
Just like in that case, Applicant's only claim to improvement is that "a process is made faster." That speed, however, is due to the computer and does not improve the computer. Therefore this argument is unpersuasive.

Applicant then "asserts technical advancement," but this assertion is not one that a person of ordinary skill in the art would recognize. See id.

Applicant then argues: "Applicant asserts that an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception with the capability." Applicant states that the other meaningful way is the displaying step, which is an "apply it" step, as generic computers (smartphones, laptops, etc.) have screens to display the results of calculations.

Every one of Applicant's paragraphs included in the argument section, as well as the included diagrams, has been reviewed, and none is persuasive: the application, as one of ordinary skill in the art would recognize, is just steps for making performance evaluations and calculations for businesses, applied to a computer. This process is almost entirely an abstract idea (see the rejection, Step 2A, Prong 1) with minimal computer elements applied to it.

Applicant then argues Example 39, which is inapt, as there is no neural network improvement or any analog to one. The only improvement, which is non-technical here, is to a process for optimizing an enterprise operation maturity assessment report and other terms, all of which can be seen in the identified abstract idea in the rejection above. Applicant then argues Core Wireless, but there is no technical limitation to a small screen or otherwise, no specific GUI limitations, and no analogs to these.

None of the comparisons made by Applicant are persuasive, as they attempt to compare the abstract idea of these claims (see the elements identified as the abstract idea, an identification Examiner maintains) to what the court found was not an abstract idea in Core Wireless. The only thing in common is that both involve "displaying" (regarding Applicant's argument), and that is insufficient because generic computers display data. Also see Electric Power Group, which is similar to Applicant's claims: collecting data, analyzing data, and displaying the results.

The Step 2B arguments are unpersuasive, as they attempt to fold the abstract idea into the "significantly more" analysis, which considers only the combination of additional elements. The steps for simulating performance evaluation, in the language claimed by Applicant and shown above in the 101 rejection, are the abstract idea and have no bearing on the "significantly more" step; they are properly identified at the abstract idea stage. Nothing that Applicant identified as significantly more is significantly more than the abstract idea, and most of it is the abstract idea itself. Which parts are which is identified in the rejection above, and that rejection is maintained. Applicant's disagreement with the identification of the abstract idea limitations and the additional elements is noted but is unpersuasive, as it does not follow the guidance set by the USPTO. Examiner has carefully considered the arguments, but this is a clear 101 with no ambiguity or "grey area," and the rejection is maintained. Therefore the 101 rejection is affirmed.

35 USC 102

Applicant has overcome the prior art rejection through amendment.

Prior Art Considered Relevant

The following prior art is considered relevant to Applicant's claims:

Manjarekar, US PGPUB 20140142990 A1, teaches in par. 023 a dashboard to view different kinds of business data, and in Table 1 insurance-related KPIs.
Marks et al., US PGPUB 20160110664 A1, teaches in Table 1 evaluations of people in terms of compliance with Sarbanes-Oxley.

Mayerle, US PGPUB 20130346161 A1, teaches root cause analysis for benchmarking against particular KPIs in par. 051, including for manufacturing, accounting, or sales; see par. 028.

Ananthan, Ray, "How to choose the right API metrics to track integration performance," MuleSoft Blog [online], published November 22, 2021, available at: <https://blogs.mulesoft.com/api-integration/api-metrics-to-track-integration-performance/>. Teaches using API metrics, focusing on trends, to select metrics that are quantifiable and quantitative, including financial metrics.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD W. CRANDALL, whose telephone number is (313) 446-6562. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Coupe, can be reached at (571) 270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD W. CRANDALL/
Primary Examiner, Art Unit 3619

Prosecution Timeline

Sep 26, 2024: Application Filed
Aug 30, 2025: Non-Final Rejection (§101)
Dec 03, 2025: Response Filed
Feb 11, 2026: Final Rejection (§101, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602666: INFORMATION HANDLING SYSTEM MICRO MANUFACTURING CENTER FOR REUSE AND RECYCLING FACTORING INVENTORY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591589: DECENTRALIZED WILL MANAGEMENT APPARATUS, SYSTEMS AND RELATED METHODS OF USE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12541382: USER PERSONA INJECTION FOR TASK-ORIENTED VIRTUAL ASSISTANTS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12537090: METHOD AND SYSTEM FOR RULE-BASED ANONYMIZED DISPLAY AND DATA EXPORT (granted Jan 27, 2026; 2y 5m to grant)
Patent 12530694: USING ENTITLEMENTS DEPLOYED ON BLOCKCHAIN TO MANAGE CUSTOMER EXPERIENCES (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 30%
With Interview: 64% (+33.8%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 301 resolved cases by this examiner. Grant probability derived from career allow rate.
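The projection figures are mutually consistent: 90 grants out of 301 resolved cases rounds to the 30% career allow rate, and adding the +33.8-point interview lift gives the 64% shown. A minimal sketch of that arithmetic (assuming, as the displayed numbers suggest, that the lift is additive in percentage points; the variable names are illustrative, not the vendor's actual model):

```python
# Reconstructing the dashboard's headline probabilities from its own inputs.
granted = 90            # examiner's career grants (from the dashboard)
resolved = 301          # resolved cases (from the dashboard)
interview_lift = 0.338  # +33.8 percentage points with an interview

base_rate = granted / resolved            # career allow rate, ~0.299
with_interview = base_rate + interview_lift  # ~0.637

print(f"Career allow rate: {base_rate:.1%}")    # ~29.9%, displayed as 30%
print(f"With interview:    {with_interview:.1%}")  # ~63.7%, displayed as 64%
```

Note that this treats the interview lift as an absolute shift in grant probability rather than a relative multiplier; a relative lift (30% x 1.338 ≈ 40%) would not reproduce the displayed 64%.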
