DETAILED ACTION
This communication is a Final Office Action rejection on the merits. Claims 1, 4-8, 11-15, and 18-20 are currently pending and have been addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 03/13/2025 (related to the 102/103 rejections) have been fully considered and are persuasive. Although new prior art is added in this Office Action, Examiner agrees that Viswanath et al. does not anticipate all of the features of amended independent claims 1, 8, and 15, nor does the remaining prior art of record remedy the deficiencies found in the cited prior art. Therefore, the independent claims have potential allowable subject matter, as the prior art references do not teach the invention individually or in combination. Also, dependent claims 4-7, 11-14, and 18-20 have potential allowable subject matter for the same reasons as those set forth with respect to the claims from which they depend, independent claims 1, 8, and 15.
Applicant's arguments filed on 09/24/2025 (related to the 101 Rejection) have been fully considered but they are not persuasive.
Applicant states, on pages 11-16, that Applicant's claimed subject matter of an accurate identification of improvement opportunities in enterprise operations is not directed to "the abstract idea of using a mathematical equation" even though they may have involved mathematical concepts.
Examiner respectfully disagrees with Applicant. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “managing personal behavior.” In this case, recommending a set of improvement opportunities in response to detecting a deviation from the benchmark value is considered a social activity (see MPEP 2106.04(a)(2)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Applicant further states, on pages 11-19, that the claims recite a method for recommending improvement opportunities in enterprise operations and thereby integrate the exception into a practical application. Applicant asserts that integration of the judicial exception into a practical application is achieved in terms of an improvement to computing technology (MPEP §§ 2106.04(d)(1) and 2106.05(a)), with the capability of running a real-time search of a benchmark value for performance metrics such as a turnaround time and, from the various improvements identified, running a secondary-level search on a customer satisfaction survey by triggering a string analysis. Applicant also asserts that the claims recite additional element(s) that amount to significantly more than the judicial exception(s).
Examiner respectfully disagrees with Applicant. The mere nominal recitation of generic computer components does not take the claim out of the certain methods of organizing human activity grouping. The cognitive data analyzer is merely used to analyze the set of performance data with a set of benchmark value (Paragraph 0008). The plurality of source business applications is merely used to obtain current statistics about the type of project of each business (Paragraph 0020). The agility recommender technique is merely used to: compute a deviation identified from the set of performance data compared with the set of benchmark value (Paragraph 0024); and recommend new opportunities with precision (Paragraph 0027). The API is merely used to receive performance data to recommend improvement opportunities associated with the enterprise operations (Paragraph 0022). The main functions of those additional elements are merely used to collect data (e.g., a set of performance data being associated with the enterprise operations such as structured and unstructured data), analyze the data (e.g., analyze the set of performance data with a set of benchmark value, identify a deviation/gap from the benchmark, and determine a set of improvement opportunities), and display certain results of the collection and analysis (e.g., recommend the set of improvement opportunities). Those are functions that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).
The value log builder is merely used to create a value log entry for a recommended solution (Paragraph 0020). The feedback analyzer is merely used to receive feedback from the value log user interface for entries in the database (Paragraph 0020). The user interface is merely used to receive feedback (Paragraph 0020). The best practice database is merely used to store relevant best practices (Figure 2). The sklearn is merely used to compute a contextual intercept (Paragraph 0024). The Automated Classification machine learning algorithm is merely used to learn suggested keywords (Figure 2). In this case, although the feedback analyzer receives feedback from the user specifying which recommended solutions should be included in the database, the claim and specification do not provide any specific details about how the feedback is analyzed and used to improve the accuracy of the recommendation system. Also, the claim and specification do not provide any specific details of how the machine learning operates or how the suggested keywords are generated (see 2024 AI Guidance, Example 47, claim 2). Further, the step of “dynamically receiving one or more feedback parameters to update the machine learning” is considered a well-understood, routine, and conventional function since it is just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, these additional elements of “value log builder,” “sklearn,” “machine learning,” and “feedback analyzer” are recited at a high level of generality, which results in “apply it” (MPEP 2106.05(f)).
The string analysis function is merely used to trigger a customer satisfaction survey when the aspirational customer satisfaction index value is less than the customer satisfaction index (Paragraph 0023). However, the claim and specification do not provide any specific details of how the customer survey is analyzed and used to generate improvement opportunities, which is merely claiming the idea of a solution or outcome (MPEP 2106.05(a)). The steps of “dynamically updating a benchmark value” and “searching for relevant best practices” are considered well-understood, routine, and conventional functions since they are just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)).
Lastly, the self-heal processor is merely used to check the availability of bots matching the value log entries (Paragraph 0020). In this case, the steps of “checking availability” and “deploying bots” do not specify a particular solution to a problem or a particular way to achieve a desired outcome, which is merely claiming the idea of a solution or outcome (MPEP 2106.05(a)).
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself. Thus, the claim is not patent eligible.
Applicant further states, on pages 16-19, that the claims recite additional element(s) that amount to significantly more than the judicial exception(s). Applicant's claimed invention recites technical advancements in terms of an accurate identification of improvement opportunities, which results in better business operations with greater agility. The system and method of the present disclosure are asserted to be accurate, time efficient, and scalable for agility operations. The feedback analyzer receives feedback from the Value Log User Interface, analyzes it, and searches for similar patterns in the best practices database; if a match is found, the best practice details are passed on to the Value Log builder. The feedback analyzer formulates a new logic based on details in the selected best practices database entries and recommends a change in algorithm to the Value Log builder. Hence, Applicant asserts, the presently amended claims amount to significantly more than the abstract idea.
Examiner respectfully disagrees with Applicant. As explained previously, the additional elements are recited at a high level of generality, which results in mere instructions to “apply” the exception on a computer (MPEP 2106.05(f)). The feedback analyzer is merely used to receive feedback from the value log user interface for entries in the database (Paragraph 0020). Although the feedback analyzer receives feedback from the user specifying which recommended solutions should be included in the database, the claim and specification do not provide any specific details about how the feedback is analyzed and used to improve the accuracy of the recommendation system (see 2024 AI Guidance, Example 47, claim 2). Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself.
In the arguments, Applicant states that the feedback analyzer is further used to formulate a new logic based on details in selected best practices database entries and to recommend a change in algorithm to the Value Log builder. However, the claim and specification do not specify any of those functions (e.g., there are no details of how the new logic is formulated). Clarification is requested.
Independent claims 8 and 15 recite similar features and therefore are rejected for the same reasons as independent claim 1. Claims 4-7, 11-14, and 18-20 are rejected for having the same deficiencies as those set forth with respect to the claims from which they depend, independent claims 1, 8, and 15.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-8, 11-15, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.
Independent Claim 1
Step One - Pursuant to Step 1 of the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to a method, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: An implemented method to recommend improvement opportunities in enterprise operations, the implemented method comprising: receiving a set of performance data being associated with the enterprise operations, wherein the set of performance data includes a structured data, and an unstructured data, wherein the set of performance data comprises a key performance indicator (KPI), benchmark data, and text data associated with a customer satisfaction survey associated with the enterprise operations and customer escalation, the KPI comprising data related to at least one of IT service management (ITSM) and Business Process Management (BPM); analyzing the set of performance data with a set of benchmark value which is an indicative factor of an enterprise operations agility, wherein analyzing the set of performance data with the set of benchmark value comprises interfacing with a plurality of source business associated with the enterprise operations for obtaining a current statistics about a type of project of each business comprising a project demographics for analyzing the set of performance data, the current statistics about the type of project of each business associated with the plurality of source business applications being used for identifying one or more gaps of business applications; identifying a deviation associated with the analyzed set of performance data by comparing the set of performance data with the set of benchmark value dynamically, compares a turnaround time with the set of benchmark value corresponding to the type of project of each business and performs an another level of validation using the unstructured data for the customer satisfaction survey when an aspirational customer satisfaction index value is less, wherein comprises: computing the contextual factor based on at least one of (i) a plurality of contextual parameters, (ii) a contextual intercept, and (iii) a coefficient of the contextual intercepts, 
wherein the plurality of contextual parameters are extracted from the set of performance data, wherein the plurality of contextual parameters comprises a team size, a team skill, a line of business, and a technical stack, wherein the plurality of contextual parameters changes according to a type of business associated with the enterprise operations, and wherein the contextual intercept is determined, and computing the affinity factor based on (i) a plurality of affinity parameters, and (ii) a contextual delta; computing a plurality of agility performance parameters, based on the deviation identified from the analyzed set of performance data compared with the set of benchmark value, wherein the analyzed set of performance data are processed, and wherein the plurality of agility performance parameters comprises a contextual factor and an affinity factor; determining a set of improvement opportunities to recommend the enterprise operations based on the plurality of agility performance parameters compared with historical data; generating a plurality of value log entries for the set of improvement opportunities using one or more feedback parameters, wherein receives feedback for entries and processes the received feedback and searches for relevant best practices and passes the relevant best practices details as the one or more feedback parameters to generate the plurality of value log entries for the set of improvement opportunities; and checking availability of one or more bots matching with the plurality of value log entries and deploying at least one bot of the one or more bots for the plurality of value log entries for recommending the set of improvement opportunities in the enterprise operations. 
These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “managing personal behavior.” In this case, recommending a set of improvement opportunities in response to detecting a deviation from the benchmark value is considered a social activity (see MPEP 2106.04(a)(2)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2 - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: via one or more hardware processors; a cognitive data analyzer; a plurality of source business applications; an agility recommender technique; using a sklearn import linear model having a regression model with trained data; wherein the analyzed set of performance data are processed using the agility recommender technique by calling a plurality of application programming interfaces (API's); a value log builder; a feedback analyzer; a user interface; a best practice database; an Automated Classification machine learning algorithm; and a self-heal processor.
The hardware processor is merely used to: execute instructions; receive a set of performance data being associated with enterprise operations; compute a plurality of agility performance parameters; and determine a set of improvement opportunities to recommend the enterprise operations based on the plurality of agility performance parameters compared with historical data (Paragraph 0008). The cognitive data analyzer is merely used to analyze the set of performance data with a set of benchmark value (Paragraph 0008). The plurality of source business applications is merely used to obtain current statistics about the type of project of each business (Paragraph 0020). The agility recommender technique is merely used to: compute a deviation identified from the set of performance data compared with the set of benchmark value (Paragraph 0024); and recommend new opportunities with precision (Paragraph 0027). The sklearn is merely used to compute a contextual intercept (Paragraph 0024). The API is merely used to receive performance data to recommend improvement opportunities associated with the enterprise operations (Paragraph 0022). The value log builder is merely used to create a value log entry for a recommended solution (Paragraph 0020). The feedback analyzer is merely used to receive feedback from the value log user interface for entries in the database (Paragraph 0020). The user interface is merely used to receive feedback (Paragraph 0020). The best practice database is merely used to store relevant best practices (Figure 2). The Automated Classification machine learning algorithm is merely used to learn suggested keywords (Figure 2). The self-heal processor is merely used to check the availability of bots matching the value log entries (Paragraph 0020). Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)).
These elements of “hardware processor,” “cognitive data analyzer,” “plurality of source business applications,” “agility recommender technique,” “sklearn,” “API,” “value log builder,” “feedback analyzer,” “user interface,” “best practice database,” “Automated Classification machine learning algorithm,” and “self-heal processor” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. For the machine learning, the claim and specification do not provide any specific details of how the machine learning operates or how the suggested keywords are generated (see 2024 AI Guidance, Example 47, claim 2). Also, the API is considered “field of use” since it is just used to receive performance data but does not improve the interface (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
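For illustration only (not part of the record): the recited “sklearn import linear model” appears to refer to scikit-learn's linear_model module. A minimal sketch, using hypothetical data not drawn from the application, shows how such a regression model yields an intercept (interpreted above as the “contextual intercept”) and coefficients:

```python
# Illustrative sketch only; the data below is hypothetical and not from
# the application. The claim's "sklearn import linear model" appears to
# refer to scikit-learn's linear_model module.
from sklearn import linear_model

# Hypothetical contextual parameters (e.g., team size, team skill) paired
# with observed performance values.
X = [[5, 3], [10, 4], [8, 2], [12, 5]]
y = [2.0, 3.5, 2.5, 4.0]

model = linear_model.LinearRegression()
model.fit(X, y)          # fits the regression model to the data

print(model.intercept_)  # scalar intercept (the claimed "contextual intercept")
print(model.coef_)       # one coefficient per contextual parameter
```

This sketch shows only the generic library call; consistent with the analysis above, neither the claim nor the specification supplies further detail on how such a model would be configured or trained.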
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of recommending improvement opportunities in enterprise operations. The specification shows that the hardware processor is merely used to: execute instructions; receive a set of performance data being associated with enterprise operations; compute a plurality of agility performance parameters; and determine a set of improvement opportunities to recommend the enterprise operations based on the plurality of agility performance parameters compared with historical data (Paragraph 0008). The cognitive data analyzer is merely used to analyze the set of performance data with a set of benchmark value (Paragraph 0008). The plurality of source business applications is merely used to obtain current statistics about the type of project of each business (Paragraph 0020). The agility recommender technique is merely used to: compute a deviation identified from the set of performance data compared with the set of benchmark value (Paragraph 0024); and recommend new opportunities with precision (Paragraph 0027). The sklearn is merely used to compute a contextual intercept (Paragraph 0024). The API is merely used to receive performance data to recommend improvement opportunities associated with the enterprise operations (Paragraph 0022). The value log builder is merely used to create a value log entry for a recommended solution (Paragraph 0020). The feedback analyzer is merely used to receive feedback from the value log user interface for entries in the database (Paragraph 0020). The user interface is merely used to receive feedback (Paragraph 0020). The best practice database is merely used to store relevant best practices (Figure 2).
The Automated Classification machine learning algorithm is merely used to learn suggested keywords (Figure 2). The self-heal processor is merely used to check the availability of bots matching the value log entries (Paragraph 0020). As previously explained, the additional elements are recited at a high level of generality. For example, the claim and specification do not provide any specific details of how the machine learning operates or how the suggested keywords are generated (see 2024 AI Guidance, Example 47, claim 2). Also, the API is considered a conventional computer function of “receiving and transmitting over a network” (MPEP 2106.05(d)). Lastly, the step of “identifying a deviation … dynamically” is considered a conventional computer function since it is just “performing repetitive calculations” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Independent claim 8 is directed to an apparatus at Step 1, which is a statutory category. Claim 8 recites similar limitations as claim 1 and therefore is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 8 further recites “memory” and “one or more communication interfaces,” which are treated as just an explicit “processor/computer” for executing the operations and are treated under MPEP 2106.05(f) in the same manner as claim 1. Accordingly, these additional elements of “memory” and “one or more communication interfaces” are viewed as “apply it on a computer” at Step 2A, Prong Two and Step 2B. Thus, the claim is not patent eligible.
Independent claim 15 is directed to an article of manufacture at Step 1, which is a statutory category. Claim 15 recites similar limitations as claim 1 and therefore is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 15 further recites a “non-transitory machine-readable information storage medium,” which is treated as just an explicit “processor/computer” for executing the operations and is treated under MPEP 2106.05(f) in the same manner as claim 1. Accordingly, this additional element of “storage medium” is viewed as “apply it on a computer” at Step 2A, Prong Two and Step 2B. Thus, the claim is not patent eligible.
Dependent claims 4-7, 11-14, and 18-20 do not recite any additional claim elements. Rather, these claims offer further descriptive limitations of the abstract idea mentioned above - such as: wherein the plurality of affinity parameters comprises (i) a measurement attribute, and (ii) customer feedback; wherein the measurement attribute is a weighted average of a plurality of performance attributes falling within a predefined range; wherein the plurality of performance attributes comprises of an accuracy, a turnaround time, a productivity, an average handling time, a first pass yield, a first time right, a mean time to resolve, and a resolution time; and wherein the contextual delta is computed based on the ratio of deviation identified from the plurality of contextual parameters and the weightage of the plurality of contextual parameters with the sum of weightage of contextual parameters. These processes are similar to the abstract idea noted in the independent claim because they further the limitations of the independent claim, which are directed to “certain methods of organizing human activity” which include “managing personal behavior.” Also, the additional limitations are directed to the abstract idea of “mathematical concepts” which include “mathematical calculations.” In addition, there are no additional elements to consider at Step 2A, Prong Two and Step 2B. Therefore, the claims still recite an abstract idea that can be grouped into certain methods of organizing human activity.
Potential Allowable Subject Matter
The closest prior art is Viswanath et al. (US 2018/0181898 A1). Viswanath et al. discloses a processor implemented method to recommend improvement opportunities in enterprise operations, the processor implemented method comprising (Figure 3, item 303, Processor; Paragraph 0021, In some examples, the systems and methods are configured to evaluate the project and/or provide analysis/recommendations for the project in the form of a score or one or more recommendations. In some examples, the systems and methods are configured to provide a pre-mortem (e.g., reviewing the details of why a product or project launch will fail before the product actually launches). Accordingly, the provision of a score, recommendations, and/or the like before and/or during a project are advantageous, in some examples, to correct project planning, staffing, execution or the like to advantageously improve the project before it starts or while it is in process):
receiving, via one or more hardware processors, a set of performance data being associated with the enterprise operations, wherein the set of performance data includes a structured data, and an unstructured data, wherein the set of performance data comprises a key performance indicator (KPI), benchmark data, and text data associated with a customer satisfaction … associated with the enterprise operations and customer escalation, the KPI comprising data related to at least one of IT service management (ITSM) and Business Process Management (BPM) (Figure 3, item 303, Processor; Paragraph 0024, In some embodiments, the systems and methods described herein may be specifically used to benchmark projects related to computer software development; Paragraph 0037, In an example embodiment, statistics 128 comprises statistical or metrics based data related to a code or software release. In some examples, statistics data may be indicative of metrics such as lines of code, hours worked, number of people working on the team, experience level of the team, business value, sum of story points, total time remaining on a sprint, and/or the like. Alternatively or additionally, statistics data may be representative of a real-time or near real-time calculation of the project's velocity (ratio of time spent versus point resolved) and/or project quality (bug fix time versus points solved). Accordingly, training engine 102 may rely on statistics 128 when labeling the data in historical project data store 115 and/or when training benchmarking model 110; Paragraph 0038, In an example embodiment, other data 130 comprises data that is representative of a code or software release. Other data may also be included in some examples that relate to the project that are outside of the project software. For example, other data 130 may be obtained from other systems such as mail programs, calendars, chat tools, databases, slide presentations and/or the like. 
In other examples, external commentary may be accessed, such as product reviews, comments, likes, etc. In such examples, other data 130 may inform or otherwise be used to label historical project data (e.g., classify or otherwise score a project based on reviews either internally or externally) and/or may be usable when training benchmarking model 110; Paragraph 0039, In some examples, the training engine 102 is configured to access or otherwise ingest historical data or historical project data from historical project data store 115 to train the benchmarking model 110 via supervised or unsupervised computational learning. Additional information regarding the functionality of training engine 102, to include steps related to training benchmarking model 110, is described with respect to at least FIG. 4; Examiner interprets “metrics related to a code or software release” as the “structured data.” Also, “product reviews” as the “unstructured data”);
analyzing, by a cognitive data analyzer via the one or more hardware processors, the set of performance data with a set of benchmark value which is an indicative factor of an enterprise operations agility (Figure 3, item 303, Processor; Paragraph 0039, In some examples, the training engine 102 is configured to access or otherwise ingest historical data or historical project data from historical project data store 115 to train the benchmarking model 110 via supervised or unsupervised computational learning. Additional information regarding the functionality of training engine 102, to include steps related to training benchmarking model 110, is described with respect to at least FIG. 4), wherein analyzing the set of performance data with the set of benchmark value comprises interfacing, via the cognitive data analyzer, with a plurality of source business applications associated with the enterprise operations for obtaining a current statistics about a type of project of each business comprising a project demographics for analyzing the set of performance data (Figure 3, item 303, Processor; Paragraph 0036, In an example embodiment, organization data 126 comprises data indicative of an organization, such as an organizational chart, background on personal, development strategy (e.g., agile, lean, etc.), and/or the like. In some examples, this data is indicative of the number of team members, the backgrounds of the team members, the team leader, and/or the like. In other examples, other data such as organization, location, company profile, and/or the like can be included; Paragraph 0037, In an example embodiment, statistics 128 comprises statistical or metrics based data related to a code or software release. In some examples, statistics data may be indicative of metrics such as lines of code, hours worked, number of people working on the team, experience level of the team, business value, sum of story points, total time remaining on a sprint, and/or the like. 
Alternatively or additionally, statistics data may be representative of a real-time or near real-time calculation of the project's velocity (ratio of time spent versus point resolved) and/or project quality (bug fix time versus points solved). Accordingly, training engine 102 may rely on statistics 128 when labeling the data in historical project data store 115 and/or when training benchmarking model 110; Examiner interprets “background of the team members and location” as the “demographics”), the current statistics about the type of project of each business associated with the plurality of source business applications being used for identifying one or more gaps of business applications (Paragraph 0025, Advantageously, the score and/or the recommendations are normalized so as to provide a project, team or company score. Accordingly, in some examples, the system is able to compare score across teams, projects, companies, industries and/or the like; Paragraph 0045, The benchmarking model 110 comprises the results of the project evaluation function 104 (e.g., clustering algorithms, classifiers, neural networks, ensemble of trees) in that the benchmarking model 110 is configured or otherwise trained to map an input value or input features to one of a set of predefined output scores or recommendations, and modify or adapt the mapping in response to historical data in the historical project data store 115.
As noted herein, the historical project data store 115 contains examples of inputs and/or features and their respective associated scores and/or recommendations; Paragraph 0083, In addition, programming interfaces to the data stored as part of the training engine 102, benchmarking model 110, collaboration tool 202, input data analysis and normalization module 240, benchmarking analytics engine 250, and output module 260, such as by using one or more application programming interfaces can be made available by mechanisms such as through application programming interfaces (API));
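For illustration only, the normalization that Paragraph 0040 attributes to the normalization module 104 (converting raw project statistics into project units so that data can be compared across projects and entities) may be sketched as follows; the metric names and figures below are hypothetical and are not taken from the cited reference:

```python
# Illustrative sketch (hypothetical values): normalizing raw project totals
# into per-hour project units so teams of different sizes and durations can
# be benchmarked against one another, as Paragraph 0040 describes.

def normalize_project(stats):
    """Convert raw totals into comparable per-hour project units."""
    hours = stats["hours_worked"]
    return {
        "loc_per_hour": stats["lines_of_code"] / hours,
        "points_per_hour": stats["story_points"] / hours,
    }

team_a = normalize_project({"lines_of_code": 12000, "hours_worked": 400, "story_points": 80})
team_b = normalize_project({"lines_of_code": 3000, "hours_worked": 100, "story_points": 20})
# Despite different raw totals, the normalized units are directly comparable.
```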
identifying, via the one or more hardware processors, a deviation associated with the analyzed set of performance data by comparing the set of performance data with the set of benchmark value dynamically using an agility recommender technique, wherein the cognitive data analyzer compares a turnaround time with the set of benchmark value corresponding to the type of project of each business and … (Figure 3, item 303, Processor; Paragraph 0068, In some examples, the benchmarking analytics engine 250 is configured to apply a trained project evaluation function to the one or more accessed features to identify a set of scores. For example, if the input feature was project velocity, the benchmarking analytics engine 250 may apply the project velocity to the trained project evaluation function to determine whether the project is progressing at fast, slow, or medium pace compared to other projects. In some examples, the project evaluation function would output a suggested score based on other projects that had the same project velocity. Alternatively or additionally, the benchmarking analytics engine 250 may generate one or more recommendations to increase or maximize the analyzed features, such as project velocity. Additional details with regard to the benchmarking analytics engine 250 are recited with respect to FIGS. 8 and 9; Examiner interprets the “project velocity” as the “turnaround time”), wherein the agility recommender technique comprises:
computing the contextual factor based on at least one of (i) a plurality of contextual parameters, (ii) a contextual intercept, and (iii) a coefficient of the contextual intercepts, wherein the plurality of contextual parameters are extracted from the set of performance data, wherein the plurality of contextual parameters comprises a team size, a team skill, a line of business, …, wherein the plurality of contextual parameters changes according to a type of business associated with the enterprise operations, and wherein the contextual intercept is determined using a [learning algorithm] import linear model having a regression model with trained data, and computing the affinity factor based on (i) a plurality of affinity parameters, and (ii) a contextual delta (Paragraph 0037, In some examples, statistics data may be indicative of metrics such as lines of code, hours worked, number of people working on the team, experience level of the team, business value, sum of story points, total time remaining on a sprint, and/or the like; Paragraph 0039, In some examples, the training engine 102 is configured to access or otherwise ingest historical data or historical project data from historical project data store 115 to train the benchmarking model 110 via supervised or unsupervised computational learning. Additional information regarding the functionality of training engine 102, to include steps related to training benchmarking model 110, is described with respect to at least FIG. 4; Paragraph 0040, In some examples, the training engine 102 comprises a normalization module 104. 
The normalization module 104, in some examples, may be configured to normalize the historical data into project units (e.g., a release, an issue, a logical subsection, a work flow, a portion of a workflow, a bug, a work unit defined by number of hours worked, number of lines of code, outputs, calendar days, and/or the like, and/or the like) so as to enable data to be compared across projects and entities (e.g., so as to provide benchmarking services); Paragraph 0045, The benchmarking model 110 comprises the results of the project evaluation function 104 (e.g., clustering algorithms, classifiers, neural networks, ensemble of trees) in that the benchmarking model 110 is configured or otherwise trained to map an input value or input features to one of a set of predefined output scores or recommendations, and modify or adapt the mapping in response to historical data in the historical project data store 115. As noted herein, the historical project data store 115 contains examples of inputs and/or features and their respective associated scores and/or recommendations; Paragraph 0048, Alternatively or additionally, benchmarking model 110 may be trained to extract one or more features from the historical data using pattern recognition, based on unsupervised learning, supervised learning, semi-supervised learning, reinforcement learning, association rules learning, Bayesian learning, solving for probabilistic graphical models, among other computational intelligence algorithms that may use an interactive process to extract patterns from data; Paragraph 0058, In some examples, this data is indicative of the number of team members, the backgrounds of the team members, the team leader, and/or the like.
In other examples, other data such as organization, location, company profile, and/or the like can be included; Paragraph 0092, In block 410, training engine 102 and/or benchmarking model 110 are configured to train a project evaluation function stored in the benchmarking model based on the mapped one or more features and the associated score. As described above, the benchmarking model, such as benchmarking model 110, may be trained based on the input features and scores, such that similar input features will be suggestive of a same or similar score; Examiner notes that a plurality of contextual parameters are used to train a learning algorithm to output scores and/or recommendations. Further, Examiner interprets “mapping similar features” as the “affinity factor”);
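For illustration only, the claimed determination of a contextual intercept using a regression model with trained data may be approximated by an ordinary least-squares fit; a library such as scikit-learn's linear_model would typically supply such a model, but a closed-form single-feature fit is shown here so the sketch is self-contained. All data values below are hypothetical:

```python
# Illustrative sketch (hypothetical data): fitting a least-squares line to
# historical observations so that the fitted intercept can serve as a
# "contextual intercept" for a contextual parameter such as team size.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical historical data: team size vs. a normalized performance score.
team_sizes = [3, 5, 7, 9]
scores = [40, 50, 60, 70]
slope, intercept = fit_line(team_sizes, scores)
# Perfectly linear data, so slope is 5.0 and the intercept is 25.0.
```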
computing a plurality of agility performance parameters, via the one or more hardware processors, based on the deviation identified from the analyzed set of performance data compared with the set of benchmark value using the agility recommender technique, wherein the analyzed set of performance data are processed using the agility recommender technique by calling a plurality of application programming interfaces (API's), and wherein the plurality of agility performance parameters comprises a contextual factor and an affinity factor (Figure 3, item 303, Processor; Paragraph 0068, In some examples, the benchmarking analytics engine 250 is configured to apply a trained project evaluation function to the one or more accessed features to identify a set of scores. For example, if the input feature was project velocity, the benchmarking analytics engine 250 may apply the project velocity to the trained project evaluation function to determine whether the project is progressing at fast, slow, or medium pace compared to other projects. In some examples, the project evaluation function would output a suggested score based on other projects that had the same project velocity. Alternatively or additionally, the benchmarking analytics engine 250 may generate one or more recommendations to increase or maximize the analyzed features, such as project velocity. Additional details with regard to the benchmarking analytics engine 250 are recited with respect to FIGS.
8 and 9; Paragraph 0083, In addition, programming interfaces to the data stored as part of the training engine 102, benchmarking model 110, collaboration tool 202, input data analysis and normalization module 240, benchmarking analytics engine 250, and output module 260, such as by using one or more application programming interfaces can be made available by mechanisms such as through application programming interfaces (API); libraries for accessing files, databases, or other data repositories; through scripting languages such as XML; or through Web servers, FTP servers, or other types of servers providing access to stored data. The historical project data store 115 and input data 230 may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques. Alternatively or additionally, the historical project data store 115 and input data 230 may be local data stores but may also be configured to access data from the remote services 360; Paragraph 0092, As described above, the benchmarking model, such as benchmarking model 110, may be trained based on the input features and scores, such that similar input features will be suggestive of a same or similar score; Examiner interprets “similar input features” as the “affinity factor”);
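For illustration only, the fast/slow/medium pacing comparison that Paragraph 0068 attributes to the benchmarking analytics engine may be sketched as a tertile ranking against peer projects; the function name and velocity figures below are hypothetical:

```python
# Illustrative sketch (hypothetical data): classifying a project's velocity
# relative to peer projects, in the spirit of Paragraph 0068's "fast, slow,
# or medium pace compared to other projects".

def pace(velocity, peer_velocities):
    """Rank a velocity against peers by counting peers it exceeds."""
    n = len(peer_velocities)
    below = sum(1 for v in peer_velocities if v < velocity)
    if below >= 2 * n / 3:
        return "fast"    # faster than at least two thirds of peers
    if below >= n / 3:
        return "medium"
    return "slow"

peers = [1.0, 1.2, 1.5, 2.0, 2.4, 3.0]  # hypothetical peer velocities
# pace(2.8, peers) ranks in the top tertile, so it is "fast".
```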
determining, via the one or more hardware processors, a set of improvement opportunities to recommend the enterprise operations based on the plurality of agility performance parameters compared with historical data (Figure 3, item 303, Processor; Paragraph 0068, In some examples, the benchmarking analytics engine 250 is configured to apply a trained project evaluation function to the one or more accessed features to identify a set of scores. For example, if the input feature was project velocity, the benchmarking analytics engine 250 may apply the project velocity to the trained project evaluation function to determine whether the project is progressing at fast, slow, or medium pace compared to other projects. In some examples, the project evaluation function would output a suggested score based on other projects that had the same project velocity. Alternatively or additionally, the benchmarking analytics engine 250 may generate one or more recommendations to increase or maximize the analyzed features, such as project velocity. Additional details with regard to the benchmarking analytics engine 250 are recited with respect to FIGS. 8 and 9);
generating, by a value log builder via the one or more hardware processors, a plurality of value log entries for the set of improvement opportunities using one or more processed feedback parameters, the one or more processed feedback parameters being obtained from a feedback analyzer, the plurality of value log entries being generated using an Automated Classification machine learning algorithm, wherein the feedback analyzer receives feedback … (Figure 3, item 303, Processor; Paragraph 0046, In some embodiments, the project evaluation function maps the labeled data representing historical projects, such as project statistics, project bugs, project releases, project documentations, and organization data, to one or more scores or recommendations. Alternatively or additionally, the benchmarking model 110 may be trained so as to score or otherwise label the historical project data in historical project store 115. For example, based on the data and the labels, the benchmarking model 110 may be trained so as to generate a score or additional label for the one or more project units; Paragraph 0093, In block 412, training engine 102 and/or benchmarking model 110 are configured to map the one or more features to an associated one or more recommendations. In some examples, the training engine 102 and/or benchmarking model 110 is configured to receive user input, crowd source input or the like to attach a recommendation to the one or more features. In some examples, this input represents a user's evaluation of the one or more features. Alternatively or additionally, the system may classify, using a clustering function or the like, the input features and determine a recommendation or a suggested recommendation.
In an instance in which a suggested recommendation is provided, the suggested recommendation may be verified and/or otherwise confirmed by a user, a crowd source, or the like; Paragraph 0094, In block 414, training engine 102 and/or benchmarking model 110 are configured to train a project evaluation function stored in the benchmarking model based on the mapped one or more features and the associated one or more recommendations. As described above, the benchmarking model may be trained based on the input features and recommendation, such that similar input features will be suggestive of a same or similar recommendation. In some cases, the learning and/or training may be accomplished based on supervised or unsupervised computation learning; Examiner interprets “recommendations stored in the data store” as the “value log builder”); and ...
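For illustration only, the principle of Paragraphs 0092-0094 that similar input features should be suggestive of a same or similar recommendation may be sketched as a nearest-neighbor lookup over labeled historical examples; the feature vectors and recommendation strings below are hypothetical:

```python
# Illustrative sketch (hypothetical data): attaching the recommendation of
# the most similar historical feature vector to a new input, approximating
# the "similar input features will be suggestive of a same or similar
# recommendation" behavior described in Paragraphs 0092-0094.
import math

def nearest_recommendation(features, history):
    """Return the recommendation attached to the closest historical example."""
    best = min(history, key=lambda ex: math.dist(features, ex["features"]))
    return best["recommendation"]

history = [
    {"features": (0.9, 0.2), "recommendation": "reduce scope per sprint"},
    {"features": (0.3, 0.8), "recommendation": "add code review time"},
]
# A new input near (0.9, 0.2) inherits that example's recommendation.
```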
Although Viswanath et al. discloses receiving a set of performance data being associated with the enterprise operations, wherein the set of performance data includes a structured data (e.g., metrics such as lines of code) and unstructured data (e.g., product reviews and comments), Viswanath et al. does not specifically disclose wherein the text data is associated with a customer satisfaction survey.
Basu et al. (US 11,087,261 B1) discloses a processor implemented method to recommend improvement opportunities in enterprise operations, the processor implemented method comprising (Column 2, lines 66-67 & Column 3, lines 1-13, The methods can be embodied in executable instructions, processors or systems to execute such instructions; Column 3, lines 14-25, In an embodiment, the present system predicts problems that may occur, providing an indication of both the nature of the problem and when it is expected to occur. The problems may be expressed as deviations in performance indicators that violate business criteria. For example, a problem may be expressed as the value of a performance indicator crossing a threshold. In addition, the present system supports testing of solutions to the predicted problem. The solutions are expressed in terms of what action to take and when to take such action. As such, the present system assists with determining a desirable set of future actions to maintain a business process in compliance with business criteria):
receiving, via one or more hardware processors, a set of performance data being associated with the enterprise operations, wherein the set of performance data includes a structured data, and an unstructured data, wherein the set of performance data comprises a key performance indicator (KPI), benchmark data, and text data associated with a customer satisfaction survey associated with the enterprise operations and customer escalation, the KPI comprising data related to at least one of IT service management (ITSM) and Business Process Management (BPM) (Figure 1, item 1180, Processor; Column 1, lines 44-49, Such PI values are useful in business processes; Column 4, lines 52-62, The present system can acquire data, as illustrated at 2602, from a variety of sources. The data can be acquired from external sources. As discussed in more detail below, exemplary external sources include databases, customer service logs, surveys, testing, or any combination thereof, among others. In particular, the data can be derived from structured sources. In another example, the data can be derived from unstructured sources. The data can be transformed and aggregated. In addition, the data can be cleaned. The resulting data can be stored in a data management system; Column 5, lines 5-13, Once clean aggregated data is available, relationships between performance indicators and potential influencers can be determined and criteria for performance can be established, as illustrated at 2604. Such relationships permit projection of potential outcomes, which can be compared with the criteria to determine whether the business process is functioning well. In particular, the relationships can identify influencers that have a greater influence on one or more performance indicators);
analyzing, by a cognitive data analyzer via the one or more hardware processors, the set of performance data with a set of benchmark value which is an indicative factor of an enterprise operations agility (Figure 1, item 1180, Processor; Column 5, lines 5-13, Once clean aggregated data is available, relationships between performance indicators and potential influencers can be determined and criteria for performance can be established, as illustrated at 2604. Such relationships permit projection of potential outcomes, which can be compared with the criteria to determine whether the business process is functioning well. In particular, the relationships can identify influencers that have a greater influence on one or more performance indicators; Examiner interprets the “business criteria” as the “benchmark value”); …
identifying, via the one or more hardware processors, a deviation associated with the analyzed set of performance data by comparing the set of performance data with the set of benchmark value dynamically using an agility recommender technique, wherein the cognitive data analyzer compares a turnaround time with the set of benchmark value corresponding to the type of project of each business and … (Figure 1, item 1180, Processor; Column 1, lines 35-39, Global Fortune 1000 corporations and many smaller businesses manage their business processes by carefully monitoring a set of performance indicators (PIs) or metrics, at different time intervals (from near-real-time to daily/weekly/monthly/quarterly/etc., d