DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d) with respect to Indian Patent Application No. IN202241030167 filed on 5/26/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 8/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice to Applicant
Claims 1-5, 9-15, 18 and 20 are presently amended.
Claims 1-20 are pending.
Response to Amendment
Applicant’s amendments are acknowledged.
Response to Arguments
Applicant’s arguments filed 8/6/2025 have been fully considered in view of further consideration of statutory law, Office policy, precedential common law, and the cited prior art as necessitated by the amendments to the claims, and are persuasive in part for the reasons set forth below.
Claim Interpretation
First, Applicant argues that “Applicant has amended the claims such that the said terms are no longer used. That is, Applicant respectfully submits that claim 1 has been suitably amended to remove the term "risk estimation engine". Similarly, claims 2-3 have been amended to remove the term "unstructured data analysis unit". Claim 9 has been amended to remove the term "data pre-processing unit". Claim 10 has been amended to remove the term "correlation unit". Claims 11-14 have been amended to remove the term "predictability unit". Claim 14 has been amended to remove the term "risk estimation unit". As such, Applicant respectfully requests the Examiner that the presumption that the claim limitation is interpreted under 35 U.S.C.112 (f) is waived off.” [Arguments, pages 11-12].
In response, Applicant’s arguments are considered and are persuasive. Examiner observes that the presently amended claims do not necessitate a claim interpretation under 35 U.S.C. 112(f).
35 USC § 101 Rejections
First, Applicant argues that “Applicant submits that claimed system is implemented by the claimed processor executing program instructions stored in the claimed memory, and is in effect, a special purpose computer limited to the use of the particularly claimed combination of elements performing the particularly claimed combination of functions. In fact, the claimed processor and memory are central to the invention which has been specifically programmed to give effect to the specific claimed functions, as recited in amended claim 1. The claimed processor and memory, therefore, cannot be said to be a generic computer carrying out well-known, routine and conventional functions…
…this systematic approach enables real-time monitoring and minimization of risks, thereby enhancing project performance and providing a concrete technical solution to the challenges inherent in existing software project management systems as mentioned herein below.
It is submitted that the claimed invention provides a technical solution to the technical problem of static nature and design limitations in agile project management systems that fail to showcase root causes of failure…
Amended claim 1 is not directed to a mere abstract idea, but rather to a patent- eligible technological improvement in the field of predictive risk assessment in software development lifecycle projects. The claim as a whole recites the acts of transformation of unstructured project data into actionable risk indicators. Specifically, amended claim 1 requires the processor to fetch historical unstructured dataset relating to work items documented for past agile projects… Applicant submit that this sequential and systematic approach minimizes risks and improves overall performance in software development lifecycle of projects in real-time… Applicant submits that these operations are computationally intensive and data-driven, requiring the integration of multiple machine learning and data processing techniques and cannot be performed by a human…
As such, Applicant submits that the claimed features are a specific technical process executed by the processor of the claimed system and is not a generic computer function, nor is it a mere mathematical concept… and in fact, the claimed invention recites features that are significantly more and tied to a practical application of specific improvement in the field of software project management…” [Arguments, pages 12-20].
In response, Applicant’s arguments are considered but are not persuasive. Examiner respectfully disagrees and maintains that the present claims recite a judicial exception without significantly more. With regard to the assertion that the claimed invention recites features that are significantly more and tied to a practical application of specific improvement in the field of software project management, Examiner respectfully disagrees. In particular, Examiner observes that the present invention, when considered as a whole and in context of the additional elements, fails to demonstrate a practical application. Specifically, independent claims 1, 15 and 20 only recite the following additional elements –
A system… the system comprising: a memory storing program instructions; a processor executing program instructions stored in the memory and configured to…; …train a supervised learning model… [Claim 1],
… a processor in communication with a memory…; …train a supervised learning model… [Claim 15],
… A computer program product comprising: a non-transitory computer-readable medium having computer program code stored thereon, the computer-readable program code comprising instructions that, when executed by a processor, causes the processor to…; …train a supervised learning model… [Claim 20].
Examiner respectfully observes that the generically claimed computer components and supervised learning model are recited at a high-level of generality (see MPEP § 2106.05(a)), like the following MPEP example:
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
Furthermore, the computer implemented element is considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)), like the following MPEP example:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
Accordingly, these additional elements are not considered sufficient to demonstrate an integration of the abstract idea into a practical application. The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application.
Similarly, with regard to the assertion that the claims demonstrate a technological improvement, Examiner respectfully maintains that the claimed additional elements amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).
Further still, with respect to the argument that these operations are computationally intensive and data-driven, requiring the integration of multiple machine learning and data processing techniques and cannot be performed by a human, Examiner observes that these factors are considerations for determining whether a claim recites an abstract idea in the grouping of ‘mental processes’. Instead, Examiner respectfully maintains that the present claims recite certain methods of organizing human activity, and in particular, concepts relating to fundamental economic principles or practices. The limitations of the present claims describe steps for fundamental economic principles or practices, which include hedging, insurance and mitigating risk. Specifically, performing an optimized predictive risk assessment of software development lifecycle of projects is considered to describe steps for mitigating risk. As such, claims 1, 15 and 20 recite concepts identified as abstract ideas. Thus, these claims are not patent eligible. As such, Examiner remains unpersuaded.
35 USC § 103 Rejections
First, Applicant argues that “…Amended claim 1 requires, among other things, fetching a historical unstructured attribute dataset relating to work items from past agile projects, grouping this unstructured data based on derived KPI scores, and converting it into a structured attribute dataset using pre-defined rules. Fox, in contrast, describes a dashboard system that collects and aggregates structured data from various source systems and generates analytics using business formulae (support for the same may be found at para [0012] of Fox), and does not address the acquisition, analysis, or transformation of unstructured attribute datasets as required by amended claim 1.
Furthermore, Fox does not disclose any process for deriving KPI scores from unstructured data or grouping data based on such scores. The analytics engine and agile module as disclosed in Fox operate solely on structured, predefined business metrics and do not perform technique for extracting KPI scores from unstructured sources…” [Arguments, pages 20-22].
In response, Applicant’s arguments are considered but are not persuasive. Examiner respectfully disagrees and directs the Applicant to (Fox, ¶ 1, The present disclosure relates to a business intelligence dashboard that focuses on software development performance. More specifically, the present disclosure relates to a system and method for an enterprise software development dashboard tool that provides actionable intelligence to various levels of stakeholders within an organisation as an enterprise monitoring tool, while continuing with process adherence and quality standards), (Id., ¶ 12, The system comprises a processor, a memory operatively coupled to the processor, and a connector component configured to receive data from business attributes of at least one source program and an analytical component configured to generate an analytics from the received data. Further, the analytical component comprises an analytics engine to track the at least one source program status using quality parameters, an agile module to provide continuous improvement steps to the business attributes using an agile maturity index score, wherein business formulae are applied to generate the analytics from the received data by the analytical component, and wherein the agile maturity index score is calculated for each of the business attributes by the analytical component using the generated analytics), (Id., ¶ 58, The solution provides an enterprise dashboard for tracking software development progress and risks that facilitates transparency for all stakeholders in predictability, productivity, quality & agile maturity by organization levels, offers return on investment (ROI) and other business metrics to engage all stakeholders in technology investments. 
It also gives filtered metrics across organization levels such as enterprise, business unit, program, project, team and individual (in some cases) and that too in an integrated view mode in code quality, build quality and operational (ASM) metrics), (Id., ¶ 81, The agile module 210 of the analytical component 112 follows the agile approach of dividing tasks into short phases of work and frequently reassessing and adapting new approaches in order to ensure quality deliverables in a timely manner. The agile module 210 performs its task by overlooking activities and providing insights on the same. The activities include but may not be limited to capturing requirements, constructing or initiating work, reviewing the constructed work, deploying the same, testing the work for errors, releasing the work to the next level of stakeholders and eventually managing the work. Further, the agile module 210 manages agility by doing and being agile through various agile coaches including but not limited to agile documentation (discloses work items documented for past agile projects), collaborative planning, continuous integration and delivery, low process ceremony, paired programming, agile estimation, continuous improvement, evolutionary requirements, multi-level adaptive planning and self-organizing team. Additionally, the agile module also calculates an agile maturity index scores for enhancing performance), (Id., ¶ 83, The vendor module 212 of the analytical component 112 assists in keeping track of vendor records and productivities, especially for outsourced work, this further assist in keeping key performance indicators (KPI) of a vendor or a consultant. The KPI generally include productivity, efficiency, quality or other performance indicators as requisite for a particular business. 
The vendor module 212 comprised in the analytical component 112 calculates vendor productivity on the basis of multiple factors including but not limited to comparative analysis at the vendor and consultant level, story point analysis, summary analysis, resource productivity, and predictability including cost, time and scope variants. Predictability may also be calculated on the basis of a schedule performance index and a cost performance index).
Here, Fox discloses a system for gathering historical unstructured attribute datasets using an agile module in order to group the datasets based on derived KPI (i.e., metrics) scores, in accordance with the present invention. Thus, Examiner respectfully maintains that the Fox reference renders the above-argued elements of the claim obvious. As such, Examiner remains unpersuaded.
Second, Applicant argues that “…Amended claim 1 requires correlating a derived attribute data from the structured attribute dataset to derive an accuracy percentage as a risk indicator, and creating a decision tree structure using the structured attribute dataset (derived from unstructured data) to train a supervised learning model for predicting spillover risk values and defect density. Fox does not disclose or suggest any such correlation, nor does it describe the use of decision tree models, supervised learning, or any machine learning technique applied to structured data derived from unstructured sources. Fox’s analytics are generated using business formulae and do not involve the training or application of machine learning models for predictive risk assessment as claimed in amended claim 1…” [Arguments, page 22].
In response, Applicant’s arguments are considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Examiner formulates a new rejection as detailed below. As such, Examiner remains unpersuaded.
Third, Applicant argues that “…amended claim 1 recites combining KPI scores, accuracy percentages, spillover risk values, and defect density values for risk assessment in the software development lifecycle to generate indicators of risk, and employing these indicators for real-time predictive risk assessment to monitor and minimize risks and improve overall performance in software development cycle in real-time. Fox fails to disclose or suggest any such combination of indicators, nor does it employ the outputs of machine learning models trained on structured data derived from unstructured sources for real-time predictive risk assessment. Fox’s system is limited to the aggregation and display of structured metrics and does not provide the claimed real-time, machine learning-based risk monitoring and mitigation functionality, as required by amended claim 1…” [Arguments, page 22].
In response, Applicant’s arguments are considered but are not persuasive. Examiner respectfully disagrees and directs the Applicant to (Nagar, ¶ 53, The KPIs to measure vendor performance may include factors such as cost, on-time delivery, code quality, and software performance. Most of these KPIs may be tangible and directly related to the vendor. The cost component may be measured against the expected cost for performing similar function internally within the software development company. Cost overruns may be a factor when measuring this KPI. On-time delivery may be measured against specified due dates. A good measure of code quality may be the number of bugs in the developed code. Finally, software performance may be measured against metrics such as computational performance, load time, scalability, and performance as perceived by the users), (Id., ¶ 29, The task of Quality Management 225 may present a good candidate for outsourcing for some software developing companies. An iterative process, Quality Management 225 may be closely aligned to Code Development 220. Using the product requirements document prepared by Product Management 215, Quality Management 225 may ensure that the features specified by Product Management 215 are correctly implemented in the final product. The performance of Quality Management 225, and implicitly of Product Development 220, may be easily determined in some software developing companies by remotely monitoring product quality. Thus, the company may have an incentive to carefully monitor the reports prepared by Quality Management 225 to reduce, or even minimize, the risks associated with outsourcing Quality Management 225. Therefore, despite depending on Product Management 215, Quality Management 225 may be outsourced. Some software developing companies may find it advantageous to outsource Quality Management 225 and Coding 220 to the same vendor. 
Doing so may allow Quality Management 225 to perform its task more efficiently by using the knowledge gained by Coding 220 of the developed software), (Id., ¶ 20, As shown in the exemplary flow chart of FIG. 1, step 120 includes identifying one or more tasks to outsource. These tasks may include the tasks identified with software development, such as pre-sales, sales, product marketing, product management, product development, quality management, product support, consulting, and documentation. The software development company may decide to outsource tasks in a manner that minimizes the disruption to the business process. To avoid disruptions to the business process, the software development company can consider the adverse effects that outsourcing different tasks may have on the base criteria identified in step 110. These adverse effects may be grouped into at least three sets of risks--strategic risks, operational risks, and demographic risks. (discloses indicators of risk) It will be understood that the set of risks and the risks in each group are not exhaustive and should be enhanced as required), (Id., ¶ 30, The software development company may have as a goal the awareness of customer issues as well as the vendor's response to these issues at all times. To achieve this goal, the software development company may implement a well-defined escalation process in which the software development company itself may become involved with customer problems).
Here, Nagar discloses a system intended to monitor risks and improve overall performance of software development lifecycle projects at least by implementing NLP and supervised learning models to assist developers with tools and by tracking metrics at all times. Thus, Examiner respectfully maintains that Nagar renders the above-argued limitation obvious. As such, Examiner remains unpersuaded.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Claims 1-20 are directed to statutory categories, namely a machine (claims 1-14), a process (claims 15-19) and an article of manufacture (claim 20).
Step 2A, Prong 1: Claims 1, 15 and 20 in part, recite the following abstract idea:
… for optimized predictive risk assessment of software development lifecycle of projects… fetch a historical unstructured attribute dataset relating to work items documented for past agile projects and group the unstructured attribute dataset based on derived Knowledge Performance Indicator (KPI) scores; convert the unstructured attribute dataset into a structured attribute dataset by applying pre-defined rules, wherein each attribute data of the structured attribute dataset is mapped to pre-determined categorical values; correlate a derived attribute data from the structured attribute dataset with a defined attribute data to derive an accuracy percentage, wherein the accuracy percentage signifies a potential risk to subsequent tasks in the software development lifecycle of projects; create a decision tree structure using the structured attribute dataset to… and predict spillover risk values, wherein a maximum information retention is determined for predicting the spillover risk values that are obtained when a branching node is created for the structured attribute data; apply an iterative logic to predict defect density values based on the structured attribute dataset; and combine the KPI scores, the accuracy percentage and the spillover risk values and the defect density values for risk assessment in the software development lifecycle of projects to generate indicators of risks, wherein the system employs the generated indicators of risks for predictive risk assessment to monitor and minimise risks and improve overall performance of software development lifecycle projects in real-time [Claim 1],
A method for optimized predictive risk assessment of software development lifecycle of projects, wherein the method is executed by…, the method comprising: fetching a historical unstructured attribute dataset relating to work items documented for past agile projects and group the unstructured attribute dataset based on derived Knowledge Performance Indicator (KPI) scores to create a grouped attribute dataset; converting the unstructured attribute dataset into a structured attribute dataset by applying pre-defined rules, wherein each attribute data of the structured attribute dataset is mapped to pre-determined categorical values; correlating a derived attribute data from the structured attribute dataset with a defined attribute data to derive an accuracy percentage, wherein the accuracy percentage signifies a potential risk to subsequent tasks in the software development lifecycle of projects; creating a decision tree structure using the structured attribute dataset to… and predict spillover risk values, wherein a maximum information retention is determined for predicting the spillover risk values that are obtained when a branching node is created for the structured attribute data; applying an iterative logic to predict defect density values based on the structured attribute dataset; and combining the KPI scores, the accuracy percentage and the spillover risk values and defect density values for risk assessment in the software development lifecycle of projects to generate indicators of risks, wherein the system employs the generated indicators of risks for predictive risk assessment to monitor and minimise risks and improve overall performance of software development lifecycle projects in real-time [Claim 15],
…fetch a historical unstructured attribute dataset relating to work items documented for past agile projects and group the unstructured attribute dataset based on derived Knowledge Performance Indicator (KPI) scores; convert the unstructured attribute dataset into a structured attribute dataset by applying pre-defined rules, wherein each attribute data of the structured attribute dataset is mapped to pre-determined categorical values; correlate a derived attribute data from the structured attribute dataset with a defined attribute data to derive an accuracy percentage, wherein the accuracy percentage signifies a potential risk to subsequent tasks in the software development lifecycle of projects; create a decision tree structure using the structured attribute dataset to… and predict spillover risk values, wherein a maximum information retention is determined for predicting the spillover risk values that are obtained when a branching node is created for the structured attribute data; apply an iterative logic to predict defect density values based on the structured attribute dataset; and combine the KPI scores, the accuracy percentage and the spillover risk values and defect density values for risk assessment in the software development lifecycle of projects to generate indicators of risks, wherein the system employs the generated indicators of risks for predictive risk assessment to monitor and minimise risks and improve overall performance of software development lifecycle projects in real-time [Claim 20].
These concepts are not meaningfully different than the following concepts identified by the MPEP:
Concepts relating to fundamental economic principles or practices. The aforementioned limitations describe steps for fundamental economic principles or practices, which include hedging, insurance and mitigating risk. Specifically, performing an optimized predictive risk assessment of software development lifecycle of projects is considered to describe steps for mitigating risk. As such, claims 1, 15 and 20 recite concepts identified as abstract ideas.
The dependent claims recite limitations relative to the independent claims, including, for example:
…configured to group the unstructured attribute dataset to create a grouped attribute dataset including a positive, a negative, a mixed and a neutral grouped sentiment dataset, the grouping is carried out by employing a sequence of computational linguistics techniques including stemming followed by tokenization on the unstructured attribute dataset comprising communication threads to create the grouped attribute dataset, and wherein the … is configured to detect risks and initiate corrective actions on instances of a continuous non-positive grouped attribute outputs from a set of tasks for deriving the KPI scores, and wherein … performs analysis of the KPI score using…, the communication thread is broken down into sub-component parts and the parts are individually validated to identify sentiment bearing phrases through word associations, the KPI score is assigned to each phrase in the sub-component parts such that the KPI score is proportional to a degree to which sentiment is expressed [Claim 2],
…performs analysis of the KPI score to determine if the communication threads have KPI scores across multiple sentiments, and groups the unstructured attribute dataset comprising such communication threads as the mixed grouped sentiment dataset, and wherein … performs analysis of the KPI score and groups the unstructured attribute dataset comprising communication threads as a neutral grouped sentiment dataset in the event the KPI scores are determined to be low [Claim 3],
…wherein the structured attribute dataset comprises a first attribute data representing unique requirement identifiers assigned by … corresponding to the software development lifecycle of projects, and wherein the structured attribute dataset comprises a second attribute data representing base requirements provided by a team which is split at a functional or a technical level, and wherein the base requirements include modules and segments associated with user interface, database connectivity, error control, reporting that are required to build a software system corresponding to the software development lifecycle of projects [Claim 4],
…wherein the structured attribute dataset comprises a third attribute data representing a count of number of days for completing technical or functional requirements in a work item, the third attribute is scaled in between a value of 1-8 days, and wherein the structured attribute dataset comprises a fourth attribute data representing an urgency of the work items, sequence in which tasks are to be accomplished and a level of significance of the work items, the fourth attribute data may be represented as a major, a medium, a minor and a rush in a scale, wherein rush represents an ad-hoc requirement. [Claim 5],
wherein the structured attribute dataset comprises a fifth attribute data representing a count of people that have worked or are working on a work item, wherein with increase in complexity in the software development lifecycle of projects a count of the fifth attribute data increases, and wherein the structured attribute dataset comprises a sixth attribute data representing a length of a field … comprising textual description of technical and functional requirements in words and characters [Claim 6],
The limitations of these dependent claims are merely narrowing the abstract idea identified in the independent claims, and thus, the dependent claims also recite abstract ideas.
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, claims 1, 15 and 20 only recite the following additional elements –
A system… the system comprising: a memory storing program instructions; a processor executing program instructions stored in the memory and configured to…; …train a supervised learning model… [Claim 1],
… a processor in communication with a memory…; …train a supervised learning model… [Claim 15],
… A computer program product comprising: a non-transitory computer-readable medium having computer program code stored thereon, the computer-readable program code comprising instructions that, when executed by a processor, causes the processor to…; …train a supervised learning model… [Claim 20].
The apparatus, processor, memory, supervised learning model and executable instructions are recited at a high-level of generality (see MPEP § 2106.05(a)), like the following MPEP example:
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
Furthermore, the computer implemented element is considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)), like the following MPEP example:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
Accordingly, these additional elements do not integrate the abstract idea into a practical application.
The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application.
Step 2B: Claims 1, 15 and 20 and their underlying limitations, steps, features and terms, considered both individually and as a whole, do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the following reasons:
Independent claims 1, 15 and 20 only recite the following additional elements –
A system… the system comprising: a memory storing program instructions; a processor executing program instructions stored in the memory and configured to…; …train a supervised learning model… [Claim 1],
… a processor in communication with a memory…; …train a supervised learning model… [Claim 15],
… A computer program product comprising: a non-transitory computer-readable medium having computer program code stored thereon, the computer-readable program code comprising instructions that, when executed by a processor, causes the processor to…; …train a supervised learning model… [Claim 20].
As such, both individually and in combination, these limitations do not add significantly more to the judicial exception.
The remaining dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not recite any new additional elements other than those mentioned in the independent claims, which amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). As such, these claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fox et al., U.S. Publication No. 2022/0051162 [hereinafter Fox] in view of Nagar, U.S. Publication No. 2008/0086354 [hereinafter Nagar] and further in view of Muddakkagari et al., U.S. Publication No. 2020/0241872 [hereinafter Muddakkagari].
Regarding Claim 1, Fox discloses … A system for optimized predictive risk assessment of software development lifecycle of projects, the system comprising: a memory storing program instructions; a processor executing program instructions stored in the memory and configured to: fetch a historical unstructured attribute dataset relating to work items documents for past agile projects and group the unstructured attribute dataset based on derived Knowledge Performance Indicator (KPI) scores (Fox, ¶ 1, The present disclosure relates to a business intelligence dashboard that focuses on software development performance. More specifically, the present disclosure relates to a system and method for an enterprise software development dashboard tool that provides actionable intelligence to various levels of stakeholders within an organisation as an enterprise monitoring tool, while continuing with process adherence and quality standards), (Id., ¶ 12, The system comprises a processor, a memory operatively coupled to the processor, and a connector component configured to receive data from business attributes of at least one source program and an analytical component configured to generate an analytics from the received data. 
Further, the analytical component comprises an analytics engine to track the at least one source program status using quality parameters, an agile module to provide continuous improvement steps to the business attributes using an agile maturity index score, wherein business formulae are applied to generate the analytics from the received data by the analytical component, and wherein the agile maturity index score is calculated for each of the business attributes by the analytical component using the generated analytics), (Id., ¶ 58, The solution provides an enterprise dashboard for tracking software development progress and risks that facilitates transparency for all stakeholders in predictability, productivity, quality & agile maturity by organization levels, offers return on investment (ROI) and other business metrics to engage all stakeholders in technology investments. It also gives filtered metrics across organization levels such as enterprise, business unit, program, project, team and individual (in some cases) and that too in an integrated view mode in code quality, build quality and operational (ASM) metrics), (Id., ¶ 81, The agile module 210 of the analytical component 112 follows the agile approach of dividing tasks into short phases of work and frequently reassessing and adapting new approaches in order to ensure quality deliverables in a timely manner. The agile module 210 performs its task by overlooking activities and providing insights on the same. The activities include but may not be limited to capturing requirements, constructing or initiating work, reviewing the constructed work, deploying the same, testing the work for errors, releasing the work to the next level of stakeholders and eventually managing the work. 
Further, the agile module 210 manages agility by doing and being agile through various agile coaches including but not limited to agile documentation (discloses work items documented for past agile projects), collaborative planning, continuous integration and delivery, low process ceremony, paired programming, agile estimation, continuous improvement, evolutionary requirements, multi-level adaptive planning and self-organizing team. Additionally, the agile module also calculates an agile maturity index scores for enhancing performance), (Id., ¶ 83, The vendor module 212 of the analytical component 112 assists in keeping track of vendor records and productivities, especially for outsourced work, this further assist in keeping key performance indicators (KPI) of a vendor or a consultant. The KPI generally include productivity, efficiency, quality or other performance indicators as requisite for a particular business. The vendor module 212 comprised in the analytical component 112 calculates vendor productivity on the basis of multiple factors including but not limited to comparative analysis at the vendor and consultant level, story point analysis, summary analysis, resource productivity, and predictability including cost, time and scope variants. Predictability may also be calculated on the basis of a schedule performance index and a cost performance index);
convert the unstructured attribute dataset into a structured attribute dataset by applying pre-defined rules, wherein each attribute data of the structured attribute dataset is mapped to pre-determined categorical value (Id., ¶ 73, A dashboard framework system 104 implements a method for dashboard framework 200 (discloses structured attribute dataset) on a web and application server 102 or a web server 102, the dashboard framework system 104 also known as web and application system 104 includes a display component 110 or a dashboard interface, a connector agent 202, an analytical component 112 or a web API, and other components 114. The display component 110 allows the system 104 to interact with a user 150. The analytical component 112 comprises an agile module 210, a vendor module 212, a test case module 214, an analytics engine 216, and other modules 220. The analytical component 112 communicates with a connector component 204 having a connector service 116 and further connected to a plurality of sub-connectors 204a, 204b, 204c, 204d, . . . , 204n, such as a JIRA connector, a Jenkins connector, a SonarQube connector, a ServiceNow connector, an Excel connector, and alike for each program, project, or such source systems/applications 130. Each of the sub-connectors 204a, 204b, 204c, 204d, . . . , 204n are respectively coupled to one of a plurality of third party program, project, or such source systems 130 or applications, such as 132a, 132b, 132c, 132d, . . . , 132n, such a JIRA, a Jenkins, a SonarQube, a ServiceNow, an Excel file programs. Further, the components of the system 104 communicates to other computing devices, ex web servers and external data servers, such as a DB server 120 having a database system 122 and a database 124), (Id., ¶ 74, The enterprise software development dashboard tool of the solution provides the connector component 204, the analytical component 112 and the display component 110. 
Additionally, the tool also comprises a server component 122 of the DB server 120 which remains in constant communication with the analytical component 112. The connector component 204 of the dashboard framework system may be incorporated to gather data from multiple tools, systems and applications 130 in a way that the data is gathered via a predefined function initiated in a specified time period. The analytical component 112 may be instructed to receive the data from the connector component 204, applying business formulae (discloses pre-defined rules) onto the data and deriving useful analytics, eventually supplementing said data to the display component 110. The analytical component 112 may further be facilitated to perform analytics in order to derive descriptive analytics, predictive analytics and prescriptive analytics from the data supplemented to the analytical component 112 from the connector sub-connectors 204a, 204b, . . . , 204n. The analytical component 112 may further derive analytics from the supplemented data mainly for the attributes including but not limited to predictability, productivity, quality, maturity and return-on-investment (discloses mapping to pre-determined categorical values). The display component 110 of the software development dashboard system may be developed to display the analytics supplemented by the analytical component in a flexible and user-friendly manner), (Id., ¶ 93, FIG. 3.2 illustrates an exemplary dashboard report 300 generated by a dashboard framework system, in accordance with an embodiment of the present subject matter);
[Figure: media_image1.png (greyscale), exemplary dashboard report of Fox, FIG. 3.2]
apply an iterative logic to predict defect density values based on the structured attribute dataset (Id., ¶ 84, The test case module 214 is a review module which reviews the work being performed by individuals in an organization. The test case module 214 of the analytical component 112 identifies test case scenarios based on multiple parameters including but not limited to coverage percentage, completion percentage, executed pass percentage and executed failure percentage which assist in analysing the performance of a project or program as being developed or worked upon (discloses defect density values)), (Id., ¶ 7, agile software development processes are also added that further encourages flexible planning and low reaction time to changing requirements. The agile method divides jobs into small, incremental steps that may be completed with little forethought. Software iterations are created in a short amount of time. Consolidating and enabling project management, product management, customer management, and other stakeholders in a single environment or tool is required by project management systems that have been built to arrange these operations. They are typically unable to communicate with other third-party technologies or collaborate across many development teams or geographically dispersed locales);
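Neither the claim nor the cited portions of Fox spell out the "iterative logic" for predicting defect density. Purely as an illustrative sketch (the function name, the defects-per-KLOC metric, and the exponential-smoothing update are assumptions, not drawn from the claims or from Fox), one such iterative update over successive sprints might look like:

```python
def predict_defect_density(sprints, alpha=0.5):
    """Illustrative only: exponentially smoothed defect density
    (defects per KLOC) across successive sprints.
    `sprints` is a list of (defect_count, kloc) tuples."""
    estimate = None
    for defects, kloc in sprints:
        observed = defects / kloc
        if estimate is None:
            estimate = observed
        else:
            # blend the new observation with the running estimate
            estimate = alpha * observed + (1 - alpha) * estimate
    return estimate
```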
While suggested in at least Fig. 2 and related text, Fox does not explicitly disclose …correlate a derived attribute data from the structured attribute dataset with a defined attribute data to derive an accuracy percentage, wherein the accuracy percentage signifies a potential risk to subsequent tasks in the software development lifecycle of projects; create a decision tree structure using the structured attribute dataset to train a supervised learning model and predict spillover risk values, wherein a maximum information retention is determined for predicting the spillover risk values that are obtained when a branching node is created for the structured attribute data; and combine the KPI scores, the accuracy percentage and the spillover risk values and the defect density values for risk assessment in the software development lifecycle of projects to generate indicators of risks, wherein the system employs the generated indicators of risks for predictive risk assessment to monitor and minimise risks and improve overall performance of software development lifecycle projects in real-time
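The claimed "maximum information retention ... when a branching node is created" corresponds to the conventional information-gain criterion used to select decision-tree splits. As a minimal sketch (the function names and the binary left/right split are illustrative assumptions, not language from the claims or the references):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, left, right):
    """Reduction in entropy (information 'retained') when a branching
    node splits `labels` into `left` and `right` partitions; the split
    maximizing this quantity would be chosen for the node."""
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted
```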
However, Nagar discloses …correlate a derived attribute data from the structured attribute dataset with a defined attribute data to derive an accuracy percentage, wherein the accuracy percentage signifies a potential risk to subsequent tasks in the software development lifecycle of projects (Nagar, ¶ 37, With each vendor having its own node, the software development company may associate with each vendor's node the risk-percentage for outsourcing the CBP to that specific vendor. The software development company may begin this process by assigning risk percentages to risks shared by all vendors (discloses accuracy percentage signifying risk). This risk percentage may be calculated using the previously-identified risks in the task-vendor risk-profiles. For example, a software development company may associate a sixty percent risk of knowledge transfer to outsourcing the CBP to any vendor, regardless of the vendor chosen. The software development company may assign this risk percentage to link 410, which connects outsourcing node 400 with complete business process node 415), (Id., ¶ 38, The software development company may then assign risk percentages to risks posed by each individual vendor. The software development company may identify the individual risks by referring to the previously created risk-profiles. The individual risk percentages may be reflected in the links 430, 432, and 434 connecting each vendor node 420, 422, and 424, respectively, to complete business process node 415. The set of all risks taken together for each vendor-process scenario constitutes the decision-tree risk-profile associated with the scenario), (Id., ¶ 43, A software development company may associate individual risks for outsourcing a specific subset of the tasks of the CBP to a specific vendor with the link connecting that vendor to the specific subset node 510 or 520. 
For example, outsourcing subset X of the CBP to vendor i may involve a fifty-five percent risk of work stoppages (discloses downstream risk) because of the threat of a volcano eruption near the facilities used by vendor i. In this case, the software development company may associate this work stoppage risk percentage with link 516 connecting vendor node 512 to subset X node 510. Each of links 516, 518, 526, and 528 may have individual risk percentages associated with specific risks for outsourcing a specific subset of tasks of the CBP to an individual vendor);
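Nagar's cited paragraphs attach individual risk percentages to the links of each vendor-process scenario but do not give an aggregation formula. One plausible way (an assumption for illustration only) to roll the link risks up into a scenario-level risk-profile is to treat them as independent events:

```python
def combined_scenario_risk(link_risks):
    """Illustrative only: aggregate the risk percentages attached to the
    links of one vendor-process scenario (cf. Nagar's decision-tree
    risk-profile), assuming the individual link risks are independent.
    `link_risks` are percentages, e.g. [60, 55]."""
    survival = 1.0
    for r in link_risks:
        survival *= 1 - r / 100.0  # probability this risk does NOT occur
    return round((1 - survival) * 100.0, 2)
```

For instance, the sixty-percent shared knowledge-transfer risk and a fifty-five-percent work-stoppage risk would combine to a roughly eighty-two-percent scenario risk under this independence assumption.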
and combine the KPI scores, the accuracy percentage and the spillover risk values and the defect density values for risk assessment in the software development lifecycle of projects to generate indicators of risks, wherein the system employs the generated indicators of risks for predictive risk assessment to monitor and minimise risks and improve overall performance of software development lifecycle projects in real-time (Id., ¶ 53, The KPIs to measure vendor performance may include factors such as cost, on-time delivery, code quality, and software performance. Most of these KPIs may be tangible and directly related to the vendor. The cost component may be measured against the expected cost for performing similar function internally within the software development company. Cost overruns may be a factor when measuring this KPI. On-time delivery may be measured against specified due dates. A good measure of code quality may be the number of bugs in the developed code. Finally, software performance may be measured against metrics such as computational performance, load time, scalability, and performance as perceived by the users), (Id., ¶ 29, The task of Quality Management 225 may present a good candidate for outsourcing for some software developing companies. An iterative process, Quality Management 225 may be closely aligned to Code Development 220. Using the product requirements document prepared by Product Management 215, Quality Management 225 may ensure that the features specified by Product Management 215 are correctly implemented in the final product. The performance of Quality Management 225, and implicitly of Product Development 220, may be easily determined in some software developing companies by remotely monitoring product quality. Thus, the company may have an incentive to carefully monitor the reports prepared by Quality Management 225 to reduce, or even minimize, the risks associated with outsourcing Quality Management 225. 
Therefore, despite depending on Product Management 215, Quality Management 225 may be outsourced. Some software developing companies may find it advantageous to outsource Quality Management 225 and Coding 220 to the same vendor. Doing so may allow Quality Management 225 to perform its task more efficiently by using the knowledge gained by Coding 220 of the developed software), (Id., ¶ 20, As shown in the exemplary flow chart of FIG. 1, step 120 includes identifying one or more tasks to outsource. These tasks may include the tasks identified with software development, such as pre-sales, sales, p