Prosecution Insights
Last updated: April 19, 2026
Application No. 19/126,562

PROGRESS PREDICTION METHOD, PROGRESS PREDICTION DEVICE, AND PROGRESS PREDICTION PROGRAM FOR PROJECT

Non-Final OA: §101, §102, §103
Filed: May 01, 2025
Examiner: WARNER, PHILIP N
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Chiyoda Corporation
OA Round: 1 (Non-Final)

Grant Probability: 36% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 7m
Grant Probability With Interview: 65%

Examiner Intelligence

Career Allow Rate: 36% (39 granted / 107 resolved; -15.6% vs TC avg)
Interview Lift: +28.6% among resolved cases with interview
Avg Prosecution: 3y 7m; 28 applications currently pending
Total Applications: 135 across all art units
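The headline figures above hang together arithmetically. A quick check, assuming the dashboard's (undocumented) definitions are the obvious ones, i.e. that the interview lift is simply the with-interview grant rate minus the career allow rate:

```python
# Reconstructing the examiner figures shown above (illustrative; the
# dashboard's exact rounding and definitions are assumptions).
granted, resolved = 39, 107

allow_rate = granted / resolved * 100          # career allow rate
print(round(allow_rate, 1))                    # 36.4 -> displayed as "36%"

# Assumed definition: lift = with-interview grant rate - career allow rate
with_interview = 65.0
lift = with_interview - allow_rate
print(round(lift, 1))                          # 28.6 -> "+28.6%"
```

Both displayed values (36% and +28.6%) are recoverable from the raw 39/107 counts, which suggests the dashboard rounds only at display time.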

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

TC averages are estimates; based on career data from 107 resolved cases.
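The per-statute deltas all appear to point back to a single Tech Center baseline. Recovering it from the figures above (a consistency check on the dashboard's numbers, not an official USPTO statistic):

```python
# Each statute's allowance rate plus its "vs TC avg" delta should recover
# the Tech Center baseline the dashboard compared against.
rates  = {"101": 31.8, "102": 9.5, "103": 53.8, "112": 4.9}
deltas = {"101": -8.2, "102": -30.5, "103": 13.8, "112": -35.1}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute recovers the same 40.0% baseline estimate
```

All four statutes yield the same 40.0% figure, consistent with the dashboard applying one TC-wide average estimate across statutes.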

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following NON-FINAL Office Action is in response to Applicant’s communication filed 05/01/2025 regarding Application 19/126,562. The following is the first action on the merits.

Priority Acknowledgment

Examiner acknowledges priority claim to Application JP2022/045443 with priority filing date of 12/08/2022.

Status of Claim(s)

Claims 1-14 are currently pending and are rejected as follows.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-14 are directed to an invention for: acquiring data of a progress plan including a progress level of the step and an execution timing for a project; acquiring data of progress performance including a progress level of a completed part of the plan; calculating a performance period from a reference time point of the step to a predetermined time point at which a predetermined progress level was achieved; calculating a plan period from the reference time point of the step to a time point at which a progress level the same as the predetermined progress level is achieved; calculating, based on the performance period and the plan period, a progress evaluation index regarding progress of the step; and predicting, based on the evaluation index, data of the progress after the predetermined time point.
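Stripped of claim language, the recited steps reduce to a small computation. A minimal sketch for orientation only; the variable names and the exact index formula (performance period divided by plan period) are assumptions, since the claim does not fix a formula:

```python
# Hypothetical sketch of the claimed steps; the index definition
# (performance_period / plan_period) is an assumption, not quoted from the claim.
def predict_progress(plan, performance, level):
    """plan/performance map progress level -> days from the reference time point."""
    performance_period = performance[level]   # days actually taken to reach `level`
    plan_period = plan[level]                 # days planned to reach `level`
    index = performance_period / plan_period  # progress evaluation index (>1 = behind plan)
    # Predict timings after the predetermined time point by scaling the
    # remaining planned timings with the index.
    return {lvl: t * index for lvl, t in plan.items() if t > plan_period}

plan = {0.25: 10, 0.50: 20, 0.75: 30, 1.00: 40}   # planned days per progress level
performance = {0.25: 12, 0.50: 24}                # observed so far
print(predict_progress(plan, performance, 0.50))  # {0.75: 36.0, 1.0: 48.0}
```

With the project running at index 1.2 (24 days taken vs. 20 planned to reach 50%), the remaining milestones are projected to slip proportionally.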
These steps fall under subject matter groupings which the courts have considered ineligible (mathematical concepts, certain methods of organizing human activity, and mental processes). The claims do not integrate the abstract idea into a practical application, and do not include additional elements that provide an inventive concept (i.e., elements sufficient to amount to significantly more than the abstract idea).

Under Step 1 of the Alice/Mayo framework, it must be considered whether the claims are directed to one of the four statutory categories of invention. Claims 1-13 are directed to a method comprising at least one step. Claim 14 is directed to a product. Accordingly, the claims fall within the four statutory categories of invention (method and product) and will be further analyzed under Step 2 of the Alice/Mayo framework.

Under Step 2A, Prong One, of the Alice/Mayo framework, it must be considered whether the claims recite any abstract ideas. Independent claims 1 and 14 recite the abstract ideas of mathematical concepts, organizing human activity, and a mental process in the following limitations: acquiring data of a progress plan including a progress level of the step and an execution timing corresponding thereto; acquiring data of progress performance including a progress level of a completed part of the progress plan and an execution timing corresponding thereto; calculating, based on the data of the progress performance, a performance period from a reference time point of the step to a predetermined time point at which a predetermined progress level was achieved; calculating, based on the data of the progress plan, a plan period from the reference time point of the step to a time point at which a progress level the same as the predetermined progress level is achieved; calculating, based on the performance period and the plan period, a progress evaluation index regarding progress of the step; and predicting, based on the progress evaluation index, data of the progress performance after the predetermined time point. Dependent claims 2-13 merely further limit the abstract idea and are subject to the same rationale expressed above.

Under Step 2A, Prong Two, any additional elements are considered. Independent claim 14 recites: a processor. Dependent claim 4 recites: a first machine learning model. Dependent claim 6 recites: a second machine learning model. Dependent claim 11 recites: a third machine learning model. These additional elements, considered both individually and as an ordered combination, do no more than represent mere instructions to implement the abstract idea on a computer ("apply it") (see MPEP 2106.05(f)). Additionally, the claims represent insignificant extra-solution activity (see MPEP 2106.05(g)). These elements are recited with a high degree of generality, and the specification sets forth the general-purpose nature of the technologies required to implement the invention. Support for this determination can be found in Paragraphs [0079]-[0083] and [0102]-[0104] of Applicant’s specification.

Under Step 2B, the eligibility analysis evaluates whether the claims as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of elements, adds an inventive concept to the claims (MPEP 2106.05). As explained with respect to Step 2A, Prong Two, there are several additional elements. The processor and the first, second, and third machine learning models are all, at best, the equivalent of merely adding the words “apply it” to the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (see MPEP 2106.05(f)). Further, the processor represents insignificant extra-solution activity (see MPEP 2106.05(g)), specifically mere data gathering, which is known to be well-understood, routine, or conventional within the art (see MPEP 2106.05(d)(II)). Insignificant extra-solution activity, especially that which is well-understood, routine, or conventional in the art, does not provide an inventive concept. Even when considered in combination, these additional elements are not deemed sufficient to provide an inventive concept beyond the abstract idea; therefore, the claims are not eligible. (Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983; see also 134 S. Ct. at 2359, 110 USPQ2d at 1984 (warning against a §101 analysis that turns on “the draftsman’s art”).) Independent claim 1 and dependent claims 2-3, 5, 7-10, and 12-13 do not recite any further additional elements and are thus rejected for the same reasons enumerated above.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Prieto (US 2014/0052489 A1).

Claims 1 and 14 – Prieto discloses the following limitations:

a processor (Prieto: Paragraph 24, “It should be noted that while the following description is drawn to time derivative-based program management systems and methods, various alternative configurations are also deemed suitable and may employ various computing devices including servers, interfaces, systems, databases, agents, peers, engines, controllers, or other types of computing devices operating individually or collectively. One should appreciate the computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer readable storage medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). The software instructions preferably configure the computing device to provide the roles, responsibilities, or other functionality as discussed below with respect to the disclosed system. In especially preferred embodiments, the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, or other electronic information exchanging methods.
Data exchanges preferably are conducted over a packet-switched network, the Internet, LAN, WAN, VPN, or other type of packet switched network.”)

a project including at least one step (Prieto: Paragraph 13, “The inventive subject matter is considered to include a project management system comprising a project database and a project analysis engine operatively coupled with the project database. The project database can be configured to store project metrics with respect to time, wherein the project metrics are associated with a program, or projects related to a program, and can include cost incurred, resources utilized, schedule, completion percentage, personnel usage, man-hour usage, or such parameters that define or characterize attributes of the program. The project analysis engine can be configured to access the project database to select at least one of the project metrics of the project and calculate a disruption metric as a function of at least a third order time derivative of the project metric. The disruption metric can include a disruption index, which is defined to be derived specifically from the third order time derivative of the project metric and which represents jerk or disruption changes in the project metric in the project. A project achievement curve (e.g., planned loading curve, estimated progress curve, etc.), can be obtained based on the project metric and can be displayed on an output device with respect to the disruption metric to portray the duration, timestamp, location, severity of disruption or jerk, or other disruptive aspect of the program. Further, the distribution metric can include an efficiency index, which is defined to be derived specifically from the fourth order time derivative of the program metric and represents a snap or jounce associated with changes in the project metric.”)

acquiring data of a progress plan including a progress level of the step and an execution timing corresponding thereto; (Prieto: Paragraph 28, “In FIG.
1, program management system 100 comprises of a project database 110, a project analysis engine 120 operatively coupled to the project database 110, and an output device 130. The project database 110, the project analysis engine 120, and the output device 130 can be operatively connected to each other by a network 115. Network 115 can either be a wired or a wireless network and can comprise a WAN, LAN, VPN, the Internet, cellular telephone network, or other types of network. The project database 110, alternatively also referred to as program database 110 hereinafter, can be configured to store one or more project or program metrics 112A, 112B . . . 112N, which define, present, or objectify the progress or characteristics of a project or program. The project metrics 112A . . . 112N, collectively referred to as project metrics 112 hereinafter, can be associated with a program, or one or more projects related to a program, and can include manpower utilized, cost incurred, resources used, logistics, schedule, test, inspection, completion percentage, personnel usage, man-hour usage, among other such program related metrics.”; Paragraph 31, “Further, a higher order time derivative such as a fourth order time derivative of a project metric can indicate the level of efficiency or inefficiency in a program, i.e. the fourth order time derivative can indicate when a program shows sudden or consistent improvement or inefficiency over a period of time. A fourth order time derivative represents the change in third order time derivative and therefore is configured to represent the noise or disruptions caused or detected in the third order time derivative. Thus, the fourth order time derivative can be considered representative of a level of efficiency or inefficiency in the program. 
The fourth order time derivative can also be used to detect efficiencies in behavior of project metrics over a period of time that are not detected as disruptions by the third order derivative calculations, thereby increasing the sensitivity to change in project metrics over a given time instant. Furthermore, as each project metric represents a different view of the program progress, one metric might not necessarily reflect efficient application of a resource in a project with respect to a particular point of view, whereas other metrics might appear to represent efficient application of different resources. For example, consider cost and man-hours as two project metrics in a project over a small period of time, T. The project could be executed such that, when looked from the perspective of cost as the project metric through analysis of the third order time derivative of cost, it appears to be running smoothly or as planned without any disruptions. Furthermore, it can also be possible that the fourth order time derivative of the cost project metric shows that the metric lacks efficiency and has multiple small duration negative changes in disruptions. In such a case, it can also be possible that when a fourth order time derivative of man-hours is computed for the same time period, the time derivative depicts a strong efficiency with respect to man-hours spent during a portion of time period T with some number of man-hours saved during the portion. Therefore, multiple project metrics or even higher order derivatives of a single metric can yield different views of behavior or impact of the project metric 112 on the program, and as a result, metrics 112 can be treated individually or in combination.”; Paragraph 41, “An achievement curve 132 can be obtained based on one or more project metrics 112 and can represent actual or planned progress or achievement of the project or program. 
Disruption metric 122, which is presented through one or more disruption indices, can be displayed on an output device 130 with respect to the achievement curve 132 to portray the duration, timestamp, location, severity of disruption or jerk, or other disruptive attributes of the project. As each disruption index is derived from a third order or higher time derivative of the project metric, it represents the change in rate of change (second order time derivative) of the project metric and therefore indicates the actual reason for the disruption or jerk in a project instead of merely indicating the rate of progress of project. Understanding the actual reason behind the jerk can, as a result, help the people and activities responsible for the jerk and assist in taking necessary measures to overcome the inefficiencies. The output device 130 can be a web browser, a cell phone, a tablet, a printer, a computer device or other type of suitable device.”)

acquiring data of progress performance including a progress level of a completed part of the progress plan and an execution timing corresponding thereto; (Prieto: Paragraph 34, “Project metrics 112 can be stored in such a manner that the data relating to these metrics 112 can easily be categorized and retrieved to analyze the project with respect to their progress, resource utilization, among other attributes. For example, a project metric 112 can represent the "percentage of work completed in the project". Although this and other forthcoming embodiments of the present disclosure are explained with respect of this project metric, it would be well appreciated that project metrics can include any parameter that can define or be associated with a project or program. Various other project metrics 112 can, individually or in combination with other, be analyzed to derive their respective third order and fourth order time derivatives to identify disruptions or efficiencies in the project.
It has been appreciated by the applicant that a project metric 112 can be presented in a time-series model to explain the change in values of project metric 112 over a defined period of time. Before the project analysis engine 120 processes the project metric 112, the time-series fitted model can also be refined to determine if the model is stationery or presents some seasonality or periodicity in behavior.”; Paragraph 35, “In an embodiment, project analysis engine 120, alternatively also referred to as program analysis engine 120 hereinafter, can be implemented in a server such as a HTTP server or as a web service, PaaS, Iaas, SaaS, cloud or the like. Analysis engine 120 can be configured to access project database 110 to select at least one project metric 112 of a project and calculate a disruption metric 122 as a function of at least a third order time derivative of the project metric. Third order time derivative of a project metric 112, as described above, can be computed by calculating the change in ramp rate or acceleration (second order time derivative) of the concerned project metric 112 with respect to time. Ramp rate represents the change in rate of change of a particular project metric with respect to time. For example, in case the project metric 112 represents % of work completed, change in percentage of work with respect to time, say from 5% to 9% in 5 days (4% per 5 days) and 9% to 11% in the next 5 days (2% per 5 days) represent velocities of the project metric 112 calculated at five day intervals. A ramp rate of change of the project metric 112, on the other hand, represents the acceleration, i.e. the acceleration represents rate of change the velocities (-2% per 5 days) with respect to work completion in 5 days, wherein such acceleration can be computed on a daily basis or according to other desired time interval. 
Third order time derivative, on the other hand, further evaluates the change in project metric over a period of time and focuses on instantaneous changes in acceleration, which is also commonly referred to as "jerk" in physics. Therefore, increases in acceleration of project metric 112 result in positive values of the third order time derivative of project metric 112. Such deviations of actual project metrics from planned values can lead to higher jerks or disruptions, which can be monitored, identified, or evaluated by the third order time derivatives.”)

calculating, based on the data of the progress performance, a performance period from a reference time point of the step to a predetermined time point at which a predetermined progress level was achieved; (Prieto: Paragraph 56, “An actual work completed achievement curve 215 is drawn on graph 200 to represent actual percentage of work completed during the time period T. As has earlier been mentioned, "percentage of work completed" is merely an exemplary project metric 112 and any other project metric such as resource utilized or cost incurred, can be used independently or in combination with other project metrics. In an embodiment, curve 215, can also be referred to as achievement curve 132 interchangeably, whereas in an alternate embodiment, the curve 215 can be different from achievement curve 132 in terms of the manner in which their respective project metrics are processed, evaluated, or analyzed. As can be seen from curve 215, even though projected work to be completed after the first 10 days was 10%, the actual work completed was 9%, leading to a lag of 1%. Similarly, even though the projected work to be completed after the first 30 days was 30%, the actual work completed was 32%, leading to a delta of 2% over the projected schedule.
As can be seen, the actual work completed curve 215 merely depicts whether the project metric 112 is ahead of or behind the planned projected curve 210 but does not explain or give any details of the reasons, location, timestamp, or activity giving rise to such a lag or leap. Although this document references "derivatives", one should appreciate that such values can be calculated based on difference in discrete values at points in time.”; Paragraph 57, “Project analysis engine 120 can access project database 110 and fetch data related to "percentage of work completed" as the project metric 112 and compute third order time derivative of the project metric 112. The engine 120 can then generate a disruption metric 122 as a function of the third order time derivative, wherein the disruption metric 122 comprises of a disruption index that is derived from the third order time derivative and represents a jerk or disruption in the project due to the selected project metric 112. The disruption metric 122 can also comprise an efficiency index that is derived from fourth order time derivative of the project metric 112 and represents a snap or jounce or efficiency in the project when seen with respect to the selected project metric 112.”; Paragraph 60, “In the present exemplary embodiment, FIGS. 2A and 2B are shown with respect to each other and map in terms of time vs. project metric behavior. To identify jerk and snap clearly, jerk is represented through a dotted line and snap is represented through a continuous line. As was noticed in FIG. 2A, after the first time period of 10 days, the actual project work completed was 9%, which was a lag of 1% over the projected completion of 10%. FIG. 2B can, for this first time period (i.e., per day), through the disruption and efficiency indices stored in the disruption metric 122, identify and present the reason, location, and severity of the jerk or snap though the magnitude of unit of jerk or snap. 
For example, it can be noticed from representation 250 that there was a jerk or disruption of 2 units on the 7'th day of the project and a snap or efficiency of 3 units on 9'th day of the project, which lead to an overall percentage of work completion as 9%. A project manager or other stakeholder can, based on the representation 250, point to the event responsible, location, duration, and severity of the jerk (on 7'th day) and snap (on 9'th day) so as to understand the rationale behind the overall behavior of the project metric 112 and take a more informed decision on the project or program including allowing project teams to take necessary steps to improve the identified reasons for jerks. Reasons for disruption and efficiency can be measured and evaluated based on the details of the activities undertaken during the days or time of jerk or snap. In an embodiment, multiple project metrics can be evaluated simultaneously to assess and evaluate one or a combination of parameters that affect the project.”)

calculating, based on the data of the progress plan, a plan period from the reference time point of the step to a time point at which a progress level same as the predetermined progress level is achieved; (Prieto: Paragraph 13, “The inventive subject matter is considered to include a project management system comprising a project database and a project analysis engine operatively coupled with the project database. The project database can be configured to store project metrics with respect to time, wherein the project metrics are associated with a program, or projects related to a program, and can include cost incurred, resources utilized, schedule, completion percentage, personnel usage, man-hour usage, or such parameters that define or characterize attributes of the program.
The project analysis engine can be configured to access the project database to select at least one of the project metrics of the project and calculate a disruption metric as a function of at least a third order time derivative of the project metric. The disruption metric can include a disruption index, which is defined to be derived specifically from the third order time derivative of the project metric and which represents jerk or disruption changes in the project metric in the project. A project achievement curve (e.g., planned loading curve, estimated progress curve, etc.), can be obtained based on the project metric and can be displayed on an output device with respect to the disruption metric to portray the duration, timestamp, location, severity of disruption or jerk, or other disruptive aspect of the program. Further, the distribution metric can include an efficiency index, which is defined to be derived specifically from the fourth order time derivative of the program metric and represents a snap or jounce associated with changes in the project metric.”; Paragraph 33, “Further, apart from conventional project metrics, a number of new or customized project metrics can be defined where analysis of one or a combination such introduced project metrics can throw deeper insights into the execution, planning, or implementation of a program. 
Certain exemplary project metrics can include email communications of program or project teams, text messages, employee retention, recruitment pattern, inputs from multiple project management tools, quality reviews and number of defects, extent of rework, stakeholder involvement, impact on management, change in scope, cost-performance indicator, schedule performance index (SPI), sensor outputs, response times, an index indicating comparison between planned value (PV), actual cost (AC) and Earned Value (EV), or other such measures that are directly or indirectly indicative of the above.”; Paragraph 39, “As has been described above, project metrics 112 can also include multiple user defined metrics, third order time derivatives of which can be used to flag abrupt or disruptive events. For example, change in rate of exchange of emails between two parties involved in a particular project can be an indicator of a disruption in the project, specifically in the activity defined in the subject line of the emails. Disruption index can therefore be derived based on the third order time derivative of email project metric and an appropriate action can be taken. Furthermore, it should be appreciated that a disruption can also exist or take place in case there is no jerk in a particular project metric over a defined period of time. For example, in case an activity was planned to be initiated on 15'th July, which was expected to raise the cost of the project by USD 500,000, the lack of change in the rate of change of the cost project metric till 18'th July could mean a disruption in the project and a possible suggestion of non-initiation of the activity. In this example, the farther the date of non-change in cost project metric is from 15'th July, the higher can be the value of the disruption index. 
In other words, a delay in a start date can be an indicator of future disruptions.”; Paragraph 42, “Achievement curve 132 can also be obtained based on planned values for project metrics 112 such as planned schedule, cost, expenditure rates, man-hours, installation rates, or earned value, before or after initiation of the project. Multiple achievement curves 132 can also be represented at the output device 130 for one or a combination of project metrics 112. In another instance, achievement curve 132 can also be modeled or simulated based on previous similar projects, projections of subject matter experts who can make objective assumptions around efficiency with which certain activities would be carried out and also efficiency with which transition between activities would take place (start:stop; ramp-up:ramp-down), or known simulation models such as Monte Carlo techniques. Therefore, achievement curve 132 could be built based on a statistical aggregation of model programs or projects from Monte Carlo simulations. Project metric data that is represented with respect to time by the achievement curve can be hereinafter referred to as baseline data.”)

calculating, based on the performance period and the plan period, a progress evaluation index regarding progress of the step; and (Prieto: Paragraph 42, “Achievement curve 132 can also be obtained based on planned values for project metrics 112 such as planned schedule, cost, expenditure rates, man-hours, installation rates, or earned value, before or after initiation of the project. Multiple achievement curves 132 can also be represented at the output device 130 for one or a combination of project metrics 112.
In another instance, achievement curve 132 can also be modeled or simulated based on previous similar projects, projections of subject matter experts who can make objective assumptions around efficiency with which certain activities would be carried out and also efficiency with which transition between activities would take place (start:stop; ramp-up:ramp-down), or known simulation models such as Monte Carlo techniques. Therefore, achievement curve 132 could be built based on a statistical aggregation of model programs or projects from Monte Carlo simulations. Project metric data that is represented with respect to time by the achievement curve can be hereinafter referred to as baseline data.”; Paragraph 43, “Achievement curve 132 can be represented as a mathematical equation (e.g., a curve, a fitted curve to actual data, fitted curve simulation data, hybrid data, etc.), which aims to represent utilization of resources over the proposed time of the project. The curve 132 can also be illustrated as a combination of two or more curves that help represent side by side comparisons of actual time and expenditure components (actual project metric values) vs. proposed time and costs allocations of specific resources (planned values for the project metrics).”; Paragraph 54, “FIGS. 2A and 2B illustrate exemplary graphs showing disruption or jerk or efficiency or snap or jounce in a project through analysis of a project metric 112 with respect to time. In the present disclosure, as an exemplary illustration, the graphs are drawn for "percentage of work completed" as the project metric 112. The graphs can help in understanding the disruptions or efficiencies that occur during a project. FIG. 
2A illustrates a projected achievement curve 210 for a project metric 112 (percentage of work completed for example) with respect to time and also illustrates the actual obtained achievement curve 215 for the project metric 112 of the project, plotted over percentage of work completed against time period in two axis. FIG. 2B, on the other hand, through disruption metric 122 comprising one or more disruption and efficiency indices, illustrates the magnitude and location of jerk or snap in the project. FIG. 2A and FIG. 2B can be described in a better manner as below.”)

predicting, based on the progress evaluation index, data of the progress performance after the predetermined time point. (Prieto: Paragraph 55, “FIG. 2A illustrates a graph 200 for percentage of work completed over time period T. In the present exemplary embodiment, "percentage of work completed" depicts a project metric 112 and is expressed as work completion percentage taken along Y axis. Amount of time taken to finish the specified amount of work is illustrated on X axis as time period T of the project. For illustration and simplicity, percentage of work completed is divided into 10 divisions, wherein each division indicates 10% of project work completed. Time period T along X axis can be divided into number of days after which the work rate is to be calculated and in the present illustration, the defined time period interval is 10 days. Therefore, after every 10 days, the percentage work completed is calculated for the present illustration. A projected work completion achievement curve 210 presenting the expected progress of the project can be estimated and drawn on the graph 200. Curve 210 can be drawn and defined before the project is initiated. The projected work completion curve 210 represents the percentage of work expected to be completed at defined time period intervals. For example, in the present illustration of FIG.
2, 10% of work involved in the project is expected to be completed within first 10 days and 50% of the work is expected to be completed within 50 days from initiation of the project. This work rate, in the present illustration, would lead to 100% work completion after 100 days. Therefore, a project or program manager or a user can plan that the project will be completed in 100 days and that the project be monitored after every 10 days to understand the progress of the program.”; Paragraph 69, “Project analysis engine 120 can generate a disruption metric 122 comprising mean square derivative (MSD) value of fourth order time derivative of one or more project metrics 112 of a program, wherein mean square derivative (MSD) value is a sum of squares of periodic efficiency values or snaps observed over a number of time periods. MSD value can be computed by calculating fourth order time derivatives of a project metric at periodic time intervals (such as daily) of a defined duration of project execution (such as 1 month) and then calculating mean of the derivative values. MSD value can be stored in the disruption metric 122 along with disruption and efficiency indices. Efficiency index can also be configured to include an absolute value derived from the fourth order time derivative of one or more project metrics 112. Such absolute values have already been illustrated in FIG. 2B. In another implementation, efficiency index can include cumulative efficiency or inefficiency over a defined period of time, wherein the cumulative efficiency can include a sum of all absolute values derived from the fourth order time derivatives for the defined period.”; Paragraph 70, “FIG. 3A represent an S-curve 315 with respect to a project metric 112 such as man-hours and time. Any other suitable project metric based on output, cost, scope of work, procurement, logistics, risk, communication, human resource plan, quality, or number of activities can also be incorporated. 
S-curve 315 is an S-shaped graph produced by Sigmoid formula which calculates the cumulative expenditure of certain project metrics against time. S-curve 315 is typically used against estimates such as projections or budgets on such project metrics 112. The projected curve 310 represents the projections of total number of man-hours planned to be used over the period of 100 days of the project. For simplicity, the projected curve 310 has been shown with a slope of 1, with 10% of the work intended to be covered in 10 days and 100% of the work intended to be covered in 100 days. As can be seen, the S-curve 315 typically represents a slow start and a slow finish with varying acceleration in between, wherein the man-hours consumed are lower than projected in the first half and higher in the second half. For example, after 30 days the number of man-hours to be consumed is planned to be 30 and the actual consumption is 15. Similarly, after 90 days, the number of man-hours to be consumed is planned to be 90 and the actual consumption is 98.”; Paragraph 73, “As a can be seen, the efficiency index or snap can have actual values from negative to positive, wherein the negative values can indicate inefficiencies (such as undesirable disruptions or events leading to lag in projects) and positive values can indicate efficiencies such as on-time or before time start of new activities, email communications indicating no change in scope of an activity, reduction in man-hours, among others. Furthermore, magnitude of efficiency index, as shown on Y axis can indicate the level of efficiency, which can be compared with defined thresholds, so as to take necessary action for highlighting the event responsible, activities undertaken in the event, people responsible for those activities, duration of those activities, importance of each of those activities, and correct or continue the activities depending on efficiency or inefficiency. 
It should also be noted that, in the present illustration of FIG. 3C, the frequency of efficiency or inefficiency is more in the middle of the time duration of 10 days and highest around the 5'th day, whereas the frequency of variation is relatively lower in the beginning and in the end. However, such a representation is purely project or program specific and can vary.”)

Claim(s) 2 – Prieto discloses the limitations of claim 1. Prieto further discloses the following: wherein the progress evaluation index is a ratio of the plan period to the performance period. (Prieto: Paragraph 28, “In FIG. 1, program management system 100 comprises of a project database 110, a project analysis engine 120 operatively coupled to the project database 110, and an output device 130. The project database 110, the project analysis engine 120, and the output device 130 can be operatively connected to each other by a network 115. Network 115 can either be a wired or a wireless network and can comprise a WAN, LAN, VPN, the Internet, cellular telephone network, or other types of network. The project database 110, alternatively also referred to as program database 110 hereinafter, can be configured to store one or more project or program metrics 112A, 112B . . . 112N, which define, present, or objectify the progress or characteristics of a project or program. The project metrics 112A . . . 112N, collectively referred to as project metrics 112 hereinafter, can be associated with a program, or one or more projects related to a program, and can include manpower utilized, cost incurred, resources used, logistics, schedule, test, inspection, completion percentage, personnel usage, man-hour usage, among other such program related metrics.”; Paragraph 35, “In an embodiment, project analysis engine 120, alternatively also referred to as program analysis engine 120 hereinafter, can be implemented in a server such as a HTTP server or as a web service, PaaS, Iaas, SaaS, cloud or the like.
Analysis engine 120 can be configured to access project database 110 to select at least one project metric 112 of a project and calculate a disruption metric 122 as a function of at least a third order time derivative of the project metric. Third order time derivative of a project metric 112, as described above, can be computed by calculating the change in ramp rate or acceleration (second order time derivative) of the concerned project metric 112 with respect to time. Ramp rate represents the change in rate of change of a particular project metric with respect to time. For example, in case the project metric 112 represents % of work completed, change in percentage of work with respect to time, say from 5% to 9% in 5 days (4% per 5 days) and 9% to 11% in the next 5 days (2% per 5 days) represent velocities of the project metric 112 calculated at five day intervals. A ramp rate of change of the project metric 112, on the other hand, represents the acceleration, i.e. the acceleration represents rate of change the velocities (-2% per 5 days) with respect to work completion in 5 days, wherein such acceleration can be computed on a daily basis or according to other desired time interval. Third order time derivative, on the other hand, further evaluates the change in project metric over a period of time and focuses on instantaneous changes in acceleration, which is also commonly referred to as "jerk" in physics. Therefore, increases in acceleration of project metric 112 result in positive values of the third the third order time derivative of project metric 112. 
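The finite-difference computation quoted above — velocity, ramp rate (acceleration), and jerk of a work-completion series — can be reproduced with a short sketch. The data and function names below are the editor's, chosen to match the quoted 5% → 9% → 11% example; they are illustrative only and not part of the record:

```python
def diff(xs):
    """First-order finite difference of a series (one interval apart)."""
    return [b - a for a, b in zip(xs, xs[1:])]

# Hypothetical percentage of work completed, sampled every 5 days,
# extending the quoted 5% -> 9% -> 11% example with two more samples.
work_pct = [5.0, 9.0, 11.0, 12.0, 14.0]

velocity = diff(work_pct)      # [4.0, 2.0, 1.0, 2.0]  % per interval, as quoted
acceleration = diff(velocity)  # [-2.0, -1.0, 1.0]     ramp-rate change (-2%/interval)
jerk = diff(acceleration)      # [1.0, 2.0]            3rd-order "disruption" signal
```

Spikes in the third list are what Prieto's disruption index flags against a threshold.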
Such deviations of actual project metrics from planned values can lead to higher jerks or disruptions, which can be monitored, identified, or evaluated by the third order time derivatives.”; Paragraph 51, “Third order time derivative of a project metric 112 can be calculated by identifying change in ramp rate or change in second order time derivative i.e., by identifying changes that occur in rate of change of percentage of work completed in a project over a defined period of time. In the present example, assuming in three days the percentage of work completed increases from 3% to 6%, the rate of change in work completed is 1% per day and change in rate of change (second order time derivative) would give an indication of how fast or slow "the percentage of work completed" becomes in a day or over the period of time. The third order time derivative can therefore be computed as the rate of change of second order, which can, for much smaller time intervals, detect the change in acceleration of the percentage of work completed and yield areas where the change represents an instantaneous disruption or jerk. Similarly, fourth order time derivative (efficiency index) of a project metric 112 can be calculated by identifying changes that occur in the third order time derivative of one or more project metrics. Third order time derivative of the project metric 112 gives a disruption index of the project metric 112, which, when measured alongside time, can indicate event responsible, location, duration, timestamp, severity, or other attributes that led to disruption or jerk in the project during particular time period.”)

Claim(s) 3 – Prieto discloses the limitations of claims 1-2. Prieto further discloses the following: wherein a constant value is used as the progress evaluation index.
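The progress evaluation index recited in claims 1-2 — a ratio of the plan period to the performance period, used to extrapolate future progress — lends itself to a compact illustration. The numbers and function names below are hypothetical, supplied by the editor and not drawn from the prosecution record:

```python
def evaluation_index(plan_period: float, performance_period: float) -> float:
    """Claim 2's index: ratio of the plan period to the performance period.
    Values below 1.0 indicate the step is running behind plan."""
    return plan_period / performance_period

def predict_completion(performance_period: float, plan_period: float,
                       planned_total: float, index: float) -> float:
    """Extrapolate a completion day by scaling the remaining planned
    duration by the inverse of the evaluation index."""
    return performance_period + (planned_total - plan_period) / index

# Hypothetical step: 30% progress was planned for day 30 but reached on day 40.
idx = evaluation_index(plan_period=30.0, performance_period=40.0)   # 0.75
eta = predict_completion(40.0, 30.0, planned_total=100.0, index=idx)
```

With these numbers the index is 0.75, pushing the predicted completion of a 100-day plan past day 130.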
(Prieto: Paragraph 78, “Step 430 includes the analysis engine deriving a disruption metric based on at least a third order time derivative of a project metric that is stored in the project metric database. The selected project metric of the program can be analyzed by reading project metric data over a time period and calculating a third order time derivative of the project metric at one or more time points to assess whether a jerk has happened in the program at any of those time points. Each jerk can represent a disruption (desired or undesired) that occurs the program. The disruption metric can be generated as a function of the third order time derivative of the project metric, and can be represented as f(d.sup.3PM/dt.sup.3), where PM represents the selected project metric. It should be appreciated that although the disruption metric has been described in the present disclosure as a function of the third order time derivative of the project metric, which indicates an indirect computation of the metric from the project metric, the disruption metric can also be directly implemented as equal in value to the third order time derivative of the project metric rather than being a function thereof. Alternatively, based on the project metric involved and program characteristics, the disruption metric can also be computed as a third order time derivative including a constant or offset value. 
Furthermore, the disruption metric can also represent multiple third order time derivatives of the project metric across different time periods and store them together as part of the metric.”; Paragraph 73, “As a can be seen, the efficiency index or snap can have actual values from negative to positive, wherein the negative values can indicate inefficiencies (such as undesirable disruptions or events leading to lag in projects) and positive values can indicate efficiencies such as on-time or before time start of new activities, email communications indicating no change in scope of an activity, reduction in man-hours, among others. Furthermore, magnitude of efficiency index, as shown on Y axis can indicate the level of efficiency, which can be compared with defined thresholds, so as to take necessary action for highlighting the event responsible, activities undertaken in the event, people responsible for those activities, duration of those activities, importance of each of those activities, and correct or continue the activities depending on efficiency or inefficiency. It should also be noted that, in the present illustration of FIG. 3C, the frequency of efficiency or inefficiency is more in the middle of the time duration of 10 days and highest around the 5'th day, whereas the frequency of variation is relatively lower in the beginning and in the end. However, such a representation is purely project or program specific and can vary.”)

Claim(s) 5 – Prieto discloses the limitations of claim 1. Prieto further discloses the following: wherein prediction of the data of the progress performance includes prediction of a completion date of the project, the progress prediction method comprising: (Prieto: Paragraph 35, “In an embodiment, project analysis engine 120, alternatively also referred to as program analysis engine 120 hereinafter, can be implemented in a server such as a HTTP server or as a web service, PaaS, Iaas, SaaS, cloud or the like.
Analysis engine 120 can be configured to access project database 110 to select at least one project metric 112 of a project and calculate a disruption metric 122 as a function of at least a third order time derivative of the project metric. Third order time derivative of a project metric 112, as described above, can be computed by calculating the change in ramp rate or acceleration (second order time derivative) of the concerned project metric 112 with respect to time. Ramp rate represents the change in rate of change of a particular project metric with respect to time. For example, in case the project metric 112 represents % of work completed, change in percentage of work with respect to time, say from 5% to 9% in 5 days (4% per 5 days) and 9% to 11% in the next 5 days (2% per 5 days) represent velocities of the project metric 112 calculated at five day intervals. A ramp rate of change of the project metric 112, on the other hand, represents the acceleration, i.e. the acceleration represents rate of change the velocities (-2% per 5 days) with respect to work completion in 5 days, wherein such acceleration can be computed on a daily basis or according to other desired time interval. Third order time derivative, on the other hand, further evaluates the change in project metric over a period of time and focuses on instantaneous changes in acceleration, which is also commonly referred to as "jerk" in physics. Therefore, increases in acceleration of project metric 112 result in positive values of the third the third order time derivative of project metric 112. Such deviations of actual project metrics from planned values can lead to higher jerks or disruptions, which can be monitored, identified, or evaluated by the third order time derivatives.”; Paragraph 55, “FIG. 2A illustrates a graph 200 for percentage of work completed over time period T. 
In the present exemplary embodiment, "percentage of work completed" depicts a project metric 112 and is expressed as work completion percentage taken along Y axis. Amount of time taken to finish the specified amount of work is illustrated on X axis as time period T of the project. For illustration and simplicity, percentage of work completed is divided into 10 divisions, wherein each division indicates 10% of project work completed. Time period T along X axis can be divided into number of days after which the work rate is to be calculated and in the present illustration, the defined time period interval is 10 days. Therefore, after every 10 days, the percentage work completed is calculated for the present illustration. A projected work completion achievement curve 210 presenting the expected progress of the project can be estimated and drawn on the graph 200. Curve 210 can be drawn and defined before the project is initiated. The projected work completion curve 210 represents the percentage of work expected to be completed at defined time period intervals. For example, in the present illustration of FIG. 2, 10% of work involved in the project is expected to be completed within first 10 days and 50% of the work is expected to be completed within 50 days from initiation of the project. This work rate, in the present illustration, would lead to 100% work completion after 100 days. Therefore, a project or program manager or a user can plan that the project will be completed in 100 days and that the project be monitored after every 10 days to understand the progress of the program.”; Paragraph 60, “In the present exemplary embodiment, FIGS. 2A and 2B are shown with respect to each other and map in terms of time vs. project metric behavior. To identify jerk and snap clearly, jerk is represented through a dotted line and snap is represented through a continuous line. As was noticed in FIG. 
2A, after the first time period of 10 days, the actual project work completed was 9%, which was a lag of 1% over the projected completion of 10%. FIG. 2B can, for this first time period (i.e., per day), through the disruption and efficiency indices stored in the disruption metric 122, identify and present the reason, location, and severity of the jerk or snap though the magnitude of unit of jerk or snap. For example, it can be noticed from representation 250 that there was a jerk or disruption of 2 units on the 7'th day of the project and a snap or efficiency of 3 units on 9'th day of the project, which lead to an overall percentage of work completion as 9%. A project manager or other stakeholder can, based on the representation 250, point to the event responsible, location, duration, and severity of the jerk (on 7'th day) and snap (on 9'th day) so as to understand the rationale behind the overall behavior of the project metric 112 and take a more informed decision on the project or program including allowing project teams to take necessary steps to improve the identified reasons for jerks. Reasons for disruption and efficiency can be measured and evaluated based on the details of the activities undertaken during the days or time of jerk or snap. In an embodiment, multiple project metrics can be evaluated simultaneously to assess and evaluate one or a combination of parameters that affect the project.”)

changing settings of resources necessary for execution of the step after the predetermined time point in a case where the completion date of the project exceeds a preset reference completion date; (Prieto: Paragraph 69, “Project analysis engine 120 can generate a disruption metric 122 comprising mean square derivative (MSD) value of fourth order time derivative of one or more project metrics 112 of a program, wherein mean square derivative (MSD) value is a sum of squares of periodic efficiency values or snaps observed over a number of time periods.
MSD value can be computed by calculating fourth order time derivatives of a project metric at periodic time intervals (such as daily) of a defined duration of project execution (such as 1 month) and then calculating mean of the derivative values. MSD value can be stored in the disruption metric 122 along with disruption and efficiency indices. Efficiency index can also be configured to include an absolute value derived from the fourth order time derivative of one or more project metrics 112. Such absolute values have already been illustrated in FIG. 2B. In another implementation, efficiency index can include cumulative efficiency or inefficiency over a defined period of time, wherein the cumulative efficiency can include a sum of all absolute values derived from the fourth order time derivatives for the defined period.”; Paragraph 70, “FIG. 3A represent an S-curve 315 with respect to a project metric 112 such as man-hours and time. Any other suitable project metric based on output, cost, scope of work, procurement, logistics, risk, communication, human resource plan, quality, or number of activities can also be incorporated. S-curve 315 is an S-shaped graph produced by Sigmoid formula which calculates the cumulative expenditure of certain project metrics against time. S-curve 315 is typically used against estimates such as projections or budgets on such project metrics 112. The projected curve 310 represents the projections of total number of man-hours planned to be used over the period of 100 days of the project. For simplicity, the projected curve 310 has been shown with a slope of 1, with 10% of the work intended to be covered in 10 days and 100% of the work intended to be covered in 100 days. As can be seen, the S-curve 315 typically represents a slow start and a slow finish with varying acceleration in between, wherein the man-hours consumed are lower than projected in the first half and higher in the second half. 
For example, after 30 days the number of man-hours to be consumed is planned to be 30 and the actual consumption is 15. Similarly, after 90 days, the number of man-hours to be consumed is planned to be 90 and the actual consumption is 98.”; Paragraph 71, “FIG. 3B represents a graphical representation 350 illustrating jerks or disruptions for a short time periods within the project shown in FIG. 3A. Representation 350 shows the third order derivatives of the man-hours project metric 112 with respect to the actual progress curve 315 and illustrates the values of third order time derivatives of the metric 112 for the first 10 days of the project. As a can be seen, the disruption index or jerk can have actual values from negative to positive, wherein the negative disruptions can indicate negative jerks (unanticipated or unexpected) and positive disruptions are expected jerks representing start of new activities, allocation of new man-hours, among others. Furthermore, magnitude of disruption index, as shown on Y axis can indicate the level of disruption, which can later be compared with defined thresholds, so as to take necessary action such as allocating additional man-hours to complete the activity at hand. For example, in case -1 to +1 is the defined threshold for third order time derivatives of man-hours project metric, disruption due to activities undertaken on Day 4 of the program is higher than the defined threshold and therefore such activities along with remedial measures can be reported to the project team on Day 4 itself. It should also be noted that, in the present illustration of FIG. 3B, the frequency of disruption is more in the middle of the time duration of 10 days and highest around the 5'th day, whereas the disruption is relatively lower in the beginning and in the end. 
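The MSD computation quoted above from Paragraph 69 — fourth-order time derivatives (snap) squared over a window — might look like the following in outline. The daily data is hypothetical; note that the quoted passage describes MSD both as a "sum of squares" and as a "mean of the derivative values", and this sketch uses the mean:

```python
def diff(xs):
    """First-order finite difference of a series."""
    return [b - a for a, b in zip(xs, xs[1:])]

# Hypothetical daily work-completion percentages (not from the record).
work_pct = [0, 1, 3, 4, 7, 9, 10, 14, 15]

# Fourth-order derivative: Prieto's "snap" / efficiency index per day.
snap = diff(diff(diff(diff(work_pct))))

# Mean square derivative (MSD) over the window, per the mean reading of Para. 69.
msd = sum(v * v for v in snap) / len(snap)
```

A large MSD over a window indicates frequent or severe efficiency swings, which the engine compares against thresholds.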
However, such a representation is purely project or program specific and can vary.”)

and calculating a corrected evaluation index which is obtained by correcting the progress evaluation index based on the changed resources, wherein the prediction of the data of the progress performance is performed based on the corrected evaluation index. (Prieto: Paragraph 74, “The system 100 can further be configured to generate signatures based on certain trends, types, or magnitudes of disruptions, efficiencies, or inefficiencies and store such signatures in program database 110. Such signatures can be generated based on experience of subject matter experts on anticipated disruptions or efficiencies, previous signatures in same or other programs, or common types of jerks or snaps that are generated in a particular type of project. The signatures can either be stored in the database 110 in an encoded format or any other known format, which can help their efficient retrieval or processing by the program analysis engine 120. The signatures can also be updated or new signatures can be formed as the program proceeds over a period time, wherein such update or formation can be based on earlier disruptions detected in the same program, new learning from other parts of the program or from other projects in the same program. The signatures can be configured to store disruption or efficiency characteristics along with their possible reasons and resolutions. The database 110 can be configured to classify the signatures based on whether they relate to efficiencies or disruptions, type of program or project, project metrics involved, % of confidence in suggested resolution or other parameters.”; Paragraph 76, “FIG. 4 presents a method 400 for program management to identify disruptions or efficiencies in a program or project with respect to one or a combination of project metrics of the program.
Step 410 can include providing access to a project database that stores project metric objects of one or more project metrics. The project metrics can include manpower; cost incurred; resources utilized; logistics; schedule; test; inspection work, project, program completion percentage; personnel usage; email communication; earned value; man-hour usage; or other program related metrics; wherein the project metric objects can include data related to the project metrics obtained at a particular time period.”; Paragraph 78, “Step 430 includes the analysis engine deriving a disruption metric based on at least a third order time derivative of a project metric that is stored in the project metric database. The selected project metric of the program can be analyzed by reading project metric data over a time period and calculating a third order time derivative of the project metric at one or more time points to assess whether a jerk has happened in the program at any of those time points. Each jerk can represent a disruption (desired or undesired) that occurs the program. The disruption metric can be generated as a function of the third order time derivative of the project metric, and can be represented as f(d.sup.3PM/dt.sup.3), where PM represents the selected project metric. It should be appreciated that although the disruption metric has been described in the present disclosure as a function of the third order time derivative of the project metric, which indicates an indirect computation of the metric from the project metric, the disruption metric can also be directly implemented as equal in value to the third order time derivative of the project metric rather than being a function thereof. Alternatively, based on the project metric involved and program characteristics, the disruption metric can also be computed as a third order time derivative including a constant or offset value. 
Furthermore, the disruption metric can also represent multiple third order time derivatives of the project metric across different time periods and store them together as part of the metric.”)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 4 and 6-13 is/are rejected under 35 U.S.C.
103 as being unpatentable over Prieto (US 2014/0052489 A1) in view of Mosca (US 2019/0377602 A1).

Claim(s) 4 – Prieto discloses the limitations of claim 1. Prieto does not explicitly disclose the following; however, in the analogous art of project management, Mosca discloses the following: wherein the data of the progress plan is determined by a first machine learning model, and the first machine learning model is a model that has learned a relationship between a progress level of a step in a past project and progress performance including an execution timing corresponding thereto. (Mosca: Paragraph 73, “The method may be one including the computer-implemented step of using an inference model, trained using a machine learning algorithm, that trains parameters to characterise probability distributions of the associated outcomes of the scheduled tasks based on the tabulated schedule features.”; Paragraph 166, “An inference model, trained using a machine learning algorithm, that trains parameters to characterise probability distributions of the outcomes of schedule tasks based on the features tabulated in the previous step. [0167] A simulator that uses the probability distributions predicted by the inference model to simulate the outcomes of sequences of tasks in order to simulate progression of schedules over time and construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely impact upon schedule progression.”)

Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction project using machine learning models.
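The simulator described in Mosca's quoted Paragraphs 166-167 — sampling task-outcome distributions to build probability distributions for project completion — can be sketched in miniature. The tasks and normal distributions below are chosen by the editor for illustration; in Mosca's system the trained inference model, not hand-picked parameters, would supply the distributions:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical sequential tasks: (mean duration in days, standard deviation).
tasks = [(10.0, 2.0), (20.0, 5.0), (15.0, 4.0)]

def simulate_schedule() -> float:
    """One simulated pass through the task sequence (durations floored at 0)."""
    return sum(max(0.0, random.gauss(mu, sigma)) for mu, sigma in tasks)

# Build an empirical completion-date distribution from many simulated runs.
completions = sorted(simulate_schedule() for _ in range(10_000))
p50, p90 = completions[5_000], completions[9_000]
# p50 sits near the 45-day planned total; p90 gives a hedged commitment date.
```

Quantiles of `completions` play the role of Mosca's "probability distributions for schedule milestones and project completion."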
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”).

Claim(s) 6 – Prieto discloses the limitations of claims 1 and 5. Prieto does not explicitly disclose the following; however, in the analogous art of project management, Mosca discloses the following: wherein the resources necessary for execution of the step are determined by a second machine learning model, and the second machine learning model is a model that has learned a relationship between resources inputted into a step in a past project and a work period of the step after the resources are inputted. (Mosca: Paragraph 168, “The input data comprises completed project schedules containing activities with their associated features and outcomes, where known. Using the input reader, the system reads and processes task fields T.sub.0 . . . T.sub.n that may include among others:”; Paragraph 174, “Relationships (links) to other tasks.”; Paragraph 180, “The transformation process may be learned from data of historical schedules, by presenting data from multiple tasks (optionally, presenting data from multiple tasks at the same time), such that the model can learn information about the typical surrounding context of each task, and how to represent it with respect to its peers. The model used to learn such representations is a deep autoencoder, where the first half is preserved after training to create the transformation. A surrounding context is defined as all other tasks in the same schedule that have a temporal or dependent connection with the task being learned.”)

Prieto discloses a method of determining progress and predictions of a project based on various acquired data values.
Mosca discloses a method for determining the status of a construction using various machine learning models. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 7 – Prieto teaches the limitations of claim 1 Prieto discloses the following limitations: acquiring an influence evaluation model that represents influence of relationship between data regarding each first milestone in the data of the progress performance of the first step and data regarding the corresponding second milestone in the data of the progress performance of the second step on the progress evaluation index of the second step; (Prieto: Paragraph 37, “The disruption metric 122 can include a disruption index, which can be derived from the third order time derivative of the project metric, and can represent the magnitude or severity of jerk or disruption that occurs in the project in view of the project metric. Disruption metric 122 can include one or more disruption indices, wherein each disruption index is derived from one or more project metrics. Multiple disruption indices representing higher order time derivatives of multiple project metrics can be grouped, associated, averaged, or correlated for a particular time window, for better assessment and evaluation of the disruption caused. The time window can either be user defined or configurable, or can be set automatically by the program analysis engine 120. Disruption indices can also be compared with defined thresholds, which may or may not be project metric specific, to help understand the extent to disruption. 
Although the disruption index is derived from a project metric and represents the magnitude of disruption, a deeper analysis of the disruption index can result in understanding when the disruption or sudden jerk took place, the duration for which the disruption existed, among other desirable attributes of the detected disruption. The disruption indices can be considered a function of the third order derivatives. Therefore, a single disruption metric 122 can give rise to one or more disruption indices depending on the desired analysis or analytical value.”; Paragraph 48, “Disruption metric 122 can include one or more efficiency indices, wherein each efficiency index is derived from one or more project metrics. Multiple efficiency indices, representing fourth order time derivatives of multiple project metrics, can also be grouped, associated, averaged, or correlated for a particular time window, for better assessment and evaluation of the efficiency in the project. The time window can either be user defined or configurable or can be set automatically by the program analysis engine 120. Efficiency indices can also be compared with defined thresholds, which may or may not be project metric specific, to help understand the extent to efficiency. Although the efficiency index is derived from a project metric and represents the magnitude of efficiency, a deeper analysis of the efficiency index can result in understanding the event responsible for affecting the efficiency of the project, the duration for which the event existed, among other desirable attributes of the detected event.”; Paragraph 49, “Efficiency indexes and disruption indexes for multiple project metrics can also be combined in a linear equation with separate or similar weights for each efficiency index and each disruption index so as to take a computationally faster decision on the suggestions to be given to the program team to handle the disruption. 
The weights can either be predefined based on the relative importance of each project metric under consideration or can be user configured during run-time assessment of the disruption and extent of efficiency.”) calculating a corrected evaluation index which is obtained by correcting the progress evaluation index of the second step based on the influence evaluation model at the predetermined time point; and (Prieto: Paragraph 66, “Project analysis engine 120 can be configured to generate recommendations to a project manager to manage disruption metric 122 in a program based on comparison between projected curve 210 and an actual curve 215, wherein the recommendations can comprise actions to minimize disruptions or maximize snaps in the program. The project analysis engine 120 can also help in optimizing disruption metric 122 of one or more project metrics 112 based on needs of the program manager. In some embodiments, analysis engine 120 can compare characteristics of disruption or efficiency indices (e.g., jerk and snap respectively) to known event signatures. When the characteristics suitably match such signatures, analysis engine 120 can construct a notification or recommendation on corrective actions as derived from information stored within or associated with the known event signatures.”; Paragraph 74, “The system 100 can further be configured to generate signatures based on certain trends, types, or magnitudes of disruptions, efficiencies, or inefficiencies and store such signatures in program database 110. Such signatures can be generated based on experience of subject matter experts on anticipated disruptions or efficiencies, previous signatures in same or other programs, or common types of jerks or snaps that are generated in a particular type of project. The signatures can either be stored in the database 110 in an encoded format or any other known format, which can help their efficient retrieval or processing by the program analysis engine 120. 
The signatures can also be updated or new signatures can be formed as the program proceeds over a period time, wherein such update or formation can be based on earlier disruptions detected in the same program, new learning from other parts of the program or from other projects in the same program. The signatures can be configured to store disruption or efficiency characteristics along with their possible reasons and resolutions. The database 110 can be configured to classify the signatures based on whether they relate to efficiencies or disruptions, type of program or project, project metrics involved, % of confidence in suggested resolution or other parameters.”) Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: wherein the project includes, as the at least one step, a preceding first step and a subsequent second step, the progress prediction method comprising: (Mosca: Paragraph 218, “With predictions made on a project, as the project enters a phase of construction (or execution), the status of the schedule is measured—actual delivery rates are logged in updated iterations of a schedule. With the actual rate of progress, and our simulated results from Monte Carlo, projections are made to estimate the future progress of projects. The rate of actual progress of the project (up to the actual completion date) is plotted against the rate of planned progress (to the planned completion date), while the outcomes of our predictions are plotted on other curves and are configurable by a user to define a specific probability likelihood. An example is shown in FIG. 11. In FIG. 11, P25 is the projection completion (%) curve for a 25% probability, P50 is the projection completion (%) curve for a 50% probability, and P98 is the projection completion (%) curve for a 98% probability. 
The user interface is arranged to receive a user selection of a probability value, and to display the projection completion (%) curve corresponding to the selected probability value. Selection of the probability value may be performed by selecting an icon eg. a “+ Add” icon.”; Paragraph 219, “With predictions made for pre-construction and execution phase projects, we are able to estimate the risk distribution of a portfolio of projects. By bucketing the likely duration of an entire project, we are able to estimate how likely a project will be to finish in defined time ranges. Using a Heatmap-style graphical representation, the y-axis indicates the project, and the x-axis shows a range of project completion outcomes. The opacity of each tile, or a numerical value associated with each tile, reflects how likely the outcome is to occur. FIG. 12 shows an example of a heatmap-style graphical representation of how likely a project will be to finish in defined time ranges.”) acquiring information of at least one first milestone in the first step and information of a second milestone corresponding to each first milestone in terms of timing in the second step; (Mosca: Paragraph 74, “The method may be one wherein the predicted duration times include predicted probability distributions, the method including using the predicted probability distributions to simulate the outcomes of sequences of tasks in order to simulate progression of the schedule over time and to construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely impact upon schedule progression.”; Paragraph 167, “A simulator that uses the probability distributions predicted by the inference model to simulate the outcomes of sequences of tasks in order to simulate progression of schedules over time and construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely 
impact upon schedule progression.”) in prediction of the data of the progress performance regarding the second step, predicting, based on the corrected evaluation index, the data of the progress performance after the predetermined time point in the second step. (Mosca: Paragraph 74, “The method may be one wherein the predicted duration times include predicted probability distributions, the method including using the predicted probability distributions to simulate the outcomes of sequences of tasks in order to simulate progression of the schedule over time and to construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely impact upon schedule progression.”; Paragraph 167, “A simulator that uses the probability distributions predicted by the inference model to simulate the outcomes of sequences of tasks in order to simulate progression of schedules over time and construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely impact upon schedule progression.”; Paragraph 204, “This process can then be improved further by looking at the aggregate of multiple simulations, and obtaining a training set of final impact of each task—therefore enabling the creation of an impact estimator network. This is useful if, for example, a schedule does not correctly encode all the relevant logical dependencies between tasks.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. 
At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 8 – Prieto in view of Mosca disclose the limitations of claims 1 and 7 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: comprising accumulating the data of the progress performance of each of multiple completed projects, wherein the first milestone and the second milestone are determined based on the data of the progress performance that has been accumulated. (Mosca: Paragraph 74, “The method may be one wherein the predicted duration times include predicted probability distributions, the method including using the predicted probability distributions to simulate the outcomes of sequences of tasks in order to simulate progression of the schedule over time and to construct probability distributions for schedule milestones and project completion as well as to evaluate risk factors that may adversely impact upon schedule progression.”; Paragraph 87, “The method may be one wherein, for a partially completed construction, a new schedule is generated, including using data relating to the progress of the partially completed construction. An advantage is that an improved schedule is provided, because the improved schedule takes into account the progress of the partially completed construction.”; Paragraph 156, “FIG. 
11 shows an example in which the actual progress of the project (up to the actual completion date) is plotted against the planned progress (to the planned completion date), while the outcomes of predictions are plotted on other curves and are configurable by a user to define a respective specific probability likelihood.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 9 – Prieto in view of Mosca disclose the limitations of claims 1 and 7-8 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: wherein each first milestone and each second milestone are determined based on the data of the progress performance of a project with a shortest work period of the second step among the multiple completed projects. 
(Mosca: Paragraph 75, “The method may be one wherein, in step (ii), the transformation process is learned from data of historical schedules, by presenting data from multiple tasks (optionally, presenting data from multiple tasks at the same time), such that the model learns about the typical surrounding context of each task, and how to represent it with respect to its peers.”; Paragraph 180, “The transformation process may be learned from data of historical schedules, by presenting data from multiple tasks (optionally, presenting data from multiple tasks at the same time), such that the model can learn information about the typical surrounding context of each task, and how to represent it with respect to its peers. The model used to learn such representations is a deep autoencoder, where the first half is preserved after training to create the transformation. A surrounding context is defined as all other tasks in the same schedule that have a temporal or dependent connection with the task being learned.”; Paragraph 210, “The architecture can be modified to predict a single outcome vector r from task sequences by training a recurrent neural network, treating its output as a latent state vector H.sub.a. This vector is passed as an input to a downstream sub-architecture, that may comprise of one or more dense layers. The composite recurrent neural network architecture is trained on historical data and optimised to characterise the probability density function p(r|S.sub.a), which can then be used to predict specific outcomes on the basis of prospective data.”; Paragraph 361, “The resulting gradient for each parameter is then used to calculate the updated parameter from the previous values using an optimizer function (e.g. a gradient descent type optimiser function). The input to the optimiser function for each parameter is the previous value, the corresponding gradient value and a learning rate parameter. 
In general, gradient descent based optimizers update the parameter in the direction of steepest descent of the loss function with respect to the parameter, scaled by a learning rate. The parameters are replaced with the new values and the process iterates with another batch of training examples.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 10 – Prieto in view of Mosca disclose the limitations of claims 1 and 7 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: comprising accumulating the data of the progress performance of each of multiple completed projects, wherein the influence evaluation model is determined based on the data of the progress performance that has been accumulated. 
(Mosca: Paragraph 75, “The method may be one wherein, in step (ii), the transformation process is learned from data of historical schedules, by presenting data from multiple tasks (optionally, presenting data from multiple tasks at the same time), such that the model learns about the typical surrounding context of each task, and how to represent it with respect to its peers.”; Paragraph 180, “The transformation process may be learned from data of historical schedules, by presenting data from multiple tasks (optionally, presenting data from multiple tasks at the same time), such that the model can learn information about the typical surrounding context of each task, and how to represent it with respect to its peers. The model used to learn such representations is a deep autoencoder, where the first half is preserved after training to create the transformation. A surrounding context is defined as all other tasks in the same schedule that have a temporal or dependent connection with the task being learned.”; Paragraph 210, “The architecture can be modified to predict a single outcome vector r from task sequences by training a recurrent neural network, treating its output as a latent state vector H.sub.a. This vector is passed as an input to a downstream sub-architecture, that may comprise of one or more dense layers. The composite recurrent neural network architecture is trained on historical data and optimised to characterise the probability density function p(r|S.sub.a), which can then be used to predict specific outcomes on the basis of prospective data.”; Paragraph 361, “The resulting gradient for each parameter is then used to calculate the updated parameter from the previous values using an optimizer function (e.g. a gradient descent type optimiser function). The input to the optimiser function for each parameter is the previous value, the corresponding gradient value and a learning rate parameter. 
In general, gradient descent based optimizers update the parameter in the direction of steepest descent of the loss function with respect to the parameter, scaled by a learning rate. The parameters are replaced with the new values and the process iterates with another batch of training examples.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 11 – Prieto in view of Mosca disclose the limitations of claims 1, 7, and 10 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: wherein the influence evaluation model is a third machine learning model that has learned a relationship between the data of the progress performance that has been accumulated and the progress evaluation index of the second step. (Mosca: Paragraph 207, “A sequence of tasks may be represented as a sequence of vector-encodings, or therefore as a bidimensional matrix. While each of these vector-encodings can be treated independently, they are indexed and therefore it is possible to evaluate any conditional statistical dependencies on one task by its neighbours. When learning representations from historical data, the direction of time for learning these statistical dependencies can be in either direction.”; Paragraph 208, “If n tasks T.sub.a1, T.sub.a2, . . . 
, T.sub.an, belong to an indexed matrix S.sub.a within a schedule, any statistical dependencies between the tasks are captured using recurrent neural network architectures, such as long short-term memory unit or gated recurrent unit variants. While the vectors T.sub.a1, T.sub.a2, . . . , T.sub.an would comprise the inputs to the recurrent neural network, the outputs can be trained according to the chosen optimisation problem. If associated task outcomes are R.sub.a1, R.sub.a2, . . . , R.sub.an are used, then the network is trained to predict outcomes from sequences. If neighbouring task vectors are used then the network (for example T.sub.a1+1, T.sub.a2+1, . . . , T.sub.an+1, using an offset of 1) is trained to predict past or future sequences.”; Paragraph 357, “Each task in each schedule in the training data set is first run through the complete auto-encoder, and the output used to update the trainable parameters in the auto-encoder. This update may done one task or schedule at a time, or in any kind of batch system for example. Once trained, the first half (encoder) is then preserved and used to create the transformation to the task vector when training the second algorithm (and during operation). The part within the dashed line in FIG. 7(a) is an example of a part retained to transform tasks during the inference stage of the machine learning process and during operation. The auto-encoder is thus used to create the embedded space transformation of a task.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. 
At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 12 – Prieto in view of Mosca disclose the limitations of claims 1 and 7 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: wherein the corrected evaluation index is calculated based on a first milestone period from the reference time point to each first milestone, a second milestone period from the reference time point to each second milestone, and a difference between each first milestone period and each second milestone period corresponding thereto. (Mosca: Paragraph 205, “For projects currently in execution, additionally to learning from the history of past projects, the learning can be extended to schedules with information about the partial completion of the project. The aggregate difference for tasks that appear in both updates (a and b) can be used to estimate the additional risk added to the project.”; Paragraph 213, “Given a description of a potential task, it is possible to infer its most likely vector representation, even in the presence of gaps. This is achieved by initially substituting the missing inputs with null-values, and obtaining an initial vector representations T.sub.x. A neighbourhood, limited by a distance d, is defined around T.sub.x, to obtain a collection of tasks T.sub.i . . . T.sub.n such that |T.sub.x−T.sub.i|<d. The collection of tasks in the neighbourhood is then used to infer the correct values for the missing null-values in the original vector, by use of the mean or other statistical methods.”; Paragraph 303, “The next step S203 is referred to as the task transformation stage. FIGS. 
3(a) and 3(b) show the transformation of the task information comprising the extracted task features into the task vector in more detail. In the task transformation stage, a translator module converts tasks from different schedule contexts into a single multi-dimensional vector space.”; Paragraph 360, “The auto-encoder output vectors may be used to determine a loss, using the input vectors (or part of the input vectors) as ground truths. For each task the gradient of the loss with respect to each of the trainable parameters of the auto-encoder neural network (i.e. the weights and biases) can be determined through back-propagation, and used to update the parameters. A negative log-likelihood loss function may be used for example. The tasks may be inputted in batches, for example a schedule at a time or any other batch size, and the update performed for each batch. Every operation performed in the forward pass of the auto-encoder is differentiable and therefore a functional expression for the derivative of the loss with respect to each parameter can be determined by the chain rule. The gradient values are calculated from these expressions using the back-propagated error and the activations (inputs for each layer from the forward pass, cached during the forward pass). This results in an array of gradient values, each corresponding to a parameter, for each task in the batch. These are converted to a single gradient value for each parameter (for example by taking the average of the gradient values for all tasks for the particular parameter in the batch).”; Paragraph 374, “The output vectors (i.e. comprising the probability values generated by the second algorithm) may be used to determine a loss, where the outcome vectors (the “1 versus all” representation extracted directly from the actual data relating to the completed project) are used as ground truths. The tasks may be inputted in batches and the update performed per batch. 
For each task inputted in a batch, the gradient of the loss with respect to each of the trainable parameters in the second algorithm (e.g. the weights and biases) can be determined through back-propagation, and used to update the parameters. A cross entropy loss function may be used for example. This results in an array of gradient values, each corresponding to a parameter, for each task in the batch. These are converted to a single gradient value for each parameter (for example by taking the average of the gradient values for all tasks for the particular parameter). The resulting gradient for each parameter is then used to calculate the updated parameter from the previous values using an optimizer function (e.g. a gradient descent type optimiser function). The parameters are replaced with the new values and the process iterates with another batch of training signals.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. 
At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Claim(s) 13 – Prieto in view of Mosca disclose the limitations of claims 1 and 7 Prieto does not explicitly disclose the following, however, in analogous art of project management, Mosca teaches the following: wherein in a case where a delay has occurred in the timing of the first milestone in the data of the progress performance of the first step, the timing of the corresponding second milestone in the data of the progress plan of the second step is changed to after the timing of the first milestone in which the delay has occurred. (Mosca: Paragraph 197, “The following procedure describes how to accumulate the impact of a single task on the end date of the schedule's project.”; Paragraph 200, “b) Apply all constraints and recalculate the end date of the project based on the new end date of the task at hand, assuming all other tasks with no delay”; Paragraph 201, “c) Record the delay caused to the whole project for the task at hand”; Paragraph 202, “2) The cumulative record of all end date impacts represents a distribution of impact that the task will have on the end date of the project.”; Paragraph 217, “With our predictions made on a range of schedules, we are able to make aggregated estimations of an entire portfolio of projects. We are able to communicate the projects which are predicted to cause significant delays; highlight specific milestones for the attention of project directors and board-level executives; and make projections about how project execution may manifest in the future.”; Paragraph 332, “Next, the projected end date of the schedule is recalculated in S206. 
A distribution of possible outcomes for the schedule is generated from the plurality of durations corresponding to each task in the schedule output from S205. In this step, the distribution of the final schedule outcome is generated, by applying the new sampled durations to the entire schedule, with constraints. The constraints may comprise the schedule order (e.g. maintaining the predecessor/successor relationships), for example if a task is delayed, the start date of a successor task will have to be moved forward.”) Prieto discloses a method of determining progress and predictions of a project based on various acquired data values. Mosca discloses a method for determining the status of a construction using various machine learning models. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Prieto with the teachings of Mosca in order to improve the accuracy of construction project scheduling as disclosed by Mosca (Mosca: Paragraph 5, “There is a continuing need to improve the accuracy of construction project scheduling.”) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
Saha (US 2022/0383231 A1) discloses a method for construction-site-based status generation based on physical relationships.
Kar (US 2023/01926291 A1) discloses a method for monitoring projects and providing alerts.
Lance (US 2015/0012324 A1) discloses a project management system.
Hsieh (US 2023/0351283 A1) discloses a method for an intelligent project optimization experience.
Sakaki (US 2016/0004583 A1) discloses a project management system based on non-function evaluations.
Van Velzen (US 2015/0051932 A1) discloses a concurrency-based project management system.
Draperi (US 2007/0016461 A1) discloses a method for scheduling and control of a project.
Maschke (US 2004/0078096 A1) discloses a method for determining a graphic representation of a project.
Tanaka (US 2025/0190904 A1) discloses a management apparatus for a project.
Vigoda (US 2018/0109574 A1) discloses a machine learning collaboration system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner, whose telephone number is (571) 270-7407. The examiner can normally be reached Monday-Friday, 7:00 am-4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Philip N Warner/
Examiner, Art Unit 3624

/Jerry O'Connor/
Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

May 01, 2025
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103
Mar 13, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596974
MULTI-LAYER ABRASIVE TOOLS FOR CONCRETE SURFACE PROCESSING
2y 5m to grant Granted Apr 07, 2026
Patent 12596984
INFORMATION GENERATION APPARATUS, INFORMATION GENERATION METHOD AND PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12579490
GENERATING SUGGESTIONS WITHIN A DATA INTEGRATION SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12567011
BATTERY LEDGER MANAGEMENT SYSTEM AND METHOD OF BATTERY LEDGER MANAGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12493819
UTILIZING MACHINE LEARNING MODELS TO GENERATE INITIATIVE PLANS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
36%
Grant Probability
65%
With Interview (+28.6%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
