Prosecution Insights
Last updated: April 19, 2026
Application No. 18/846,964

MAN-HOUR INPUT PLAN GENERATION SYSTEM AND MAN-HOUR INPUT PLAN GENERATION METHOD

Status: Final Rejection (§101, §103)
Filed: Sep 13, 2024
Examiner: PUJOLS-CRUZ, MARJORIE
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Hitachi Astemo, Ltd.
OA Round: 2 (Final)

Grant probability: 18% (At Risk) • Expected OA rounds: 3-4 • Expected time to grant: 3y 2m • Grant probability with interview: 46%

Examiner Intelligence

Career allowance rate: 18% (25 granted / 136 resolved; -33.6% vs. TC average)
Interview lift: +27.9% higher allowance rate in resolved cases with an interview than without
Typical timeline: 3y 2m average prosecution
Caseload: 50 applications currently pending; 186 total applications across all art units

Statute-Specific Performance

§101: 38.7% (-1.3% vs. TC avg)
§103: 43.3% (+3.3% vs. TC avg)
§102: 9.4% (-30.6% vs. TC avg)
§112: 6.6% (-33.4% vs. TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 136 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This communication is a Final Office Action rejection on the merits. Claims 1, 3-8, and 10-14 are currently pending and are addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement (IDS)

The information disclosure statement filed on 09/13/2024 complies with the provisions of 37 CFR 1.97, 1.98, and MPEP 609 and has been considered by the Examiner.

Response to Arguments

Applicant's arguments filed on 02/16/2026 (related to the §103 rejection) have been fully considered but are moot in view of the new grounds of rejection. Applicant's amendments necessitated the new ground(s) of rejection presented in this Office action. A rejection based on newly cited reference(s) follows.

Applicant's arguments filed on 02/16/2026 (related to the §101 rejection) have been fully considered, but they are not persuasive. Applicant states, on pages 7-9, that the specification explains that "a time-series man-hour input amount which associates the man-hour input amount and the input time thereof with each other affects the total man-hour effort, but ... the total man-hour effort is not predicted through use of the time-series man-hour input amount, and hence, there is room for increasing the accuracy in this respect," in paragraph [0005]. That is, prior systems suffer from data-modeling deficiencies and computational processing challenges, resulting in an inability to, for instance, ingest and process time-series man-hour allocation data as structured input, improve prediction accuracy, and dynamically update predictions while editing the input plan. As such, the specification provides "a man-hour input plan generation system ... including an input section ...
[that] receives input of a man-hour input plan directed to a project, an analysis section ... [that] analyzes the input man-hour input plan input from the calculation device, and an output section ... [that] outputs an analysis result obtained by the analysis section," in paragraph [0007]. In particular, the specification explains that "[t]he analysis section collates the input time-series data with a plurality of man-hour input patterns stored in advance ... extracts a past project common in a characteristic to the identified man-hour input pattern ... and predicts at least one of a total man-hour effort and a quality of the man-hour input plan on a basis of performance of the extracted past project," in paragraphs [0034]-[0036], and that "[t]he man-hour input plan input section 101 provides the GUI... the user operates, in an upward or downward direction, an edit point indicating an input man-hour effort for each period, thereby allowing the man-hour input amount to be edited ... the analysis section 102 executes the analysis, thereby updating the display content of the output section 103," in paragraphs [0013] and [0014]. Additional examples are also provided in the specification. For at least the foregoing reasons, Applicant respectfully requests withdrawal of the rejection of claim 1 under 35 U.S.C. § 101.

Examiner respectfully disagrees with Applicant. These claim elements are still considered to be abstract ideas because they are directed to “mathematical concepts” which include “mathematical calculations.” In this case, “predicting at least one of a total man-hour effort and a quality metric based on the performance data of the at least one past project” recites mathematical calculations (see MPEP 2106.04(a)(2)). If a claim limitation, under its broadest reasonable interpretation, covers mathematical calculations, then it falls within the “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
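The analysis pipeline quoted from paragraphs [0034]-[0036] above (collate a plan's time-series with stored shape patterns, extract a past project sharing the matched pattern, predict effort from its performance) can be sketched in a few lines. This is an editorial illustration only: all pattern names, past projects, and numbers below are hypothetical, not the application's actual implementation.

```python
# Hypothetical sketch of the claimed pipeline: classify a man-hour input
# plan's time-series into a predefined shape pattern, retrieve past projects
# tagged with that pattern, and predict total effort from their performance.

# Predefined shape patterns: normalized distributions of man-hour input over time.
SHAPE_PATTERNS = {
    "front-loaded": [0.4, 0.3, 0.2, 0.1],
    "flat":         [0.25, 0.25, 0.25, 0.25],
    "back-loaded":  [0.1, 0.2, 0.3, 0.4],
}

# Past projects, each associated with a shape pattern and performance data.
PAST_PROJECTS = [
    {"name": "P1", "pattern": "front-loaded", "total_effort": 1200},
    {"name": "P2", "pattern": "front-loaded", "total_effort": 1000},
    {"name": "P3", "pattern": "back-loaded",  "total_effort": 1800},
]

def classify_shape(plan):
    """Collate the plan's time-series with the patterns; return the closest one."""
    total = float(sum(plan))
    norm = [v / total for v in plan]  # normalize so only the shape matters
    def distance(name):
        return sum((a - b) ** 2 for a, b in zip(norm, SHAPE_PATTERNS[name]))
    return min(SHAPE_PATTERNS, key=distance)

def predict_total_effort(plan):
    """Retrieve past projects whose pattern matches; average their actual effort."""
    shape = classify_shape(plan)
    matches = [p for p in PAST_PROJECTS if p["pattern"] == shape]
    if not matches:
        return None
    return sum(p["total_effort"] for p in matches) / len(matches)

plan = [400, 300, 200, 100]  # man-hours per period: a front-loaded plan
print(classify_shape(plan))        # front-loaded
print(predict_total_effort(plan))  # 1100.0
```

Under this reading, the dispute at Step 2A is whether such a pipeline is merely a mathematical calculation or an improvement to prediction technology.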
The additional elements recited in claim 1 are merely used to: collect data (e.g., a man-hour input plan and past projects) and analyze the data (e.g., retrieve at least one past project from the past projects based on a match and predict at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan). Those are functions that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Thus, the claim is not patent eligible.

Independent claim 8 recites similar features and is therefore rejected for the same reasons as independent claim 1. Claims 3-7 and 10-14 are rejected for having the same deficiencies as those set forth with respect to the claims from which they depend, independent claims 1 and 8.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, and 10-14 are rejected under 35 U.S.C.
101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - Pursuant to Step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to an apparatus, which is a statutory category.

Step 2A, Prong One - Claim 1 recites: A system comprising to: store predefined shape patterns representing distributions of man-hour input over time for respective man-hour input plans and past projects, each associated with a shape pattern for a corresponding man-hour input plan and performance data; receive a man-hour input plan directed to a project, the man-hour input plan comprising time-series data associating a man-hour input amount with a corresponding time period; collate the time-series data with the predefined shape patterns to classify the time-series data into a shape corresponding to one of the predefined shape patterns; retrieve at least one past project from the past projects based on a match between the shape pattern of the at least one past project and the shape classified for the time-series data; and predict, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan.

These claim elements are considered to be abstract ideas because they are directed to “mathematical concepts” which include “mathematical calculations.” In this case, the limitations of “identifying man-hour input pattern from among past projects” and “predicting at least one of a total man-hour effort and a quality of the man-hour input plan on a basis of performance of the extracted past project” recite mathematical calculations.
If a claim limitation, under its broadest reasonable interpretation, covers mathematical calculations, then it falls within the “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: a processor and a storage device. The processor is merely used to implement the project total man-hour effort/quality prediction simulation system (Paragraph 0030). The storage device is merely used to store programs and data used when the programs are executed (Paragraph 0015). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “processor” and “storage device” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. Also, the storage device is considered “field of use” since it is merely used to store and provide input information for an analysis, but the technology is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of predicting, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan.
The specification shows that the processor is merely used to implement the project total man-hour effort/quality prediction simulation system (Paragraph 0030). The storage device is merely used to store programs and data used when the programs are executed (Paragraph 0015). Also, the storage device performs only well-understood, routine, and conventional functions, namely “receiving or transmitting data over a network” and “storing information in a memory” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent claim 8 is directed to a method at Step 1, which is a statutory category. Claim 8 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Thus, the claim is ineligible.

Dependent claims 3-4, 6, 10-11, and 13 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above - such as to: generate ideal time-series data based on the characteristic data; and generate the ideal time-series data based on the milestone data in the at least one past project. At Step 2A, Prong Two, this is still considered “field of use” since it is merely used to receive additional data for an analysis, but the technology is not improved (MPEP 2106.05(h)). At Step 2B, this is considered a conventional computer function of “receiving and transmitting over a network” and “performing repetitive calculations” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Dependent claims 5, 7, 12, and 14 are directed to an additional element: a GUI. The GUI is merely used to edit the original plan and edit the simulated man-hour input plan by the user (Paragraphs 0013-0014).
At Step 2A, Prong Two, the GUI is considered “field of use,” as it is merely used to receive an adjustment of the man-hour plan by a user, but the GUI itself is not improved (MPEP 2106.05(h)). At Step 2B, instructions to display and/or arrange information in a graphical user interface may not be sufficient to show an improvement in computer functionality (MPEP 2106.05(a)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 6, 8, 10-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Cantor et al. (US 2013/0325763 A1), in view of Meharwade et al. (US 2019/0122153 A1).

Regarding claim 1 (Currently Amended), Cantor et al.
discloses a system comprising: at least one processor to (Paragraph 0209, The components of computer system may include, but are not limited to, one or more processors or processing units 12, a system memory 16, and a bus 14 that couples various system components including system memory 16 to processor 12): store, in a storage device, predefined shape patterns representing distributions of man-hour input over time for respective man-hour input plans and past projects, each associated with a shape pattern for a corresponding man-hour input plan and performance data (Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. 
Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery. Pattern increased velocity: The rate at which work is being completed has increased. Example variants: improved efficiency, more time spent, overestimated work; Paragraph 0209, item 18, storage device; Examiner interprets the team velocity and/or trends as the predefined shape patterns representing distributions of man-hour input over time. In this case, the system predicts task effort (e.g., man-hour) by analyzing trends/patterns of similar projects); receive a man-hour input plan directed to a project, the man-hour input plan comprising time-series data associating a man-hour input amount with a corresponding time period (Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. 
As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution); collate the time-series data with the predefined shape patterns to … the time-series data into a shape corresponding to one of the predefined shape patterns (Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery. Pattern increased velocity: The rate at which work is being completed has increased. 
Example variants: improved efficiency, more time spent, overestimated work; Paragraph 0209, item 18, storage device; Examiner interprets the team velocity and/or trends as the predefined shape patterns representing distributions of man-hour input over time. In this case, the system predicts task effort (e.g., man-hour) by analyzing trends/patterns of similar projects); retrieve at least one past project from the past projects based on a match between the shape pattern of the at least one past project and the shape … for the time-series data (Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks); and predict, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. 
Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; It can be noted that the claim language is written in alternative form. The limitation taught by Cantor et al. is based on a predicted total man-hour effort). Although Cantor et al. discloses all the limitations above and machine learning used to predict a total man-hour effort based on predefined shape patterns (e.g., historical patterns/trends), Cantor et al. does not specifically disclose how the machine learning classifies the historical data. However, Meharwade et al. discloses collate the time-series data with the predefined shape patterns to classify the time-series data into a shape corresponding to one of the predefined shape patterns (Paragraph 0078, The velocity predictor 132 may implement supervised machine learning including a seasonal trend decomposition (STL) model that may represent a time series model. With respect to selection of the seasonal trend decomposition model, since correlation between an influencing variable and a target variable may not be established, the time-series model may be used with the velocity predictor 132. Further, since the variables may include sufficient data points with seasonality and trend, the seasonal trend decomposition model may be used for the velocity predictor 132. The seasonal trend decomposition model may provide for modeling of the seasonality and trend using historical data for use for future predictions; Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning.
The different user story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning. In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. Output of the efforts predictor may include efforts (e.g., in hours); Examiner interprets the seasonal trend decomposition as the classification of the time-series data); retrieve at least one past project from the past projects based on a match between the shape pattern of the at least one past project and the shape classified for the time-series data (Paragraph 0078, The velocity predictor 132 may implement supervised machine learning including a seasonal trend decomposition (STL) model that may represent a time series model. With respect to selection of the seasonal trend decomposition model, since correlation between an influencing variable and a target variable may not be established, the time-series model may be used with the velocity predictor 132. Further, since the variables may include sufficient data points with seasonality and trend, the seasonal trend decomposition model may be used for the velocity predictor 132. The seasonal trend decomposition model may provide for modeling of the seasonality and trend using historical data for use for future predictions; Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning. The different user story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning.
In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. Output of the efforts predictor may include efforts (e.g., in hours); Examiner notes that the historical seasonal trend is used for future predictions); and predict, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan (Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning. The different user story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning. In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. Output of the efforts predictor may include efforts (e.g., in hours); It can be noted that the claim language is written in alternative form. The limitation taught by Meharwade et al. is based on a predicted total man-hour effort). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for predicting a total man-hour effort of a project (e.g., based on similar historical trends/patterns of the project) of the invention of Cantor et al. to further specify how the historical data is classified into different trends/patterns of the invention of Meharwade et al.
because doing so would allow the system to model seasonality and trend using historical data, which is used for future predictions (see Meharwade et al., Paragraph 0078). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 8 (Currently Amended), Cantor et al. discloses a method comprising (Paragraph 0006, A method for predicting a probability distribution of completion times for a project, in one aspect, may comprise receiving a set of unfinished tasks belonging to the project and attributes associated with the unfinished tasks. The method may also comprise obtaining a probability distribution of an estimated effort needed to complete each of the unfinished tasks. The method may further comprise determining the probability distribution of completion time for the project based on the probability distribution of an estimated effort needed to complete each of the unfinished tasks): storing, by at least one processor, in a storage device, predefined shape patterns representing distributions of man-hour input over time for respective man-hour input plans and past projects, each associated with a shape pattern for a corresponding man-hour input plan and performance data (Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. 
An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery. Pattern increased velocity: The rate at which work is being completed has increased. Example variants: improved efficiency, more time spent, overestimated work; Paragraph 0209, item 12, processor; Paragraph 0209, item 18, storage device; Examiner interprets the team velocity and/or trends as the predefined shape patterns representing distributions of man-hour input over time. 
In this case, the system predicts task effort (e.g., man-hour) by analyzing trends/patterns of similar projects); receiving, by at least one processor, a man-hour input plan directed to a project, the man-hour input plan comprising time-series data associating a man-hour input amount with a corresponding time period (Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution; Paragraph 0209, item 12, processor); collating, by at least one processor, the time-series data with the predefined shape patterns to … the time-series data into a shape corresponding to one of the predefined shape patterns (Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). 
Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks; Paragraph 0093, Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery. Pattern increased velocity: The rate at which work is being completed has increased. Example variants: improved efficiency, more time spent, overestimated work; Paragraph 0209, item 18, storage device; Paragraph 0209, item 12, processor; Examiner interprets the team velocity and/or trends as the predefined shape patterns representing distributions of man-hour input over time. In this case, the system predicts task effort (e.g., man-hour) by analyzing trends/patterns of similar projects); retrieving, by at least one processor, at least one past project from the past projects based on a match between the shape pattern of the at least one past project and the shape … for the time-series data (Paragraph 0088, As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. 
The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks; Paragraph 0209, item 12, processor); and predicting, by at least one processor, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0038, The task estimator 108 determining a probability distribution of the estimated effort (e.g., person-hours) for each of the as-yet incomplete tasks belonging to the project; It can be noted that the claim language is written in alternative form. The limitation taught by Cantor et al. is based on a predicted total man-hour effort). Although Cantor et al. discloses all the limitations above and machine learning used for predicting a total man-hour effort based on predefined shape patterns (e.g., historical patterns/trends), Cantor et al. does not specifically disclose how the machine learning classifies the historical data. However, Meharwade et al. 
discloses collating, by at least one processor, the time-series data with the predefined shape patterns to classify the time-series data into a shape corresponding to one of the predefined shape patterns (Paragraph 0078, The velocity predictor 132 may implement supervised machine learning including a seasonal trend decomposition (STL) model that may represent a time series model. With respect to selection of the seasonal trend decomposition model, since correlation between an influencing variable and a target variable may not be established, the time-series model may be used with the velocity predictor 132. Further, since the variables may include sufficient data points with seasonality and trend, the seasonal trend decomposition model may be used for the velocity predictor 132. The seasonal trend decomposition model may provide for modeling of the seasonality and trend using historical data for use for future predictions; Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning. The different use story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning. In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. 
Output of the efforts predictor may include efforts (e.g., in hours); Paragraph 0168, item 2102, processor; Examiner interprets the seasonal trend decomposition as the classification of the time-series data); retrieving, by at least one processor, at least one past project from the past projects based on a match between the shape pattern of the at least one past project and the shape classified for the time-series data (Paragraph 0078, The velocity predictor 132 may implement supervised machine learning including a seasonal trend decomposition (STL) model that may represent a time series model. With respect to selection of the seasonal trend decomposition model, since correlation between an influencing variable and a target variable may not be established, the time-series model may be used with the velocity predictor 132. Further, since the variables may include sufficient data points with seasonality and trend, the seasonal trend decomposition model may be used for the velocity predictor 132. The seasonal trend decomposition model may provide for modeling of the seasonality and trend using historical data for use for future predictions; Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning. The different use story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning. In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. 
Output of the efforts predictor may include efforts (e.g., in hours); Paragraph 0168, item 2102, processor; Examiner notes that the historical seasonal trend is used for future predictions); and predicting, by at least one processor, based on the performance data of the at least one past project, at least one of a total man-hour effort and a quality metric for the project associated with the man-hour input plan (Paragraph 0139, With respect to the efforts predictor of the task predictor 144, the efforts predictor may utilize input and factors such as user story type from sprint planning. The different use story types of the stories may be added to the selected sprint backlog. The task type may be obtained from the task types predictor and/or sprint planning. The story points may be obtained from sprint planning. In this regard, different and unique story points may be assigned to stories added to selected sprint scope. Further, the historical actual efforts against similar type of tasks may be obtained from an associated database. Output of the efforts predictor may include efforts (e.g., in hours); Paragraph 0168, item 2102, processor; It can be noted that the claim language is written in alternative form. The limitation taught by Meharwade et al. is based on a predicted total man-hour effort). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for predicting a total man-hour effort of a project (e.g., based on similar historical trends/patterns of the project) of the invention of Cantor et al. to further specify how the historical data is classified into different trends/patterns of the invention of Meharwade et al. because doing so would allow the system to model seasonality and trend using historical data, which is used for future predictions (see Meharwade et al., Paragraph 0078). 
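Meharwade's velocity predictor, as quoted above, rests on seasonal trend decomposition of a time series. The following is a toy additive decomposition illustrating that idea only: a simplified classical decomposition rather than the loess-based STL the reference names, and the odd-period restriction, function name, and man-hour series are assumptions made for illustration.

```python
def decompose_additive(series, period):
    """Split a series into trend and seasonal parts: trend by a centered
    moving average, seasonal index by averaging the detrended values at
    each phase of the cycle. (STL proper uses loess smoothing instead.)"""
    assert period % 2 == 1, "this sketch only handles odd periods"
    half, n = period // 2, len(series)
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(series[i - half:i + half + 1]) / period
    phases = {p: [] for p in range(period)}
    for i in range(half, n - half):
        phases[i % period].append(series[i] - trend[i])
    seasonal = {p: sum(v) / len(v) for p, v in phases.items()}
    return trend, seasonal

# invented man-hour series: rising trend (i) plus a repeating pattern [+3, 0, -3]
hours = [i + [3, 0, -3][i % 3] for i in range(12)]
trend, seasonal = decompose_additive(hours, period=3)
# the decomposition recovers the linear trend and the [+3, 0, -3] cycle
print(trend[5], seasonal)
```

In a system of the kind claimed, the recovered trend and seasonal components would then serve as the historical basis for future man-hour predictions.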
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 3 and 10 (Currently Amended), which depend from claims 1 and 8, the combination of Cantor et al. and Meharwade et al. discloses all the limitations in claims 1 and 8. Cantor et al. further discloses wherein the at least one processor is to: receive characteristic data relating to the project; and generate ideal time-series data based on the characteristic data (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task. Referring to FIG. 1, an input for project estimation prediction may include a representation of a set of completed tasks 102, if any, belonging to the project to be estimated and a set of attributes associated to those tasks. Another input may include a set of as-yet incomplete tasks (also referred to as unfinished tasks) 104 belonging to the project to be estimated and a set of attributes associated to those tasks. An unfinished task 104, for example, may have attributes (shown at 106) such as the type, the owner, an initial estimate for completion, a due date, a priority status, and other such characteristics; Paragraph 0045, The learning algorithm operates on the input completed tasks and their attributes, for example, as follows. 
The learning algorithm may comprise selecting one or more subsets of the tasks. For each subset of tasks, the learning algorithm may select a subset of the attributes of the included tasks. For each subset of tasks, and for each associated subset of attributes of those tasks, the learning algorithm may use the attribute values to make an estimate of the effort of the tasks which have those attribute values. The above steps may be repeated for various subsets of tasks and various subsets of attributes of those tasks). Regarding claims 4 and 11 (Currently Amended), which depend from claims 3 and 10, the combination of Cantor et al. and Meharwade et al. discloses all the limitations in claims 3 and 10. Cantor et al. further discloses wherein the at least one processor is to: receive milestone data relating to the project; and generate the ideal time-series data based on the milestone data in the at least one past project (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0045, The learning algorithm operates on the input completed tasks and their attributes, for example, as follows. The learning algorithm may comprise selecting one or more subsets of the tasks. For each subset of tasks, the learning algorithm may select a subset of the attributes of the included tasks. For each subset of tasks, and for each associated subset of attributes of those tasks, the learning algorithm may use the attribute values to make an estimate of the effort of the tasks which have those attribute values. 
The above steps may be repeated for various subsets of tasks and various subsets of attributes of those tasks; Paragraph 0075, The task estimator 108 of the present disclosure may consider other kinds of information in addition to task attributes. Such other information may be received as input. For instance, the information input to the methodology of the present disclosure and considered in the learning may include information derived or computed from one or more task attributes. The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values; Paragraph 0152, The above example patterns are not exclusive, for example, two or more can apply at the same time. Combinations of more basic patterns may be used to define composite patterns, e.g., repeatedly rescheduled item and rescheduled blocking item together give a repeatedly rescheduled blocking item. Some of the "patterns" may refer to single events. An issue affecting patterns generally is the scope over which they apply, e.g., release cycle, milestone, or iteration). Regarding claims 6 and 13 (Currently Amended), which depend from claims 1 and 8, the combination of Cantor et al. and Meharwade et al. discloses all the limitations in claims 1 and 8. Cantor et al. further discloses wherein the at least one processor is to: receive milestone data relating to the project; and predict at least one of the total man-hour effort or the quality metric based on the milestone data in the at least one past project (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. 
In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0045, The learning algorithm operates on the input completed tasks and their attributes, for example, as follows. The learning algorithm may comprise selecting one or more subsets of the tasks. For each subset of tasks, the learning algorithm may select a subset of the attributes of the included tasks. For each subset of tasks, and for each associated subset of attributes of those tasks, the learning algorithm may use the attribute values to make an estimate of the effort of the tasks which have those attribute values. The above steps may be repeated for various subsets of tasks and various subsets of attributes of those tasks; Paragraph 0075, The task estimator 108 of the present disclosure may consider other kinds of information in addition to task attributes. Such other information may be received as input. For instance, the information input to the methodology of the present disclosure and considered in the learning may include information derived or computed from one or more task attributes. The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values; Paragraph 0152, The above example patterns are not exclusive, for example, two or more can apply at the same time. 
Combinations of more basic patterns may be used to define composite patterns, e.g., repeatedly rescheduled item and rescheduled blocking item together give a repeatedly rescheduled blocking item. Some of the "patterns" may refer to single events. An issue affecting patterns generally is the scope over which they apply, e.g., release cycle, milestone, or iteration). Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Cantor et al. (US 2013/0325763 A1), in view of Meharwade et al. (US 2019/0122153 A1), in further view of Jagannathan et al. (US 2016/0224908 A1). Regarding claims 5 and 12 (Currently Amended), which depend from claims 1 and 8, the combination of Cantor et al. and Meharwade et al. discloses all the limitations in claims 1 and 8. Cantor et al. further discloses wherein the at least one processor is to: output at least one of the total man-hour effort or the quality metric as an analysis result of the man-hour input plan in response to an adjustment to the man-hour input plan … (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0042, The task estimator 108 in one embodiment may use a learning algorithm over attributes of the input completed tasks to discover which task attributes, of those available, are significant in the estimation of task effort. In one embodiment of the present disclosure, a task estimation model 114 may comprise the output of such a learning algorithm. 
Using the learned information, the task estimator 108 may determine and output a probability distribution of effort for the input as-yet incomplete tasks. Optionally, a categorization of tasks based on the similarity of the effort or time required to complete them may be produced as output; Paragraph 0044, The learning algorithm of the present disclosure in this embodiment may discover (learn) significant attributes and relationships. The learning algorithm provides a different methodology from methods that rely on a fixed set of attributes and with respect to methods that rely on fixed relationships of attributes to effort. The methodology of the present disclosure need not have dependence on any specific attributes; Paragraph 0045, The learning algorithm operates on the input completed tasks and their attributes, for example, as follows. The learning algorithm may comprise selecting one or more subsets of the tasks. For each subset of tasks, the learning algorithm may select a subset of the attributes of the included tasks. For each subset of tasks, and for each associated subset of attributes of those tasks, the learning algorithm may use the attribute values to make an estimate of the effort of the tasks which have those attribute values. The above steps may be repeated for various subsets of tasks and various subsets of attributes of those tasks; Paragraph 0075, The task estimator 108 of the present disclosure may consider other kinds of information in addition to task attributes. Such other information may be received as input. For instance, the information input to the methodology of the present disclosure and considered in the learning may include information derived or computed from one or more task attributes. 
The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery: feature burn-up; repeatedly rescheduled (blocking) feature; scope creep; repeatedly rescheduled item; schedule delayed; schedule advanced; seasonal event; and/or shifted capacity, work effort has shifted from one area to another (e.g., from development to support or vice versa); Examiner notes that the initial estimated effort (e.g., man-hour effort) can be adjusted based on detected changes/patterns similar to historical patterns. It can be noted that the claim language is written in alternative form. The limitation taught by Cantor et al. is based on a predicted total man-hour effort). Although Cantor et al. discloses outputting the total man-hour effort or the quality metric as an analysis result of the man-hour input plan in response to an adjustment to the man-hour input plan (e.g., shifted capacity, work effort has shifted from one area to another), Cantor et al. does not specifically disclose wherein the man-hour input plan is edited by a user. However, Jagannathan et al. discloses wherein the at least one processor is to: output at least one of the total man-hour effort or the quality metric as an analysis result of the man-hour input plan in response to an adjustment to the man-hour input plan by a user (Paragraph 0002, According to some possible implementations, a device may provide a user interface for receiving project information for a software implementation project. 
The project information may be associated with a set of requirements defining the software implementation project. The project information may be associated with a set of deliverables describing results of the software implementation project. The device may generate an initial project plan based on the project information. The device may receive information regarding the initial project plan during fulfillment of the project plan. The device may selectively provide an alert associated with the initial project plan based on receiving the information regarding the initial project plan. The device may selectively generate a modified project plan based on receiving the information regarding the project plan; Paragraph 0019, Furthermore, the host server may determine an impact of a change to a project metric, a new requirement, a new change requirement, or the like, may generate an alert, a modified project plan, or the like based on the impact, and may provide the alert, the modified project plan, and/or information associated therewith; Paragraph 0021, In some implementations, user device 210 may provide a user interface for receiving project information, a selection of a project plan, or the like; Paragraph 0057, As shown in FIG. 5D, host server 220 generates a project plan, of a set of project plans, for the software implementation project. Host server 220 identifies a set of tasks for generating the deliverables, a duration for the set of tasks, an effort for the set of tasks (e.g., in hours, days, or the like), or the like; Paragraph 0073, As shown in FIG. 8, process 800 may include receiving a change requirement for a software implementation project (block 810). For example, host server 220 may receive a change requirement for a software implementation project. A change requirement may refer to a requirement received during completion of a software implementation project that alters one or more requirements of the software implementation project. 
In some implementations, host server 220 may receive the change requirement based on providing a user interface for the user to input the change requirement. Additionally, or alternatively, host server 220 may receive the change requirement based on receiving information and processing the information to determine a change requirement). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for predicting a total man-hour effort of a project (e.g., based on similar historical trends/patterns of the project) of the invention of Cantor et al. and Meharwade et al. to further specify wherein the man-hour input plan is edited by a user (e.g., using a GUI) of the invention of Jagannathan et al. because doing so would allow the system to provide a user interface for receiving project information such as a new change requirement and generate a modified project plan (see Jagannathan et al., Paragraphs 0019 & 0021). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cantor et al. (US 2013/0325763 A1), in view of Meharwade et al. (US 2019/0122153 A1), in further view of Torgerson et al. (US 2006/0041855 A1). Regarding claims 7 and 14 (Currently Amended), which depend from claims 1 and 8, the combination of Cantor et al. and Meharwade et al. discloses all the limitations in claims 1 and 8. Cantor et al. further discloses wherein the at least one processor is to: provide … to change ideal time-series data …, thereby generating the man-hour input plan (Paragraph 0034, FIG. 1 is a diagram illustrating a project estimation predictor in one embodiment of the present disclosure. 
In one embodiment, a probability distribution of completion times for a project may be predicted based on estimating a probability distribution of effort for the tasks in the project. Here "effort" refers to the total work time required for completing a task. Effort is usually measured in man-hours and is different from duration, which may include time during which nobody is working on the task; Paragraph 0042, The task estimator 108 in one embodiment may use a learning algorithm over attributes of the input completed tasks to discover which task attributes, of those available, are significant in the estimation of task effort. In one embodiment of the present disclosure, a task estimation model 114 may comprise the output of such a learning algorithm. Using the learned information, the task estimator 108 may determine and output a probability distribution of effort for the input as-yet incomplete tasks. Optionally, a categorization of tasks based on the similarity of the effort or time required to complete them may be produced as output; Paragraph 0044, The learning algorithm of the present disclosure in this embodiment may discover (learn) significant attributes and relationships. The learning algorithm provides a different methodology from methods that rely on a fixed set of attributes and with respect to methods that rely on fixed relationships of attributes to effort. The methodology of the present disclosure need not have dependence on any specific attributes; Paragraph 0045, The learning algorithm operates on the input completed tasks and their attributes, for example, as follows. The learning algorithm may comprise selecting one or more subsets of the tasks. For each subset of tasks, the learning algorithm may select a subset of the attributes of the included tasks. 
For each subset of tasks, and for each associated subset of attributes of those tasks, the learning algorithm may use the attribute values to make an estimate of the effort of the tasks which have those attribute values. The above steps may be repeated for various subsets of tasks and various subsets of attributes of those tasks; Paragraph 0075, The task estimator 108 of the present disclosure may consider other kinds of information in addition to task attributes. Such other information may be received as input. For instance, the information input to the methodology of the present disclosure and considered in the learning may include information derived or computed from one or more task attributes. The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values; Paragraph 0151, The following illustrates example patterns that may be identified in the present disclosure for diagnoses, for instance, as issues that threaten on-time delivery: feature burn-up; repeatedly rescheduled (blocking) feature; scope creep; repeatedly rescheduled item; schedule delayed; schedule advanced; seasonal event; and/or shifted capacity, work effort has shifted from one area to another (e.g., from development to support or vice versa); Examiner notes that the initial estimated effort (e.g., man-hour effort) can be adjusted based on detected patterns similar to historical patterns). Although Cantor et al. discloses generating a new man-hour input plan in response to detecting changes/patterns in the initial man-hour input plan (e.g., shifted capacity, work effort has shifted from one area to another), Cantor et al. 
does not specifically disclose wherein the man-hour input plan is edited by a user using a GUI. However, Torgerson et al. discloses wherein the at least one processor is to: provide, via a graphical user interface (GUI), an interface which allows a user to change ideal time-series data, thereby generating the man-hour input plan (Paragraph 0016, Also shown in FIG. 1 is a supply-and-demand database 36 that stores timesheet data and other information pertaining to time expended by, and/or potentially expendable by, software developers working for the enterprise. A supply-and-demand application 38 provides time-related information available to the system 20 for use in generating time estimates in connection with work efforts; Paragraph 0040, When the user activates an activator 420, the component information is saved, e.g., in the database 44 (see FIG. 1), and the GUI may display a screen 450 as shown in FIG. 8. Using the screen 450, the user may estimate a time for meeting the selected component specification. In the present configuration, an estimate is determined by summing a plurality of base hours multiplied by complexity factors as further described below. A "Calc Estimate" area 454 displays a calculated estimate based on complexities and base hours for a component type 458. An "Overridden Estimate" area 460 indicates an estimate that has been overridden by the user. The user may enter an estimate that overrides the overridden estimate when an estimate is calculated in a "New Override" field 464. A "Use Override?" checkbox 468 may be used to indicate whether the override or calculated estimate is to be used; Paragraph 0042, When the link 478 is activated, an "Update Team Base Hours" screen 500 may be displayed as shown in FIG. 9. A user may change an hours estimate for any of a plurality of stages of software development for the specified component type 458). 
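Torgerson's paragraph 0040, quoted above, describes a calculated estimate formed by summing base hours multiplied by complexity factors, which the user may replace via the "Use Override?" choice. That arithmetic can be sketched as follows; the stage names and factor values are invented for illustration and do not come from the reference.

```python
def component_estimate(base_hours, complexity, override=None, use_override=False):
    """Calculated estimate = sum over development stages of base hours
    times the stage's complexity factor; when the user ticks the override
    option and supplies a value, that overridden estimate is used instead."""
    calculated = sum(base_hours[stage] * complexity[stage] for stage in base_hours)
    return override if (use_override and override is not None) else calculated

base = {"design": 8.0, "build": 20.0, "test": 6.0}    # base hours per stage
factors = {"design": 1.5, "build": 1.2, "test": 1.0}  # complexity factors
print(component_estimate(base, factors))              # calculated: 12 + 24 + 6 = 42.0
print(component_estimate(base, factors, override=30.0, use_override=True))  # 30.0
```

The override path mirrors the screen-450 behavior described in the quotation: the calculated estimate is always available, but the user-entered value wins when the checkbox is set.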
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for predicting a total man-hour effort of a project (e.g., based on similar historical trends/patterns of the project) of the invention of Cantor et al. and Meharwade et al. to further specify wherein the man-hour input plan is edited by a user (e.g., using a GUI) of the invention of Torgerson et al. because doing so would allow the user to enter an estimate that overrides the previous estimate (see Torgerson et al., Paragraph 0040). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Amit (US 2022/0122025 A1) – discloses that effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as a basis for predicting project costs and for determining an optimal allocation of software developer time. An estimate for a new project can typically be derived by considering characteristics of the new project as well as characteristics of previous similar projects (see at least Paragraph 0002).

Jiang (Jiang, Z. and Naude, P., 2007. An examination of the factors influencing software development effort. International Journal of Computer Information and Systems Science and Engineering, 1(3), pp.182-191) – discloses guidance as to possible factors significant to development effort, including project size, average team size, development language and so on (see at least Page 933, Table 1).
Mori Keiji (JP 2010224889) – discloses that each record of the Gantt chart includes the project member (hereinafter also simply referred to as “person in charge”) who is to execute the task; the man-hours assumed as the amount of work that the person in charge must perform before the task is completed (hereinafter referred to as “scheduled man-hours” as appropriate) and the progress rate of the scheduled man-hours are recorded in association with each other. The unit of the scheduled man-hours in the present embodiment is the “person-day,” indicating the amount of work for one day by one person in charge (see at least Page 2).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).

Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Mon-Thu 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
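The Gantt-chart record Mori Keiji (JP 2010224889) describes above associates a task, its person in charge, scheduled man-hours measured in person-days, and a progress rate. A minimal Python sketch of such a record follows; the field names and figures are illustrative assumptions, not the reference's actual data model.

```python
from dataclasses import dataclass

@dataclass
class GanttRecord:
    """One Gantt-chart record per Mori Keiji (JP 2010224889): a task, the
    person in charge who is to execute it, the scheduled man-hours (in
    person-days, i.e., one day's work by one person), and the progress
    rate against those scheduled man-hours. Names are illustrative."""
    task: str
    person_in_charge: str
    scheduled_person_days: float
    progress_rate: float  # fraction of scheduled man-hours completed, 0.0-1.0

    def remaining_person_days(self) -> float:
        """Scheduled work not yet completed, in person-days."""
        return self.scheduled_person_days * (1.0 - self.progress_rate)

rec = GanttRecord("implement parser", "Sato", 10.0, 0.5)
print(rec.remaining_person_days())  # 5.0
```

Keeping scheduled man-hours and progress rate in the same record is what lets a plan be re-projected as progress is updated.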
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson, can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.P./Examiner, Art Unit 3624
/PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624

Prosecution Timeline

Sep 13, 2024
Application Filed
Dec 17, 2025
Non-Final Rejection — §101, §103
Feb 16, 2026
Response Filed
Mar 11, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240
SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS
2y 5m to grant Granted Oct 01, 2024
Patent 12014298
AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS
2y 5m to grant Granted Jun 18, 2024
Patent 11966927
Multi-Task Deep Learning of Client Demand
2y 5m to grant Granted Apr 23, 2024
Patent 11941651
LCP Pricing Tool
2y 5m to grant Granted Mar 26, 2024
Patent 11847602
SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES
2y 5m to grant Granted Dec 19, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
46%
With Interview (+27.9%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
