DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/26/26 has been entered, in which Applicant presented additional arguments for consideration. Claims 1-20 are pending in the present application and are under examination on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s arguments have been fully considered but are not persuasive.
The 35 U.S.C. § 101 rejections of claims 1-20 are maintained, notwithstanding Applicant’s explanations, for the reasons set forth below.
The 35 U.S.C. § 103 rejections of claims 1-20 are maintained, notwithstanding Applicant’s explanations, for the reasons set forth below.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under the broadest reasonable interpretation of the claimed invention, Examiner finds that Applicant invented a method and system for determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:
Step 1: The claims are directed to statutory categories, namely a "method" (claims 8-14), a "system" (claims 1-7), and a "manufacture" in the form of a non-transitory computer-readable medium (claims 15-20).
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1:
merge predicted task time durations and task history data, wherein the predicted task time durations are based on a machine learning model;
calculate a difference between a start time and an end time for each of one or more completed tasks to calculate a duration for each of the one or more completed tasks;
calculate one or more error metrics based, at least in part, on the duration of each of the one or more completed tasks and a corresponding predicted task time duration for each of the one or more completed tasks;
average the one or more error metrics over a period of time and store the averaged one or more error metrics as error metric data;
determine whether the averaged one or more error metrics exceed a threshold;
in response to determining that the threshold has not been exceeded, continue use of the machine learning model to predict task time durations; and
in response to determining that the threshold has been exceeded, retrain the machine learning model via a model training loop.
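Purely for illustration of the sequence of limitations recited above, the steps may be sketched as follows. This is an Examiner-provided sketch only; all identifiers are hypothetical and form no part of the claims or of any cited reference.

```python
# Illustrative sketch of the recited limitations; all names are hypothetical.
from datetime import datetime


def averaged_error(completed_tasks, predicted_durations):
    """Merge task history with predicted durations and average an error metric.

    completed_tasks: list of dicts with "task_id", "start", and "end" keys.
    predicted_durations: mapping of task_id -> predicted duration in hours
    (a stand-in for the machine learning model's output).
    """
    errors = []
    for task in completed_tasks:
        # Duration = difference between the end time and the start time.
        actual_hours = (task["end"] - task["start"]).total_seconds() / 3600.0
        predicted_hours = predicted_durations[task["task_id"]]
        errors.append(abs(actual_hours - predicted_hours))  # one simple error metric
    # Average the error metric over the period covered by the completed tasks.
    return sum(errors) / len(errors)


def needs_retraining(completed_tasks, predicted_durations, threshold):
    """True when the averaged error exceeds the threshold (retrain the model);
    False when it does not (continue using the model)."""
    return averaged_error(completed_tasks, predicted_durations) > threshold
```

Under this sketch, a result of False corresponds to continued use of the model to predict task time durations, and True corresponds to entering the model training loop.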
Independent claims 8 and 15 recite substantially similar claim language.
Dependent claims 2-7, 9-14, and 16-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 15 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.
The limitations in claims 1-20 above fall well within the groupings of subject matter identified by the courts as abstract concepts; specifically, the claims are found to correspond to the category of:
"Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained and thus constitute a method of organizing human activity including at least commercial or business interactions or relations and/or management of a user's personal behavior;
Step 2A - Prong 2: Claims 1-20 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application, specifically the claims recite the additional elements of:
"A system for determining machine learning model retraining, comprising: a computer, comprising a processor and a memory, the computer configured to: / A computer-implemented method for determining machine learning model retraining, comprising: / A non-transitory computer-readable medium embodied with software for determining machine learning model retraining, the software when executed is configured to:" (claims 1, 8, and 15); however, the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application; furthermore, the high-level recitation of receiving data from a generic "system" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)), and thus further fails to integrate the abstract idea into a practical application;
“instruct automated machinery to obtain items from inventory and packing the items in a configuration for shipment,” (claims 1, 8, and 15); “wherein the model training loop is based, at least in part, on task history data received in real time from a warehouse management system,” (claims 4, 11, and 18); "wherein the computer is further configured to: store previous machine learning models in a machine learning model directory" (claims 8 and 14); however, the sending and receiving of data from these various sources is merely insignificant extra-solution activity, e.g. data gathering, and/or merely an attempt at limiting the abstract idea to a particular field of use and thus fails to integrate the recited abstract idea into a practical application (e.g. MPEP 2106.05(h): "Examiners should keep in mind that this consideration overlaps with other considerations, particularly insignificant extra-solution activity (see MPEP § 2106.05(g)). For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).");
Step 2B: Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements as described above with respect to Step 2A Prong 2 merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of analysis using a "system" and "machine learning", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained.
Claims 1-20 are accordingly rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis.
For further authority and guidance, see:
MPEP § 2106
https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2013/0325763 to Cantor et al. (hereafter referred to as Cantor) in view of U.S. Patent Application Publication Number 2016/0078361 to Brueckner et al. (hereafter referred to as Brueckner) and in further view of U.S. Patent Number 11,030,574 to Grande et al. (hereafter referred to as Grande).
As per claim 1, Cantor teaches:
A system for determining machine learning model retraining, comprising: a computer, comprising a processor and a memory, the computer configured to: (Paragraph Number [0209] teaches the components of computer system may include, but are not limited to, one or more processors or processing units 12, a system memory 16, and a bus 14 that couples various system components including system memory 16 to processor 12. The processor 12 may include a module 10 that performs the methods described herein. The module 10 may be programmed into the integrated circuits of the processor 12, or loaded from memory 16, storage device 18, or network 24 or combinations thereof).
merge predicted task time durations and task history data, wherein the predicted task time durations are based on a machine learning model (Paragraph Number [0107] teaches this analysis may be partially based on information about the estimates and the likelihood of certain tasks taking certain lengths of time as well as experience with how those estimates have gone in the past. Paragraph Number [0194] teaches a full set of capabilities were described above with an example scenario. The probability prediction described above, for example, may take as input, the plan items. A triangular distribution of the time it takes to complete the work on those plan items may be produced by an estimator. Plan item dependencies may be also provided as input. As the project is going along, how the actuals (actual times it took to complete work) compare to those estimates can be determined. A Monte Carlo simulation may be run to compute the expected range of the whole schedule, for example, subject to constraints such as the number of team members available for work, and e.g., task parallelism).
calculate a difference between a start time and an end time for each of one or more completed tasks to calculate a duration for each of the one or more completed tasks (Paragraph Number [0039] teaches the above methodology provides a logical chain from the effort of individual tasks to the completion time of the project. In one embodiment of the present disclosure, both tasks from the project to be estimated and tasks from other projects may be considered. In one embodiment of the present disclosure, attributes of tasks play a role in determining the estimates. The resulting estimates of task effort and project time are provided in the form of probability distributions. Paragraph Number [0041] teaches the task estimator 108 may also take as input the author, date and time of any work performed on the task. The task estimator 108 in considering the one or more completed tasks, if any, for the project to be estimated, may specifically consider the date and time of work performed on the completed tasks. The task estimator 108 in considering the one or more completed tasks, if any, not belonging to the project to be estimated, may specifically consider the date and time of work performed on that completed task. In addition to the date and time of worked performed, the effort estimator 108 may also take into consideration the "state" of the task at different points in time).
calculate one or more error metrics based, at least in part, on the duration of each of the one or more completed tasks and a corresponding predicted task time duration for each of the one or more completed tasks (Paragraph Number [0043] teaches the task estimator 108 functionality may be repeated one or more times during the course of the execution of a project, each repetition of the process may take different input data, and each repetition of the process may produce different results, including possibly different task estimation models, different estimates of the distribution of effort for each task, different categorizations of tasks, and different sets of attributes associated with categories of task. Paragraph Number [0046] teaches the learning algorithm may also comprise evaluating the accuracy of the various estimates of task effort produced for alternative subsets of tasks and alternative subsets of attributes. Based on the evaluation, the learning algorithm may determine the particular subsets of tasks and attributes that lead to the best overall prediction of the effort. Paragraph Number [0189] teaches if the expert assessment methodology discovers by the end of the next iteration that the team did not actually burn off 20 story points, it can then mark this expert assessment "invalid" and inactivate it, and it will no longer be used in computing the probability of on-time completion. If it turns out that the expert assessment correctly predicted what would happen, it will be marked "valid" and it will be expired at the end of the period to which the technical lead indicated that it applies, as it will no longer be needed--the data itself will cause the correct probability computations to occur. But the technical lead will be able to use his successful, validated expert assessment in the future, to help support future expert assessments he offers in other situations where he believes he knows something more than the data is showing. And it will help people to trust his judgment on that).
determine whether the averaged one or more error metrics exceed a threshold (Paragraph Number [0152] teaches parameters in general may have configurable parameters for factors like scope, threshold levels, and other factors. Different patterns may be associated with different kinds of detailed information, e.g., number of times rescheduled, or amount of time past due, or degree of increased risk, or amount of scope creep, etc. Paragraph Number [0076] teaches given the work required in a development project, specified as a set of tasks, a methodology in one embodiment of the present disclosure predicts when the project is likely to deliver. The methodology in one embodiment reasons about an uncertain future entity: the delivery date. The project delivery date is uncertain because it depends on a number of events whose occurrence cannot be known for sure, such as the completion of subtasks, the successful integration of components, etc. One can only take imprecise or incomplete measurements of such events. Thus, instead of modeling a single future delivery date, the methodology of the present disclosure in one embodiment treats the delivery date as a range of dates, together with a probability function that provides the likelihood of delivering on each day in the range. Modeling the delivery date in this fashion, as a probability distribution, enables the reasoning about the likelihood of delivery by a certain date. (Examiner asserts that meeting a delivery date for a deliverable constitutes meeting a threshold)).
in response to determining that the threshold has not been exceeded, continue use of the machine learning model to predict task time durations (Paragraph Number [0088] teaches as the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks. Paragraph Number [0105] teaches the probability distribution of an estimated effort needed to complete each of the unfinished tasks may be determined (e.g., at 806) using machine learning, which learns from available data associated with the completed tasks. The learning may be then applied to the unfinished tasks to estimate how long those unfinished tasks will take).
in response to determining that the threshold has been exceeded, retrain the machine learning model via a model training loop (Paragraph Number [0089] teaches assume a scenario where tasks are either enhancements or defects. Consider a training set having 10 completed tasks: 5 enhancement tasks that each took 2 days and 5 defect tasks that each took 1 day. From this training set of already-completed tasks, the machine learner might build a model that contextualizes its prediction depending on the type of task it is given. In this case, the model may simply encode that enhancements usually take 2 days and defects 1 day. This model can now be applied to new tasks: if the new task is an enhancement, the model predicts 2 days, if it is a defect the prediction is 1 day. Clearly, this is an over-simplification: a real training set will not be as simple and bipolar, where different types of tasks always take exactly the same amount of time. To handle a more diverse (and realistic) training set, the machine learner may need to use a variety of attributes of the task (such as owner, task type, description, priority) in order to discriminate the elements in the training set to determine which ones are most similar to a new piece of work. Paragraph Number [0090] teaches with many machine learning techniques, there is a tendency to overfit, which means that the technique will treat the training data as more representative of new data than it really is. To compensate for this tendency, the machine learner in one embodiment of the present disclosure builds a series of models on different training sets, as illustrated in FIG. 5. For example, the machine learner may use multiple training sets instead of one training set. Each training set gives rise to a different model. All models may be applied simultaneously to obtain a plurality of discrete (single-valued) estimates. Each model produces an estimate, the plurality of estimates forms the distribution of estimates).
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
average the one or more error metrics over a period of time and store the averaged one or more error metrics as error metric data (Paragraph Number [0093] teaches a variety of different statistics may be obtained in either phase. For numeric variables, basic statistics 765 may include the mean, median, minimum, maximum, and standard deviation. Numeric variables may also be binned (categorized into a set of ranges such as quartiles or quintiles); such bins 767 may be used for the construction of histograms that may be displayed to the client. Depending on the nature of the distribution of the variable, either linear or logarithmic bin boundaries may be selected. In some embodiments, correlations 768 between different variables may be computed as well. In at least one embodiment, the MLS may utilize the automatically generated statistics (such as the correlation values) to identify candidate groups 769 of variables that may have greater predictive power than others. (See also Paragraph Number [0116])).
Both Cantor and Brueckner are directed to generating machine learning models. Cantor discloses determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained. Brueckner improves upon Cantor by disclosing determining error metrics based upon time measurements, filtering the data, and using decision trees. One of ordinary skill in the art would be motivated to further include determining error metrics based upon time measurements, filtering the data, and using decision trees, to efficiently improve upon the data inputs used to train the machine learning model. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained in Cantor to further utilize determining error metrics based upon time measurements, filtering the data, and using decision trees as disclosed in Brueckner, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach instructing automated machinery to obtain items from inventory and pack the items in a configuration for shipment, which is taught by the following citations from Grande:
instruct automated machinery to obtain items from inventory and packing the items in a configuration for shipment (Col. 9 lines 16-44 teach one or more computers 160 associated with supply chain network 100 may instruct automated machinery (i.e., robotic warehouse systems, robotic inventory systems, automated guided vehicles, mobile racking units, automated robotic production machinery, robotic devices and the like) to adjust product mix ratios, inventory levels at various stocking points, production of products of manufacturing equipment, proportional or alternative sourcing of one or more supply chain entities 150, and the configuration and quantity of packaging and shipping of items based on one or more product assortments created in retail planner 110, current inventory or production levels, and/or one or more other factors described herein, and/or. Inventory data 228 may comprise current or projected inventory quantities or states, the current level of inventory for products at one or more stocking points across the supply chain network 100, order rules that describe one or more rules or limits on setting an inventory policy, including, but not limited to, a minimum order quantity, a maximum order quantity, a discount, a step-size order quantity, and batch quantity rules. According to some embodiments, retail planner 110 accesses and stores inventory data 228 in database 114, which may be used by retail planner 110 to place orders, set inventory levels at one or more stocking points, initiate manufacturing of one or more products, or the like. In addition, or as an alternative, inventory data 228 may be updated by receiving current item quantities, mappings, or locations from the one or more imaging devices 120, inventory system 130, and/or transportation network 140).
The combination of Cantor and Brueckner, as well as Grande, is directed to generating machine learning models. The combination of Cantor and Brueckner discloses determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained. Grande improves upon the combination of Cantor and Brueckner by disclosing instructing automated machinery to obtain items from inventory and pack the items in a configuration for shipment. One of ordinary skill in the art would be motivated to further include instructing automated machinery to obtain items from inventory and pack the items in a configuration for shipment, to efficiently implement the machine learning algorithm in an applied form. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained in the combination of Cantor and Brueckner to further instruct automated machinery to obtain items from inventory and pack the items in a configuration for shipment as disclosed in Grande, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 8, Cantor teaches:
A computer-implemented method for determining machine learning model retraining, comprising: (Paragraph Number [0209] teaches the components of computer system may include, but are not limited to, one or more processors or processing units 12, a system memory 16, and a bus 14 that couples various system components including system memory 16 to processor 12. The processor 12 may include a module 10 that performs the methods described herein. The module 10 may be programmed into the integrated circuits of the processor 12, or loaded from memory 16, storage device 18, or network 24 or combinations thereof).
The remaining claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claim 15, Cantor teaches:
A non-transitory computer-readable medium embodied with software for determining machine learning model retraining, the software when executed is configured to: (Paragraph Number [0212] teaches system memory 16 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory or others. Computer system may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 18 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (e.g., a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 14 by one or more data media interfaces).
The remaining claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claims 2, 9, and 16, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1, 8, and 15 respectively.
In addition, Cantor teaches:
to enable a direct comparison between the task history data and goal time data (Paragraph Number [0088] teaches as the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks. Paragraph Number [0105] teaches the probability distribution of an estimated effort needed to complete each of the unfinished tasks may be determined (e.g., at 806) using machine learning, which learns from available data associated with the completed tasks. The learning may be then applied to the unfinished tasks to estimate how long those unfinished tasks will take).
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
filter and clean the task history data (Paragraph Number [0146] teaches an MLS request handler 180 may receive a record extraction request 2310 indicating a sequence of filtering operations that are to be performed on a specified data set located at one or more data sources, such as some combination of shuffling, splitting, sampling, partitioning (e.g., for parallel computations such as map-reduce computations, or for model training operations/sessions that overlap with each other in time and may overlap with each other in the training sets used), and the like. A filtering plan generator 2380 may generate a chunk mapping of the specified data set, and a plurality of jobs to accomplish the requested sequence of filtering operations (either at the chunk level, the record level, or both levels) in the depicted embodiment, and insert the jobs in one or more MLS job queues 142. Paragraph Number [0173] teaches it is noted that a similar approach towards consistency or repeatability may be taken for other types of input filtering operations, such as sampling or shuffling, in at least some embodiments. For example, in one embodiment, a client may wish to ensure shuffle repeatability (i.e., that the results of one shuffle request can be re-obtained if a second shuffle request with the same input data and same request parameters is made later) or sample repeatability (i.e., that the same observation records or chunks are retrievable from a data set as a result of repeated sample requests) (See also Paragraph Numbers [0130] and [0140])).
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 3, 10, and 17, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1, 8, and 15 respectively.
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
wherein the one or more error metrics comprise one or more of: one or more root mean square error calculations, one or more mean absolute percentage error calculations and one or more mean absolute error calculations (Paragraph Number [0155] teaches a variety of measures 2630 of the accuracy or quality may be obtained in different embodiments, depending on the type of model being used—e.g., the root mean square error (RMSE) or root mean square deviation (RMSD) may be computed for linear regression models, the ratio of the sum of true positives and true negatives to the size of the test set may be computed for binary classification problems, and so on).
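For illustration only, the three error metrics recited in the claim can be computed directly from actual and predicted task durations as follows; the data values are hypothetical:

```python
import math

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error (actual values assumed nonzero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical completed-task durations (hours) vs. model predictions.
actual = [4.0, 6.0, 10.0]
predicted = [5.0, 6.0, 8.0]
```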
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 4, 11, and 18, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1, 8, and 15 respectively.
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
wherein the model training loop is based, at least in part, on task history data received in real time from a warehouse management system (Paragraph Number [0089] teaches results of model executions, such as predictions 608 (values predicted by a model for a dependent variable in a scenario in which the actual values of the independent variable are not known) and model evaluations 610 (measures of the accuracy of a model, computed when the predictions of the model can be compared to known values of dependent variables) may also be stored as artifacts by the MLS in some embodiments. In addition to the artifact types illustrated in FIG. 6, other artifact types may also be supported in some embodiments—e.g., objects representing network endpoints that can be used for real-time model execution on streaming data (as opposed to batch-mode execution on a static set of data) may be stored as artifacts in some embodiments, and client session logs (e.g., recordings of all the interactions between a client and the MLS during a given session) may be stored as artifacts in other embodiments).
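For illustration only, a model training loop fed by task-history records arriving in real time might, in broad strokes, be structured as follows; the function names and the retraining threshold are hypothetical assumptions, not drawn from either reference:

```python
def training_loop(record_stream, train, evaluate, threshold=2.0):
    """Accumulate incoming task-history records; retrain whenever the
    current model's error on recent records exceeds the threshold."""
    history, model = [], None
    for record in record_stream:
        history.append(record)
        if model is None or evaluate(model, history[-10:]) > threshold:
            model = train(history)
    return model
```

The `train` and `evaluate` callables stand in for whatever model-fitting and error-metric routines a real system would supply.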
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 5, 12, and 19, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1, 8, and 15 respectively.
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
wherein the predicted task time durations are tailored to one or more individual tasks of one or more individual locations (Paragraph Number [0124] teaches an MLS client 164 may submit a recipe execution request 1601 that includes parameter auto-tune settings 1606. For example, the client 164 may indicate that the bin sizes/boundaries for quantile binning of one or more variables in the input data should be chosen by the service, or that the number of words in an n-gram should be chosen by the service. Parameter exploration and/or auto-tuning may be requested for various clustering-related parameters in some embodiments, such as the number of clusters into which a given data set should be classified, the cluster boundary thresholds (e.g., how far apart two geographical locations can be to be considered part of a set of “nearby” locations), and so on. Paragraph Number [0147] teaches examples of constituent elements of a record extraction request that may be submitted by a client using a programmatic interface of an I/O (input-output) library implemented by a machine learning service, according to at least some embodiments. As shown, observation record (OR) extraction request 2401 may include a source data set indicator 2402 specifying the location(s) or address(es) from which the input data set is to be retrieved. Paragraph Number [0154] teaches after the first filtering operation of the sequence is performed in memory at the MLS servers, the remaining filtering operations (if any) may be performed in place in the depicted embodiment, e.g., without copying the chunks to persistent storage or re-reading the chunks from their original source locations (element 2519)).
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 6, 13, and 20, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1, 8, and 15 respectively.
Cantor teaches determining if a machine learning algorithm is accurate enough to be useful or if it needs to be retrained, but does not explicitly teach determining error metrics based upon time measurements, filtering the data, and using decision trees, which are taught by the following citations from Brueckner:
wherein the machine learning model comprises a decision tree (Paragraph Number [0179] teaches a number of machine learning methodologies, for example techniques used for classification and regression problems, may involve the use of decision trees. FIG. 33 illustrates an example of a decision tree that may be generated for predictions at a machine learning service).
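For illustration only, a decision tree for predicting task duration can be as simple as the following hand-coded sketch; the split attributes and leaf values are hypothetical, not taken from the cited figure:

```python
# A toy decision tree predicting task duration (hours) from two attributes.
# Internal nodes test an attribute; leaves return a predicted duration.
def predict_duration(task):
    """Walk a small hard-coded decision tree."""
    if task["type"] == "feature":
        return 16.0 if task["priority"] >= 2 else 12.0
    else:  # bugfix branch
        return 6.0 if task["priority"] >= 2 else 4.0

assert predict_duration({"type": "bugfix", "priority": 1}) == 4.0
```

A learned tree would induce the splits and leaf values from training data rather than hard-coding them, but the prediction step — walking from the root to a leaf — is the same.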
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 7 and 14, the combination of Cantor, Brueckner, and Grande teaches each of the limitations of claims 1 and 8 respectively.
In addition, Cantor teaches:
wherein the computer is further configured to: store previous machine learning models in a machine learning model directory (Paragraph Number [0202] teaches referring to FIG. 12, the middle chart labeled "Delivery Date Risk Trend" 1212 shows how the predicted "Likelihood of Delivery" has changed over time. As the timeline of the project progresses, a methodology of the present disclosure calculates predictions of completion dates based on information available at the time and those predictions are stored).
Response to Arguments
Applicant’s arguments filed 1/26/2026 have been fully considered but they are not persuasive.
Applicant argues that the claims do not recite an abstract idea. (See Applicant’s Remarks, 1/26/2026, pgs. 7-9). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulate the abstract concepts is recited in the above 35 USC 101 analysis. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations or to further implement abstract concepts that further organize human activities (i.e., humans completing tasks). The claims do not recite any particular use for these determinations that improves upon the underlying computer technology. Instead, Examiner asserts that the claim language is only used as an implementation of the abstract concepts utilizing technology. The claims are not directed towards the technology, but are instead directed towards the overarching abstract concepts, and in this way the claims generally link the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims, nor do the claims recite significantly more than the underlying abstract concepts.
Applicant argues that the claims are not taught by the combination of cited references. (See Applicant’s Remarks, 1/26/2026, pgs. 9-12). Specifically, Applicant argues that the Cantor reference does not teach the limitation “calculate one or more error metrics based, at least in part, on the duration of each of the one or more completed tasks and a corresponding predicted task time duration for each of the one or more completed tasks.” Examiner respectfully disagrees. The following citations from Cantor are applicable:
Paragraph Number [0043] teaches the task estimator 108 functionality may be repeated one or more times during the course of the execution of a project, each repetition of the process may take different input data, and each repetition of the process may produce different results, including possibly different task estimation models, different estimates of the distribution of effort for each task, different categorizations of tasks, and different sets of attributes associated with categories of task. Paragraph Number [0046] teaches the learning algorithm may also comprise evaluating the accuracy of the various estimates of task effort produced for alternative subsets of tasks and alternative subsets of attributes. Based on the evaluation, the learning algorithm may determine the particular subsets of tasks and attributes that lead to the best overall prediction of the effort. Paragraph Number [0189] teaches if the expert assessment methodology discovers by the end of the next iteration that the team did not actually burn off 20 story points, it can then mark this expert assessment "invalid" and inactivate it, and it will no longer be used in computing the probability of on-time completion. If it turns out that the expert assessment correctly predicted what would happen, it will be marked "valid" and it will be expired at the end of the period to which the technical lead indicated that it applies, as it will no longer be needed--the data itself will cause the correct probability computations to occur. But the technical lead will be able to use his successful, validated expert assessment in the future, to help support future expert assessments he offers in other situations where he believes he knows something more than the data is showing. And it will help people to trust his judgment on that. Paragraph Number [0152] teaches patterns in general may have configurable parameters for factors like scope, threshold levels, and other factors.
Different patterns may be associated with different kinds of detailed information, e.g., number of times rescheduled, or amount of time past due, or degree of increased risk, or amount of scope creep, etc. Paragraph Number [0076] teaches given the work required in a development project, specified as a set of tasks, a methodology in one embodiment of the present disclosure predicts when the project is likely to deliver. The methodology in one embodiment reasons about an uncertain future entity: the delivery date. The project delivery date is uncertain because it depends on a number of events whose occurrence cannot be known for sure, such as the completion of subtasks, the successful integration of components, etc. One can only take imprecise or incomplete measurements of such events. Thus, instead of modeling a single future delivery date, the methodology of the present disclosure in one embodiment treats the delivery date as a range of dates, together with a probability function that provides the likelihood of delivering on each day in the range. Modeling the delivery date in this fashion, as a probability distribution, enables the reasoning about the likelihood of delivery by a certain date. (Examiner asserts that meeting a delivery date for a deliverable constitutes meeting a threshold)
Examiner asserts that the above-cited portions of the Cantor reference teach determining the validity of a specific task (i.e., an error metric) as well as delivery dates and date ranges of specific tasks (i.e., the duration of tasks and a predicted time duration). As such, Examiner asserts that the Cantor reference does read on the claim limitation cited above. Examiner is not persuaded by the distinctions Applicant is attempting to make.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H DIVELBISS, whose telephone number is (571) 270-0166. The examiner can normally be reached from 7:30 am to 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about PAIR, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/M. H. D./
Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624