Prosecution Insights
Last updated: April 19, 2026
Application No. 18/583,070

System and Method to Predict Service Level Failure in Supply Chains

Final Rejection — §101, §103
Filed: Feb 21, 2024
Examiner: CHEIN, ALLEN C
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Blue Yonder Group Inc.
OA Round: 2 (Final)
Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 44% (189 granted / 429 resolved; -7.9% vs TC avg)
Interview Lift: +40.3% on resolved cases with an interview (strong)
Avg Prosecution: 3y 6m (typical timeline)
Total Applications: 468 across all art units (39 currently pending)

Statute-Specific Performance

§101: 28.3% (-11.7% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Deltas shown against the Tech Center average estimate • Based on career data from 429 resolved cases
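The statute figures above pair the examiner's per-statute allowance rate with a delta against the Tech Center average. Assuming those deltas are expressed in percentage points, the implied TC baseline can be backed out by subtracting the delta from the examiner's rate; a minimal sketch:

```python
# Examiner allowance rate per statute with delta vs. Tech Center average
# (values from the figures above; deltas assumed to be percentage points).
stats = {
    "101": (28.3, -11.7),
    "103": (47.9, +7.9),
    "102": (7.8, -32.2),
    "112": (14.5, -25.5),
}

# Implied TC baseline for each statute: examiner rate minus delta.
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(tc_avg)  # every statute backs out to 40.0
```

Notably, every statute backs out to the same 40.0% baseline, consistent with the chart using a single TC-wide average estimate rather than per-statute baselines.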

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of the Claims

Claims 1, 8, and 15 are amended. Claims 1-20 are pending. The rejection under 35 USC 101 is maintained.

Response to Applicant Remarks

Applicant’s well-articulated remarks have been considered but are unpersuasive for the reasons below. Regarding the rejection under 35 USC 101, Applicant argues that the claimed invention is a practical application of an abstract idea, citing the Desjardins decision. (Applicant’s 2/13/26 remarks, p. 10). The examiner respectfully disagrees.

The examiner notes that in Ex Parte Desjardins, the claimed machine learning training method reduced storage requirements and preserved task performance across sequential training. The ARP characterized this as an improvement “in training the machine learning model itself,” not merely an abstract algorithm implemented on a generic computer. In contrast, the examiner does not discern a comparable improvement to the technology of machine learning in Applicant’s invention. That is, Applicant’s invention appears to take historical data, train a model, make predictions with the model, display results, and use new data to improve the model, in order to solve the business problem of detecting supply chain failures. The machine learning steps appear to be a common paradigm in the application of machine learning to solve various problems rather than an improvement to machine learning itself.

Applicant’s amendments are believed to be taught by the Najmi reference. Najmi discloses that a self-learning system could monitor performance in subsequent learning cycles to improve performance.
(Najmi, para 0058, “[0058] As shown above, self-learning supply chain system 110 executes a process that redefines a supply chain management problem from generating optimal plans to that of driving optimal performance using supply chain plans as a control signal and a plurality of monitored KPIs that provide closed loop feedback. In an embodiment, self-learning system 110 couples each PDCA cycle to a learning cycle process 502. That is, self-learning system 110 measures the performance of one or more supply chain entities 120 in the supply chain. As an example only and not by way of limitation, in an example where KPI is measuring inventory of an item at one or more supply chain entities 120, and the planner wishes to adjust the inventory of the item, self-learning system 110 determines whether or not satisfactory inventory levels are being achieved by displaying to the planner the levels of the item at the one or more supply chain entities 120 likely to be achieved and actually achieved by each action taken by the planner.”)

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Regarding independent claims 1, 8, and 15, the claimed invention recites an abstract idea without significantly more. The claims recite the abstract idea of predicting and correcting supply chain issues, which is a mental process. Other than reciting a computer, a prediction model, and an interactive element, nothing in the claims precludes the steps from being performed mentally.
But for the computer, model, and element, the limitations of receiving historical data, preparing data, training a model, predicting an event, calculating precision and recall, generating a visualization, filtering alerts, providing a tool for corrective action, and using newer data to train to improve performance constitute a process that, under its broadest reasonable interpretation, could be performed mentally but for the recitation of generic computer elements. If claim limitations, under the broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components, then they fall within the “Mental Processes” grouping of abstract ideas. To the extent the claims recite machine learning (i.e., using and training a “predictive model”), the features are claimed at a high level of generality and do not appear to represent an improvement to machine learning so much as applying an abstract idea using generic computing elements and/or employing mathematical concepts to train an AI model. Further, the above limitations related to predicting and correcting supply chain issues, stripped of the identified additional and insignificant elements, could also be considered a “Method of Organizing Human Activity” relating to managing human behavior and interactions (“Fundamental Economic Practice”). Thus, the claims recite an abstract idea.

The judicial exception is not integrated into a practical application. The computers are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. The additional element(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Simply implementing the abstract idea in a generic computer environment is not a practical application of the abstract idea and does not take the claim out of the mental process or method of organizing human activity grouping. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer, model, and interactive element amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Collecting, analyzing, and displaying information, and receiving and transmitting over a network, are conventional in the computing arts. (MPEP 2106.05(h); see also MPEP 2106.05; Alice v. CLS, “Nearly every computer will include a ‘communications controller’ and ‘data storage unit’ capable of performing the basic calculation, storage, and transmission functions required by the method claims.”). The claims are not patent eligible.

Regarding the dependent claims, these claims are directed to limitations which serve to limit the supply chain prediction and correction steps. The subject matter of claims 2/9/16 (aggregate variables, determine quantities, compute KPIs), 3/10/17 (time to enact corrective action), 4/11/18 (production and transportation alerts), 5/12/19 (display contributive factors), 6/13/20 (identifying needed adjustments), and 7/14 (waterfall chart) appears to add additional steps to the abstract idea, implemented by generic computers. These claims neither introduce a new abstract idea nor additional limitations which are significantly more than an abstract idea. They provide descriptive details that offer helpful context, but have no impact on statutory subject matter eligibility.
Therefore, the limitations of the invention, when viewed individually and as an ordered combination, are directed to ineligible subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C.
103 as being unpatentable over Najmi (US 20160217406) in view of Fan, “Machine Learning Classification Model Evaluation Metrics”, 2/2018, https://shuzhanfan.github.io/2018/02/model-evaluation-metrics/.

Regarding Claim 1: a computer comprising a processor and a memory, the computer configured to: receive historical supply chain data from an archiving system, the archiving system storing historical supply chain data from a supply chain network comprising one or more supply chain entities; prepare the historical supply chain data for a prediction problem. Najmi is directed to a system for learning and correcting supply chain problems. (Najmi, summary, “A system for supply chain performance optimization is disclosed. The system includes a supply chain planning database that receives supply chain data from a transaction system, and communicates the supply chain data to a planning model engine and a risks and assumptions repository that receives supply chain data from the transaction system, and communicates updated supply chain assumptions to the supply chain planning database. The system also includes a persistent problems and work order management repository that communicates supply chain problems and supply chain problems resolutions with the planning model engine, a root cause diagnostic library tangibly that communicates one or more performance deviations with the planning model engine and a planning levers library that determines at least one corrective action to resolve the one or more performance deviations.”)

wherein the prediction problem comprises a classification problem; (Najmi, para 0030, “[0030] For example, the problems in the supply chain inputs may include, but are not limited to, new unforecasted orders, new orders, changes to existing orders or forecasts, changes to in-transit shipments, changes to work in progress or work in process, changes in inventory, new capacity, reduced capacity, changes to external supply, and the like.
In addition, according to one example, these problems may be classified into categories such as, for example, supply changes, inventory changes, capacity changes, demand changes, and the like. Although example categories of problems are described, embodiments contemplate any type of disruptions, plan problems, perturbations, changes, events, or categories of disruptions, perturbations, changes, and/or events, according to particular needs. In this document, the terms “disruptions,” “problems,” “perturbations,” “changes,” or “events” may refer to any positive or negative deviation, condition, pattern, or occurrence within the supply chain plan or during execution of the plan that can motivate action by a supply chain planner.”)

train a prediction model to solve the prediction problem based, at least in part, on the prepared historical supply chain data; (Najmi, para 0072, “In some embodiments, self-learning system 110 persists and mines historical data. In some other, self-learning system 110 uses external data to supplement historical data. In addition, or as an alternative, self-learning system 110 monitors those assumptions and attempts to confirm that those assumptions remain true. If any of the assumptions are false, the supply chain plans generated by the planning model stored in risks and assumptions repository 230 or supply chain planning database 220 will be out of date and inaccurate and self-learning system 110 may adjust the assumptions to correspond to the true value.”)

predict whether one or more supply chain events will occur during a prediction horizon, the one or more supply chain events associated with at least one supply chain entity of the one or more supply chain entities; (Najmi, para 0062, “[0062] FIG. 6B illustrates a performance analysis with KPI monitoring loop 602 of a self-learning system 110. In contrast to the reactive planning with post mortems of FIG.
4B, performance analysis with KPI monitoring loop 602 incorporates automatic KPI monitoring 614 and report generation 616 directly back into a performance analysis 620 during each iteration of a supply chain plan or at any specified time period. Traditional post mortems 422 are conducted sporadically and reactively only after an unfavorable outcome. In other words, self-learning system 110 expects a post mortem and performs one every time self-learning system 110 generates a supply chain plan such that automated KPI monitoring 614 is built into the process of planning and execution.”) generate a master visualization dashboard comprising one or more alerts for the predicted one or more supply chain events and further comprising a model performance visualization; (Najmi, para 0040, “[0040] Root cause diagnostic library 236 comprises a database that stores (1) supply chain performance dashboard data 358, (2) execution collaboration workflows 360, (3) automated plan review workflow 362, (4) plan explainer workflow 364, and (5) plan change analysis workflows 366. Self-learning system 110 provides a planner supply chain performance dashboards 701 by calculating and displaying “Performance to Plan” metrics for production, sales, and/or inventory. In some embodiments, supply chain performance dashboards 701 determine guided analysis paths for augmenting supply chain performance dashboards 701, which enable a planner to identify root causes by navigating from metrics (including top level metrics) to root causes of performance deviations. Self-learning system 110 determines and displays execution collaboration workflows 360 by monitoring and logging published plan execution, which may be overridden by self-learning system 110 prior to accepting the published plan for execution. 
In some embodiments, execution collaboration workflows 360 track the time, place, reason, and/or manner that published plans are overridden, validate and refine plan assumptions, and reduce complexity from published plan compliance analysis. Among other things, automated plan review workflows 362, plan explainers workflows 364, and plan change analysis workflows 366 increase the speed of reviewing, understanding, approving, and publishing plans. Root cause diagnostics library 236 is coupled with existing planning models and engines 212 with communication link 328, however, root cause diagnostics library 236 communicates with other components of self-learning system 110 and/or supply chain system 100, accordingly to particular needs. In some embodiments root cause diagnostics library 236, persistent problems and work order management repository 238, or both store data to be displayed by self-learning system 110 for supply chain performance monitoring with guided root cause analytics 306.”)

receive a selection of an interactive element of the master visualization dashboard to select or input filters to not display one or more alerts based on a criteria of the selected or input filter; and (Najmi, para 0081, “[0081] FIG. 10 (depicted as FIGS. 10A and 10B) illustrates a guided analysis path incorporating a fishbone path 1002. A fishbone path 1002, as displayed on the user workspace 1001, is similar to the fishbone chart 901 described above, but the fishbone path 1002 comprises several additional features. First, a fishbone path links a real demand problem, for example, a late order, via its bill of material to its real root cause, for example, factory work-order delays. Second, a fishbone path 1002 as displayed on the user workspace 1001 comprises an interactive interface to permit a supply chain planner to view data of one or more supply chain entities 120 in real time.
As a planner uses the fishbone path 1002 to navigate through a root-cause analysis, as explained in connection with fishbone chart 901, data, workflows, work orders, and relevant documents may be presented to the planner in a window of the interface, so that the planner can make decisions based on real information and not simply assumptions.”) provide one or more tools for initiating one or more corrective actions to be undertaken in order to resolve one or more underlying causes of the displayed one or more alerts for the predicted one or more supply chain events. (Najmi, para 0083, “[0083] FIG. 11 is a diagram illustrating a plan for action management comprising detecting, triaging, analyzing, resolving and following up on problems during execution. In some embodiments, sensors are planted to monitor execution against a supply chain plan of one or more supply chain entities 120 to detect out of tolerance situations. In some embodiments, self-learning system 110 automatically triages detected problems based on the calculated impact to overall performance metrics. In other embodiments, self-learning system 110 determines individual problems to investigate. In some embodiments, planners use guided analytic paths to analyze root causes of a problem. In some embodiments, planners use corrective action levers and process playbooks to take and record corrective actions. In some embodiments, once a planner takes a set of corrective actions, follow up sensors are planted to monitor the results of the corrective action and confirm if desired outcomes are achieved. In some embodiments, the follow up sensors update task list alerts, action, and delegated direction and history log notes and action history.”) Use newer historical supply chain data to train the prediction model to improve a performance of the prediction model. 
(Najmi, para 0058, “[0058] As shown above, self-learning supply chain system 110 executes a process that redefines a supply chain management problem from generating optimal plans to that of driving optimal performance using supply chain plans as a control signal and a plurality of monitored KPIs that provide closed loop feedback. In an embodiment, self-learning system 110 couples each PDCA cycle to a learning cycle process 502. That is, self-learning system 110 measures the performance of one or more supply chain entities 120 in the supply chain. As an example only and not by way of limitation, in an example where KPI is measuring inventory of an item at one or more supply chain entities 120, and the planner wishes to adjust the inventory of the item, self-learning system 110 determines whether or not satisfactory inventory levels are being achieved by displaying to the planner the levels of the item at the one or more supply chain entities 120 likely to be achieved and actually achieved by each action taken by the planner.”) Najmi does not explicitly disclose calculate precision and recall scores for the prediction model, wherein the precision scores indicate a proportion of predicted supply chain events that actually occur, and wherein the recall scores indicate a proportion of supply chain events that occur will be predicted; Fan is an article discussing machine learning testing. (Fan, p1) Fan discloses that it is crucial to evaluate accuracy, precision and recall scores in the testing of a machine learning classifier model. (Fan, p.2). It would have been obvious to one of ordinary skill in the art before the filing date of the invention to combine Najmi with the testing of Fan with the motivation of evaluating a machine learning model. Id. Regarding Claim 2, Najmi and Fan disclose the system of claim 1. 
wherein the computer is further configured to prepare the historical supply chain data by: aggregate one or more variables at a same granularity level; determine all actual, current, or past quantities in terms of a ratio of an original quantity divided by a plan quantity; and (Najmi, para 0071, “By way of a non-limiting example, if the supply chain plan assumes that the yield of some process is 95%, but the actual performance 610 indicates that the yield is actually 80%, the automated early warning sensors and key assumptions monitoring 660 indicates the assumption is invalid and issues an alert 662. In some embodiments, automated early warning sensors and key assumptions monitoring 660 also provides for monitoring the key assumptions and using statistical process control charts and the like to issue an early warning”)

compute quantity independent key performance indicators. (Najmi, para 0035, “[0035] Self-learning system 110, and in particular, server 210, may store and/or access various rules and parameters 222, static master data 223, dynamic data 225, constraints 224, policies 226, and plan data 228, associated with one or more supply chain entities 120. As discussed above, self-learning system 110 may continuously adjust the supply chain plan to a state of feasibility and/or optimality due to disruptions in the supply chain by continually monitoring any type of data or KPIs using KPI monitors 216 or alerts 206 in order to update a plan as soon as data or KPIs received from supply chain entities 120 indicate that a disruption or plan problem has, will, or is likely to occur. Self-learning system 110 monitors data or KPIs by receiving such information from supply chain entities 120 and detecting out of range limits or patterns that indicate a supply chain plan problem using alerts 206 or KPI monitors 216.”)

Regarding Claim 3, Najmi and Fan disclose the system of claim 1.
wherein the prediction horizon comprises a length of time long enough for one or more supply chain entities affected by the predicted one or more supply chain events to enact a corrective action. (Najmi, para 0062, “[0062] FIG. 6B illustrates a performance analysis with KPI monitoring loop 602 of a self-learning system 110. In contrast to the reactive planning with post mortems of FIG. 4B, performance analysis with KPI monitoring loop 602 incorporates automatic KPI monitoring 614 and report generation 616 directly back into a performance analysis 620 during each iteration of a supply chain plan or at any specified time period. Traditional post mortems 422 are conducted sporadically and reactively only after an unfavorable outcome. In other words, self-learning system 110 expects a post mortem and performs one every time self-learning system 110 generates a supply chain plan such that automated KPI monitoring 614 is built into the process of planning and execution.”; para 0079, “[0079] In some embodiments, self-learning system 110 enables early detection of sources, or suspected sources, of risk and capitalizes on opportunities to start proactively detecting the sources to maximize available reaction time. For each problem that may arise in execution of a supply chain plan, self-learning system 110 looks at the earliest possible detection of the problem and places one or more sensors to monitor the likely sources for the problem. In some embodiments, this increases lead time available to respond to a problem.”) Regarding Claim 4, Najmi and Fan disclose the system of claim 1. wherein the one or more alerts for the predicted one or more supply chain events comprise production system alerts and transportation management system alerts. 
(Najmi, para 0043, “Other corrective actions include, for example, expending material in transport, increasing the priority for a manufacturing lot, utilizing material from a first order to fulfill a second order, marking down products, expediting transportation, adding overtime to increase capacity, and offloading work to alternate resources.”)

Regarding Claim 5, Najmi and Fan disclose the system of claim 1: display on the master visualization dashboard a visualization of a contribution of one or more predictive factors to an overall precision score. See the prior art rejection of claim 1 regarding Fan.

Regarding Claim 6, Najmi and Fan disclose the system of claim 5: wherein the visualization of the contribution of the one or more predictive factors provides identification of which supply chain systems need to be adjusted to avoid a service level failure. See the prior art rejection of claim 1 regarding Najmi.

Regarding Claims 8-13 and 15-20: see the prior art rejections of claims 1-6.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Najmi (US 20160217406) in view of Fan, “Machine Learning Classification Model Evaluation Metrics”, 2/2018, https://shuzhanfan.github.io/2018/02/model-evaluation-metrics/, further in view of Ray, “How to create waterfall chart in Qlikview?”, 2015, https://www.analyticsvidhya.com/blog/2013/12/create-waterfall-chart-qlikview/.

Regarding Claim 7, Najmi and Fan disclose the system of claim 6. Najmi does not explicitly disclose wherein the visualization of the contribution of the one or more predictive factors comprises a waterfall chart. Ray is an article discussing business reports. (Ray, p. 1). Ray discloses that a waterfall chart is a well-known visualization for charting business data. (Ray, p. 3, “The waterfall chart is a one of the finest examples of data visualization. This indicates how an initial / reference value increases / decreases by various factors and reaches the outcome.
Waterfall charts are used widely in: Sales Analysis (Comparison b/w target Vs Actual, identify GAP) Financial Analysis (Profit and Loss) Inventory Analysis”) It would have been obvious to one of ordinary skill in the art before the filing date of the invention to combine Najmi and Fan with the report of Ray with the motivation of evaluating business performance. Id.

Regarding Claim 14, see the prior art rejection of claim 7.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C CHEIN whose telephone number is (571)270-7985. The examiner can normally be reached Monday-Friday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Florian Zeender, can be reached at (571) 272-6790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALLEN C CHEIN/Primary Examiner, Art Unit 3627
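The precision and recall limitation at issue in the §103 rejection maps onto standard classifier evaluation: precision is the proportion of predicted supply chain events that actually occur, and recall is the proportion of occurring events that were predicted. A minimal sketch, using invented event labels (the names are illustrative, not from the record):

```python
def precision_recall(actual, predicted):
    """Precision = share of predicted events that occurred;
    recall = share of occurring events that were predicted."""
    tp = len(actual & predicted)  # events both predicted and occurred
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical service-level failures over one prediction horizon.
actual = {"DC-7 stockout", "Line-2 delay", "Carrier-A miss"}
predicted = {"DC-7 stockout", "Line-2 delay", "Plant-3 shortfall", "Carrier-B miss"}

p, r = precision_recall(actual, predicted)
print(p, r)  # 2 of 4 predictions occurred (0.5); 2 of 3 failures were caught (~0.667)
```

A model can trivially maximize recall by alerting on everything, which is why the claims' pairing of the two metrics (and the dashboard filtering of alerts) matters for a usable alerting system.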
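Claims 2/9/16, as summarized in the rejection, recite expressing quantities as a ratio of an original (actual) quantity to a plan quantity in order to compute quantity-independent KPIs. A minimal sketch of that normalization, using invented order data:

```python
# Hypothetical order lines: planned vs. actually shipped units.
orders = [
    {"sku": "A100", "plan_qty": 500, "actual_qty": 450},
    {"sku": "B200", "plan_qty": 120, "actual_qty": 120},
    {"sku": "C300", "plan_qty": 80,  "actual_qty": 20},
]

# Express each quantity as actual/plan, so lines with very different
# volumes become comparable on one scale (a quantity-independent KPI).
for o in orders:
    o["fill_ratio"] = o["actual_qty"] / o["plan_qty"]

# Aggregate at a single granularity level (here: one number for the set).
avg_fill = sum(o["fill_ratio"] for o in orders) / len(orders)
print(round(avg_fill, 3))
```

The ratio form is what makes the KPI "quantity independent": a 30-unit shortfall on an 80-unit plan is a far worse signal than the same shortfall on a 5,000-unit plan, and the ratio encodes that directly.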

Prosecution Timeline

Feb 21, 2024
Application Filed
Nov 07, 2025
Non-Final Rejection — §101, §103
Feb 13, 2026
Response Filed
Feb 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586084: DATA ANALYTICS TOOL. Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579512: OPTIMIZATION OF ITEM AVAILABILITY PROMPTS IN THE CONTEXT OF NON-DETERMINISTIC INVENTORY DATA. Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579513: DYNAMIC PRODUCTION BILL OF MATERIALS SYSTEM. Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572942: Intelligent Management of Authorization Requests. Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572918: COMMODITY REGISTRATION SYSTEM. Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 84% (+40.3%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 429 resolved cases by this examiner. Grant probability derived from career allow rate.
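The projection arithmetic can be reproduced from the stated career figures, assuming the interview lift is expressed in percentage points added to the base rate:

```python
# Career allow rate: 189 granted of 429 resolved cases.
granted, resolved = 189, 429
base_rate = granted / resolved * 100
print(round(base_rate, 1))  # 44.1, shown as 44% on the dashboard

# With-interview projection: base rate plus the reported +40.3-point lift.
with_interview = base_rate + 40.3
print(round(with_interview))  # 84
```

This matches the dashboard's 44% base grant probability and 84% with-interview figure to within rounding.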
