Prosecution Insights
Last updated: April 19, 2026
Application No. 17/643,286

PLANNING SYSTEM AND METHOD FOR PROCESSING WORKPIECES

Non-Final OA (§103, §112)
Filed: Dec 08, 2021
Examiner: HOCKER, JOHN PAUL
Art Unit: 2189
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Aurora Flight Sciences Corporation
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 58% (84 granted / 146 resolved; +2.5% vs TC avg)
Interview Lift: +29.7% among resolved cases with interview (strong)
Avg Prosecution: 3y 9m (typical timeline)
Currently Pending: 16
Total Applications: 162 (across all art units)
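The headline figures above can be cross-checked with simple arithmetic. The sketch below is a hypothetical helper, not part of any analytics platform; the 57.3% without-interview rate is an assumption inferred from the stated 87% with-interview rate and the +29.7% lift.

```python
# Hypothetical cross-check of the examiner metrics shown above.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allowance rate when an interview was held."""
    return rate_with - rate_without

print(f"Career allow rate: {allowance_rate(84, 146):.1f}%")      # 57.5%, shown as 58%
print(f"Interview lift: {interview_lift(87.0, 57.3):+.1f} pts")  # +29.7 pts
```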

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 36.3% (-3.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 146 resolved cases
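The "vs TC avg" deltas above imply a Tech Center baseline for each statute. A back-of-envelope reconstruction follows; the 40.0% baselines are inferred from the stated deltas, not independently sourced.

```python
# Examiner's statute-specific rates (from this page) and inferred TC baselines.
examiner = {"101": 15.9, "103": 36.3, "102": 20.0, "112": 16.6}
tc_avg = {s: 40.0 for s in examiner}  # inferred: each stated delta is relative to 40.0

deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
print(deltas)  # {'101': -24.1, '103': -3.7, '102': -20.0, '112': -23.4}
```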

Office Action

§103, §112
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 18, 2025 has been entered.

Status of Claims

Claims 1, 10 and 11 are amended (by the submission dated 09 November 2025). Claims 1-20 are pending. Claims 1-20 are rejected (Non-Final Rejection).

Response to Amendments/Arguments

Applicant’s amendments and arguments/remarks referred to below were filed on 09 November 2025. Applicant’s amendments to Para. [0089] of the specification and claims 1, 10 and 11 obviate the previous respective specification and claim objections. For these reasons, the previous specification and claim objections have been withdrawn.

Applicant’s amendments to claims 1, 10 and 11 do not fully resolve the prior 35 U.S.C. § 112(a) rejections because the claim amendments both remove “new matter” and add different “new matter”. See the detailed 35 U.S.C. § 112(a) rejections below.

Regarding 35 U.S.C. § 103, Applicant’s arguments filed 09 November 2025 with respect to the rejections under 35 U.S.C. § 103 have been fully considered but are not persuasive as to all of the prior art rejections. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
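For readers following the dispute, the amended limitation at issue ("recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions") can be pictured with a minimal sketch. This is an illustrative model of the claim language only, with hypothetical names throughout; it is not Applicant's implementation and not anything disclosed by the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class TransitionRecord:
    machine_id: str
    from_state: str
    to_state: str
    start: float     # simulation clock when the timed action began
    duration: float  # time the timed action took

@dataclass
class StateMachine:
    machine_id: str
    state: str = "idle"
    log: list = field(default_factory=list)  # ordered sequence of transitions

    def perform_timed_action(self, to_state: str, clock: float, duration: float) -> float:
        """Record the transition and its duration; return the advanced clock."""
        self.log.append(TransitionRecord(self.machine_id, self.state, to_state, clock, duration))
        self.state = to_state
        return clock + duration

agv = StateMachine("agv-420")
t = agv.perform_timed_action("picking up", clock=0.0, duration=2.0)
t = agv.perform_timed_action("transiting/loaded", clock=t, duration=5.0)

# The per-machine log carries both the order and the duration of each timed action...
print([(r.to_state, r.duration) for r in agv.log])  # [('picking up', 2.0), ('transiting/loaded', 5.0)]
# ...while the simulated completion time is reported separately, after the run.
print("simulated completion time:", t)  # 7.0
```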
Specifically, Applicant, at Page 21 of the Remarks, argues “Varney is not understood to disclose outputting a detailed log of the durations and order of the process histories of each machine or agent in the CDN” (emphasis added). However, the pertinent claim language recites “recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order”.

As shown in the Section 103 rejections below, VARNEY discloses recording, for each state machine, a state transition log comprising an ordered sequence (state changes may be logged as events, Para. [0236] of VARNEY; See also state changes at a local agent that are applied by Autognome (S0) are logged as events, Para. [1569]; See also state machine defines a list of states with commands that Autognome (S0) can issue to move the service from one state to another, Para. [0204]; [moving/changing from one state/event to another is interpreted as a transition]), and BHATTACHARYA teaches recording, for each state machine, a duration of timed actions performed by that state machine (FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 39 of BHATTACHARYA shows four card elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para. [0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four card elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para. [0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA; See also a user may select one output of interest (duration), Para. [0374] of BHATTACHARYA).

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.

Claim 1 has been amended to recite (A) “recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order” (emphasis added) and (B) “outputting … a simulated completion time for the simulation and the state transition log …, enabling the user to evaluate resource utilization” (emphasis added).

Regarding limitation (A), claim 1 is being interpreted to require that the “state transition log compris[es] … duration of timed actions”. Applicant indicates that support for this limitation is allegedly provided at Paras. [0046], [0047], [0070] & [0071] of the as-filed specification and, specifically, in a piecemeal fashion: the “recording … a state transition log” is supported at Para. [0071] of the specification, and the “ordered sequence and duration of timed actions” is supported at Paras. [0046], [0047] & [0070] of the specification. However, as explained below, Paras. [0046], [0047], [0070] & [0071] do not show a log comprising durations (e.g., an amount of time required).

Paras. [0046], [0047] & [0070] of the specification do not mention “record” or “log” (and hence do not support the claim amendment). Instead, Para. [0046] of the specification indicates the duration of timed actions is defined in configuration files. Similarly, Para. [0047] of the specification indicates the amount of time required to perform each timed action is described in worker files. Para. [0070] of the specification discusses priority and importance as they relate to the order of performing timed actions (but also does not address “recording” or a “log”).

Para. [0071] of the specification recites “the simulation manager 220 outputs a state transition log of all of the state transitions recorded for each state machine 256 during the processing of the workpieces 452 in the workpiece order and the simulated completion time.” Similarly, Para. [0096] of the specification recites “method additionally includes step 506 of outputting a simulated completion time for processing the workpiece order, and a state transition log of the state transitions recorded for each state machine 256 during processing of the workpiece order” (emphasis added). Para. [0071] of the specification also recites “[t]he simulation includes … each time one of the state machines 256 performs a timed action … recording the state transitions associated with the timed action” (emphasis added) and “[t]he simulation includes repeating the steps … until all of the workpieces 452 have been processed … After performing the above-noted steps, the simulation manager 220 outputs a simulated completion time for processing the workpiece order … [i]n addition, the simulation manager outputs a state transition log of all of the state transitions recorded …” (emphasis added).

That is, Para. [0071] of the specification discloses a “simulated completion time for processing the workpiece order”, which could be a “duration”, but Para. [0071] clearly indicates the outputting of the simulated completion time occurs after the repeated recording of the state transitions. Thus, the simulated completion time discussed in the specification, even if it is a “duration”, is not tied to a single state transition but rather is tied to processing the workpieces via a specific order. Viewing Applicant’s support Paras. [0046], [0047], [0070] & [0071] of the as-filed specification (as discussed above), the original disclosure does not support recording a duration (the completion time) as a part of the log. In contrast, at least Paras. [0075] & [0091] of the as-filed specification appear to indicate the outputting of the simulated completion time is separate from the log. The log could conceivably include a timestamp of each state transition (e.g., similar to Item 150 of FIG. 20A), but Examiner is unpersuaded this new “log includes durations”-interpreted language is supported by the original disclosure.

Regarding limitation (B) {“outputting … a simulated completion time for the simulation and the state transition log …, enabling the user to evaluate resource utilization” (emphasis added)}, Applicant cites to Paras. [0071] & [0091] of the specification as allegedly supporting the amendment. However, Para. [0071] does not mention “utilization”, and Para. [0091] indicates “an integer value representing utilization” may be displayed, but this utilization value is defined in terms of the predicted total amount of time working minus the actual time spent working. Further, Para. [0091] indicates efficiency is “calculated as the predicted total working time minus the actual total working time of the state machine.” Thus, it is not clear that the specification supports determining utilization based on the simulated completion time and log. That is, the specification does not seem to support “outputting … a simulated completion time for the simulation and the state transition log …, enabling the user to evaluate resource utilization” (emphasis added).

Accordingly, Applicant has not particularly pointed out where the newly added claim limitations find support in the original specification, and claim 1 is rejected for failing to comply with the written description requirement. Claims 10 and 11 have substantially similar limitations as recited in claim 1; therefore, they are rejected under 35 U.S.C. 112(a) for the same reasons. Claims 2-9 and 12-20 depend respectively from one or more of rejected claims 1, 10 and 11; therefore, claims 2-9 and 12-20 are also rejected under the same rationale since these claims inherit the respective deficiencies of claims 1, 10 and 11.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
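The indefiniteness question the Examiner raises next (whether the claimed "duration of timed actions" is the same thing as "a simulated completion time") can be made concrete with a toy example using hypothetical numbers: when state machines act in parallel, the completion time is the makespan of the whole run, not any one machine's logged durations.

```python
# Toy illustration (hypothetical durations): per-machine logged durations vs.
# the overall simulated completion time when machines run in parallel.

logs = {
    "machine-A": [3.0, 4.0],  # ordered durations of A's timed actions
    "machine-B": [6.0],       # ordered durations of B's timed actions
}

busy_time = {m: sum(durations) for m, durations in logs.items()}

# If both machines start at time 0 and work without gaps, the completion
# time for the order is the makespan, i.e. the largest per-machine total.
completion_time = max(busy_time.values())

print(busy_time)        # {'machine-A': 7.0, 'machine-B': 6.0}
print(completion_time)  # 7.0: equals A's total here, but it is a property of the whole run
```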
Claim 1 recites (A) “recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order” (emphasis added) and (B) “outputting … a simulated completion time for the simulation and the state transition log …, enabling the user to evaluate resource utilization” (emphasis added). However, it is not clear whether the “duration of timed actions … during simulated processing of the workpiece order” is the same as “a simulated completion time” (because Applicant includes Para. [0071] in the support paragraphs). Although a “simulated completion time” could correspond to a duration, Applicant is using both “duration … simulated” and “simulated completion time”, and so it is not clear whether these are the same items. Accordingly, claim 1 is rejected under 35 U.S.C. § 112(b) for indefiniteness.

Claims 10 and 11 have substantially similar limitations as recited in claim 1; therefore, they are rejected under 35 U.S.C. § 112(b) for the same reasons. Claims 2-9 and 12-20 depend respectively from one or more of rejected claims 1, 10 and 11; therefore, claims 2-9 and 12-20 are also rejected under the same rationale since these claims inherit the respective deficiencies of claims 1, 10 and 11.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-18 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over CHAU et al. (U.S. Patent Application Publication No. 2022/0171373 A1) in view of VARNEY (U.S. Patent Application Publication No. 2014/0173135 A1), and further in view of BHATTACHARYA (U.S. Patent Application Publication No. 2021/0241859 A1).

Regarding claim 1, CHAU discloses a production utilization planner (PUP) core (model for scheduling [planning] to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU; See also FIG. 16 shows an example of a tool comprising a plurality of processing modules (e.g., electroplating cells), Para. [0074] of CHAU) for a manufacturing cell (semiconductor manufacturers use one or more substrate processing tools to perform deposition, etching, cleaning, and/or other substrate treatments during fabrication of semiconductor wafers, Para. [0004] of CHAU; See also system for processing semiconductor substrates in a tool comprising a plurality of processing chambers configured to process the semiconductor substrates according to a recipe, Para. [0006] of CHAU; [processing chambers for processing semiconductor substrates are interpreted as manufacturing cells]), including a processor and a memory storing instructions that, when executed by the processor, cause the PUP core to (system … comprises a processor and memory storing instructions for execution by the processor, Para.
[0006] of CHAU) perform as:

a simulation manager simulating the processing of workpieces arranged in a workpiece order by performing the following steps (instructions are configured to simulate … a plurality of processing scenarios and scheduling parameters for the plurality of processing scenarios for processing the semiconductor substrates in the plurality of processing chambers according to the recipe, Para. [0006] of CHAU; See also simulator 1404 simulates the tool configuration and simulates the processing of the wafers [workpieces] in the tool, Para. [0196] of CHAU; [wafers are interpreted as workpieces in this context]; [Examiner’s Note: the simulation “by performing the following steps” is disclosed when “the following steps” are disclosed]):

creating, at initiation of a simulation, an instance of a simulation controller (simulator 1404 may be implemented using a computing device such as a computer … storing one or more computer programs that simulate the operating and processing environment of a tool (e.g., the tool 1406) on the computer … the computer programs additionally comprise instructions for generating, training, and validating the neural networks 1410 and the scheduler level neural network 1412 of the model 1402 on the simulator 1404 as explained below with reference to FIGS. 15A and 15B, Para. [0191] of CHAU; [Examiner’s Note: Applicant’s claim limitation of “creating … an instance of a simulation controller” is interpreted as creating a software instance that controls simulation, and not a piece of hardware]) and an instance of a software model of the manufacturing cell (a model for scheduler pacing is built using nested neural networks or other machine learning algorithms … the model is initially built, trained, and tested offsite using simulation, Para. [0181] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift … the onsite training also adjusts the model for any recipe changes and/or tool hardware changes, Para. [0181] of CHAU; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. [0156] of CHAU; [because the model is tool-specific, and the tool’s system software includes a controller, the creation/building of the model is interpreted to also create an instance of a simulation tool controller]) having state machines (the model 1204 includes a deep neural network that is trained using a reinforcement learning method as explained in further detail with reference to FIG. 13; reinforcement learning involves an agent, a set of states S and a set A of actions per state, and by performing an action ‘a’ from the set A, the agent transitions from state to state, Para. [0165] of CHAU; See also the reinforcement learning method used by the model 1204 can include Q-learning … Q-learning finds an optimal policy for any finite Markov decision process (FMDP), Para. [0166]; [a Markov decision process is interpreted as defining a set of states and transitions between them, i.e., a state machine]) configured to perform timed actions on the workpieces (the model is trained using data collected from preventive maintenance operations (PMs), recipe times, and wafer-less auto clean (WAC) times as inputs to the model, Para. [0081] of CHAU; [wafers and/or substrates are interpreted as the workpieces in this semiconductor/etching context]; See also wafer wait times and process time recipe, Para. [0080] of CHAU; See also one neural network is used per robot to predict the transfer times for each robot, Para. [0179]; See also wait time is an amount of time wafers have to wait after processing of the wafers is completed in a processing module until the processing of the wafers can begin in a next processing module, Para. [0188] of CHAU; See also the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time, Para. [0146] of CHAU; [Examiner’s Note: etching/processing/machining time, transfer/transit time, wait time, clean time and idle time are the same and/or similar to the states/processes/timed actions discussed in Applicant’s specification at Para. [0067], which recites “AGV 420 has the states of ‘idle,’ ‘charging,’ ‘transiting/unload,’ ‘waiting/pickup,’ ‘picking up,’ ‘transiting/loaded,’ ‘waiting/drop off,’ and ‘dropping off,’” and Para. [0093], which recites “[e]xamples of timed actions performed by workers 258 include machining a workpiece 452 via a robotic device 262 in the machining subcell 402, cleaning a workpiece 452”]), and each state machine has a state during the timed actions (as discussed above, Paras. [0080], [0081], [0146], [0179] and [0188] of CHAU disclose preventive maintenance operations (PMs), clean times, idle times, wait times, process time recipe and recipe times, and transfer times), and a state transition from state to state (agent transitions from state to state, Para. [0165] of CHAU);

determining, via the simulation controller, a next timed action to be performed by the state machines (instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. [0170] of CHAU);

incrementing the simulation to the next timed action (the instructions are configured to further train the model incrementally based on data generated during the processing of the semiconductor substrates and the additional semiconductor substrates in the semiconductor processing tool, Para. [0056] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift, Para. [0181] of CHAU; Regarding “next state”, see also instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. [0170] of CHAU);

updating the software model and the simulation controller each time a state machine performs a timed action (the incremental training of the model discussed above in relation to Paras. [0056] and [0181] of CHAU corresponds to updating the model; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. [0156] of CHAU; See also third phase includes online real-time and unsupervised learning … continuous (i.e., ongoing) training is needed since process recipes and/or hardware can change … when such changes occur, the model needs to adapt to the changes, which can be accomplished by continuous training, Para. [0186] of CHAU);

repeating the steps of determining the next timed action, incrementing the simulation, and updating the software model and the simulation controller, until all of the workpieces have been processed (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170]; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. [0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, and the determining, incrementing and updating have been mapped above]; Regarding “until all of the workpieces have been processed”, see total processing time for all the wafers, Para. [0128] of CHAU; [all of the wafers is interpreted to correspond to all of the workpieces, and total processing time for all of the wafers/workpieces is interpreted to correspond to an indication that all of the wafers have been processed]);

outputting, for review by a user, a simulated completion time for the simulation (output predictions for program execution times for the processing modules (e.g., processing modules 1602 shown in FIG. 16) and predictions for the robot transfer times (e.g., for robots 1610 and 1614 shown in FIG. 16), Para. [0189] of CHAU; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. [0128] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. [0164] of CHAU) … during processing of the workpiece order (the nested neural network based model is initially designed and trained offline using simulated data and then trained online using real tool data for predicting wafer routing path and scheduling, Para. [0172] of CHAU; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU), enabling the user to evaluate resource utilization and determine adjustments to at least one of the following: a physical layout of workers in the manufacturing cell, a worker schedule, and/or a worker behavior, to thereby increase worker efficiency (smart scheduler ensures manufacturing efficiency can be greater than 97% … smart scheduler can optimize the scheduling parameter values by taking into account preventive maintenance that may have to be skipped or delayed to meet manufacturing deadlines, Para.
[0088]; See also a model is developed … for predicting wafer [workpiece] routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput]; [Examiner’s note: “worker” is interpreted as human or robotic based on applicant’s specification, at Para. [0034]]; See also monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para. [0224] of CHAU). Although CHAU teaches outputting state transitions during processing of the workpiece order (Para. [0089], [0169] & [0172] of CHAU) and logs (Paras. [0102], [0112] & [0164]), and an event (from the event logs of CHAU) may be considered a trigger for a state transition, it is arguable that CHAU does not appear to explicitly verbatim disclose recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order; repeating the recording of the state transition log for each state machine, and outputting, for review by a user, a state transition log for each state machine during processing of the workpiece order. VARNEY, however, is in the same field of software that interacts with configuration and state information (Para. [0166] of VARNEY) and teaches recording, for each state machine, a state transition log comprising an ordered sequence (state changes may be logged as events, Para. [0236] of VARNEY; See also state changes at a local agent that are applied by Autognome (S0) are logged as events, Para. [1569] of VARNEY; See also state machine defines a list of states with commands that Autognome (S0) can issue to move the service from one state to another, Para. 
[0204] of VARNEY; [moving/changing from one state/event to another is interpreted as a transition]); repeating the recording of the state transition log for each state machine (state changes may be logged as events … event streams can be reduced in the usual fashion to get global, real-time feedback on the changes taking place in the network, Para. [0236] of VARNEY; [event streams are interpreted as plural, i.e., more than one stream is interpreted as repeating the event stream/log]), and outputting, for review by a user, a state transition log for each state machine (state changes may be logged as events … event streams can be reduced in the usual fashion to get global, real-time feedback on the changes taking place in the network, Para. [0236] of VARNEY; [feedback provided is interpreted as outputted]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the event log of CHAU to include recording and/or outputting of logs of changes/transitions of states/state events as in VARNEY for the purpose of obtaining global, real-time feedback on the changes taking place (Para. [01569] of VARNEY). In addition, Para. [0127] of CHAU recommends further training the model based on data gathered from “other scenarios” and “other tools”, which would motivate a person having ordinary skill to consider other scenarios, such as VARNEY. In addition, BHATTACHARYA is in the same field of Design optimisation, verification or simulation (CPC class G06F30/20) and teaches recording, for each state machine, a duration of timed actions performed by that state machine (FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 
39 of BHATTACHARYA shows four cards elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para. [0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four cards elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para. [0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA; See also a user may select one output of interest (duration), Para. [0374] of BHATTACHARYA). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the stochastic/random-based batch simulation method of CHAU as modified to record and/or output durations as in BHATTACHARYA for the purpose of allowing a user to evaluate simulated designs, and identify, based on user interactions with the interface, user preferences for designs, preferences for design parameters, optimality of designs, and the like (Para. [0339] of BHATTACHARYA). Regarding claim 2, CHAU as modified discloses the PUP core of Claim 1, wherein the instructions, when executed by the processor, cause the PUP core to perform as a batch analysis tool (the trained model uses reinforced learning to handle batch/multiple substrates in various processing scenarios, Paras. 
[0080]-[0081] of CHAU) performing a batch analysis on a quantity of iterations of the workpiece order (batch (multiple substrates) processing tools used for multiple parallel material deposition processes with restrictions on wafer wait times, pacing a scheduler of a tool to achieve best throughput and least wafer wait time, Para. [0080] of CHAU; See also to improve the accuracy of scheduler pacing used in tools for multiple parallel material deposition (e.g., multi-layer plating) processes, the present disclosure proposes a machine learning method based on nested neural networks for accurately predicting scheduler pacing for different processes, Para. [0089] of CHAU) to simulate, by performing the following: randomizing the workpiece order by arranging the workpieces into a different ordering than previously simulated (Q-learning can handle problems with stochastic transitions and rewards without requiring adaptations … Q-learning finds an optimal policy for any finite Markov decision process (FMDP) … Q-learning maximizes the expected value of the total reward over all successive steps, starting from the current state, Para. [0166] of CHAU; [Examiner’s note: stochastic means randomly determined and a Markov process is a stochastic/random process]; See also simulator is used to simulate, using realistic transfer times in actual tools, various scheduling scenarios and wafer routing paths that may be feasible in real tools … the simulator performs these simulations based on hardware configurations of different tools and based on various processes that can be used in the tools for processing wafers, Para. 
[0183] of CHAU); performing, using the simulation manager, a simulation of the randomized workpiece order (Q-learning can handle problems with stochastic transitions and rewards without requiring adaptations … Q-learning finds an optimal policy for any finite Markov decision process (FMDP) … Q-learning maximizes the expected value of the total reward over all successive steps, starting from the current state, Para. [0166] of CHAU; [Examiner’s note: stochastic means randomly determined and a Markov process is a stochastic/random process]; See also simulator is used to simulate, using realistic transfer times in actual tools, various scheduling scenarios and wafer routing paths that may be feasible in real tools … the simulator performs these simulations based on hardware configurations of different tools and based on various processes that can be used in the tools for processing wafers, Para. [0183] of CHAU); repeating the steps of randomizing the workpiece order, and performing a simulation of the randomized workpiece order, until all of the iterations have been completed (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170] of CHAU; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. [0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, and the repeatable operations (randomizing the order and performing the simulation) have been mapped above]; Regarding “until all of the iterations have been completed”, see total processing time for all the wafers, Para.
[0128] of CHAU; [total processing time for all of the wafers is interpreted to correspond to all of the iterations have been completed/processed]); and outputting a batch analysis list of the simulated completion time for each randomized workpiece order (success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. [0128] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. [0164] of CHAU; See also the trained model uses reinforced learning to handle batch/multiple substrates in various processing scenarios, Paras. [0080]-[0081] of CHAU; See also the outputs predicted by the model are checked against the training data, and at 708, the model parameters and/or network technology are adjusted to produce better matching between the model's predictions and the actual data, and at 710, whether the model meets predetermined criteria is determined, Para. [0145] of CHAU; See also next sentence: for example, the predetermined criteria include determining whether the model can compensate for tool-to-tool variations and for same-tool performance drift, and whether the model can optimize for unavailable PMs, and the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time (e.g., less than 2%) and high manufacturing efficiency (e.g., greater than 97%), Para. 
[0146] of CHAU; [the predicted outputs of the model which match with a criteria based on total processing time are interpreted as corresponding to a batch list of simulated completion/processing times]; See also Q-learning can handle problems with stochastic transitions and rewards without requiring adaptations … Q-learning finds an optimal policy for any finite Markov decision process (FMDP) … Q-learning maximizes the expected value of the total reward over all successive steps, starting from the current state, Para. [0166] of CHAU; [Examiner’s note: stochastic means randomly determined and a Markov process is a stochastic/random process]; See also simulator is used to simulate, using realistic transfer times in actual tools, various scheduling scenarios and wafer routing paths that may be feasible in real tools … the simulator performs these simulations based on hardware configurations of different tools and based on various processes that can be used in the tools for processing wafers, Para. [0183] of CHAU; See also FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 39 of BHATTACHARYA shows four card elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para. [0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four card elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para.
[0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA).

Regarding claim 3, CHAU as modified discloses the PUP core of Claim 2, wherein the instructions, when executed by the processor, cause the PUP core to perform as an uncertainty analysis tool performing an uncertainty analysis on a plurality of randomized workpiece orders selected from the batch analysis list, by performing the following ([Examiner’s Note: the uncertainty analysis “by performing the following” is disclosed when “the following [steps]” are disclosed]; See also Q-learning can handle problems with stochastic transitions and rewards without requiring adaptations … Q-learning finds an optimal policy for any finite Markov decision process (FMDP) … Q-learning maximizes the expected value of the total reward over all successive steps, starting from the current state, Para. [0166] of CHAU; [Examiner’s note: stochastic means randomly determined and a Markov process is a stochastic/random process]): changing a value of at least one timed action of at least one of the state machines (the instructions are configured to adjust the model for any changes to the recipe, the semiconductor processing tool, or both, Para. [0057] of CHAU; See also the predetermined criteria include determining whether the model can compensate for tool-to-tool variations and for same-tool performance drift, and whether the model can optimize for unavailable PMs, Para.
[0146] of CHAU; [variations and performance drift are interpreted as changes to the tool’s performance, i.e., changes to the time of the processes/timed actions]; See also next sentence: the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time (e.g., less than 2%) and high manufacturing efficiency (e.g., greater than 97%), Para. [0146] of CHAU; See also next paragraph: FIG. 8 shows a method 800 for validating the model in further detail … the total dataset is divided into one final test set and N other subsets, where N is an integer greater than one … each model is trained on all but one of the subsets to get N different estimates of the validation error rate, Para. [0148] of CHAU); performing, using the simulation manager, a simulation of one of the workpiece orders using the changed timed action (the instructions are configured to adjust the model for any changes to the recipe, the semiconductor processing tool, or both, Para. [0057] of CHAU; See also simulating the plurality of processing scenarios includes data generated based on a configuration of the tool, wafer-flow types, run scenarios, recipe times, and wafer-less auto clean times obtained from the tool, Para. [0043] of CHAU; See also additional training data are generated using simulations to cover various processing scenarios used by the semiconductor manufacturers using the tools, Para. [0082] of CHAU); updating the simulated completion time for the workpiece order as a result of the changed timed action (the instructions are configured to adjust the model for any changes to the recipe, the semiconductor processing tool, or both, Para. [0057] of CHAU; See also self-exploration process uses the discrete event simulator to automate efforts to find the best possible way to operate a system (e.g., to find the best path in which to move a wafer through a tool) at optimum throughput performance, Para. 
[0086] of CHAU; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. [0128] of CHAU; See also the nested neural network based model is initially designed and trained offline using simulated data and then trained online using real tool data for predicting wafer routing path and scheduling, Para. [0172] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. [0164] of CHAU; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU); repeating, for every workpiece order, the steps of changing the value of at least one timed action, performing the simulation of the workpiece order, and updating the simulated completion time (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170] of CHAU; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. 
[0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, and the repeatable operations (changing the value, performing the simulation, and updating the simulated completion time) have been mapped above]); and identifying, from among the workpiece orders subjected to the uncertainty analysis, the workpiece order that has the shortest simulated completion time (using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU; [the fastest throughput is interpreted to correspond to the shortest simulated completion time]; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, and whether a manufacturing efficiency (actual/theoretical cycle time) can be high (e.g., greater than 97%) for each recipe, Para. [0128] of CHAU; See also above discussion regarding simulation/simulator of CHAU involving the model).

Regarding claim 4, CHAU as modified discloses the PUP core of Claim 3, wherein the instructions, when executed by the processor, cause the PUP core to perform as a controller analysis tool, coupled to a plurality of simulation controllers (processing chamber controllers 130 associated with the processing chambers 104 generally follow a recipe that specifies the timing of steps, process gases to be supplied, temperature, pressure, RF power, and so on, Para.
[0094] of CHAU), each of the simulation controllers having a different set of rules for determining the order in which the timed actions are performed on the workpieces (a recipe defines sequencing, operating temperatures, pressures, gas chemistry, plasma usage, parallel modules, periods for each operation or sub-operation, substrate routing path, and/or other parameters, and the substrates may be transferred between two or more processing chambers in a particular sequence to undergo different treatments, Para. [0005] of CHAU; See also recipes with more processing layers can have longer processing times, Para. [0175] of CHAU; [Paras. [0005] & [0175] of CHAU indicate that there are recipes with different processing layers, and the recipes define the sequences/order; hence it is interpreted that the different recipes with different processing layers have different processing sequences/orders]), wherein the controller analysis tool evaluates the effect of each one of the simulation controllers on the completion time for processing the workpieces, by performing the following ([Examiner’s Note: the evaluation “by performing the following” is disclosed when “the following [steps]” are disclosed]; See also each model is trained on one partition and is evaluated on the remaining partitions ... validation scores are assigned for each evaluation, Para. [0147] of CHAU; See also a recipe defines sequencing, operating temperatures, pressures, gas chemistry, plasma usage, parallel modules, periods for each operation or sub-operation, substrate routing path, and/or other parameters, and the substrates may be transferred between two or more processing chambers in a particular sequence to undergo different treatments, Para. [0005] of CHAU; See also recipes with more processing layers can have longer processing times, Para. [0175] of CHAU; [Paras.
[0005] & [0175] of CHAU indicate that there are recipes with different processing layers, and the recipes define the sequences/order; hence it is interpreted that the different recipes with different processing layers have different processing sequences/orders]): performing a batch analysis for simulating a plurality of workpiece orders (batch (multiple substrates) processing tools used for multiple parallel material deposition processes with restrictions on wafer wait times, pacing a scheduler of a tool to achieve best throughput and least wafer wait time, Para. [0080] of CHAU; See also to improve the accuracy of scheduler pacing used in tools for multiple parallel material deposition (e.g., multi-layer plating) processes, the present disclosure proposes a machine learning method based on nested neural networks for accurately predicting scheduler pacing for different processes, Para. [0089] of CHAU), using one of the simulation controllers previously unused in a simulation ([Examiner’s Note: Claims 4 and 15’s claim limitation of “one of the simulation controllers previously unused in a simulation” is interpreted to be any simulation controller because “a simulation” could be any simulation, such as “a simulation” scheduled in the future, or “a simulation” related to driving a vehicle. “A simulation” is a broad term that could be any simulation]); saving, for each workpiece order simulated via the batch analysis, the simulated completion time using the simulation controller (FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 39 of BHATTACHARYA shows four card elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para.
[0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four card elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para. [0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA; See also a user may select one output of interest (duration), Para. [0374] of BHATTACHARYA); determining, for the simulation controller, the workpiece order that has a shorter simulated completion time than 90 percent of all of the workpiece orders simulated using the simulation controller (a design may be globally optimum if the design is optimal with respect to possible design options for one or more criteria. In embodiments, a design may be globally optimum if the design is optimal with respect to a large percentage (such as 80% or more) of possible design options for one or more criteria, Para. [0148] of BHATTACHARYA; [80% or more is interpreted as including 90 percent]; See also concentrating recommendations and design analysis on designs on or near the convex hull greatly reduces the number of designs that need to be examined … in some cases only one or two percent of the total simulated designs need to be considered when initial design recommendations provided by the platform are on or near the convex hull, Para. [0755] of BHATTACHARYA; See also designs may minimize duration: trial designs that maximize or minimize other design goals, such as the probability of success (POS), discounted cost, and study duration, Para.
[0347] of BHATTACHARYA); repeating, for each simulation controller until all simulation controllers have results, the steps of performing the batch analysis, saving the simulated completion time, and determining the workpiece order that has the shorter simulated completion time (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170] of CHAU; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. [0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, which could include the modified functions of performing the batch analysis, saving the simulated completion time, and determining the workpiece order that has the shorter simulated completion time]); performing, for each simulation controller, the uncertainty analysis on each workpiece order that has the shorter simulated completion time (Q-learning can handle problems with stochastic transitions and rewards without requiring adaptations … Q-learning finds an optimal policy for any finite Markov decision process (FMDP) … Q-learning maximizes the expected value of the total reward over all successive steps, starting from the current state, Para. [0166] of CHAU; [Examiner’s note: stochastic means randomly determined and a Markov process is a stochastic/random process]; See also the instructions are configured to adjust the model for any changes to the recipe, the semiconductor processing tool, or both, Para. [0057] of CHAU; See also the predetermined criteria include determining whether the model can compensate for tool-to-tool variations and for same-tool performance drift, and whether the model can optimize for unavailable PMs, Para.
[0146]; [variations and performance drift are interpreted as changes to the tool’s performance, i.e., changes to the time of the processes/timed actions]; See also next sentence: the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time (e.g., less than 2%) and high manufacturing efficiency (e.g., greater than 97%), Para. [0146]; See also next paragraph: FIG. 8 shows a method 800 for validating the model in further detail … the total dataset is divided into one final test set and N other subsets, where N is an integer greater than one … each model is trained on all but one of the subsets to get N different estimates of the validation error rate, Para. [0148] of CHAU; See also simulating the plurality of processing scenarios includes data generated based on a configuration of the tool, wafer-flow types, run scenarios, recipe times, and wafer-less auto clean times obtained from the tool, Para. [0043] of CHAU; See also additional training data are generated using simulations to cover various processing scenarios used by the semiconductor manufacturers using the tools, Para. [0082] of CHAU; See also self-exploration process uses the discrete event simulator to automate efforts to find the best possible way to operate a system (e.g., to find the best path in which to move a wafer through a tool) at optimum throughput performance, Para. [0086] of CHAU; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. [0128]; See also the nested neural network based model is initially designed and trained offline using simulated data and then trained online using real tool data for predicting wafer routing path and scheduling, Para. [0172] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. 
[0164] of CHAU; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU); and identifying the simulation controller that results in the shortest simulated completion time (a user may have previously determined the globally optimum design with respect to shortest duration and wish to do so again for the second globally optimum design, Para. [0537] of BHATTACHARYA; [the performance criteria of BHATTACHARYA could be applied as performance criteria in the simulations of CHAU]; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU; [the fastest throughput is interpreted to correspond to a shortest simulated completion time]).

Regarding claim 5, CHAU as modified discloses the PUP core of Claim 1, wherein the instructions, when executed by the processor, cause the PUP core to perform as a hardware interface module transmitting the workpiece order from the PUP core to the manufacturing cell (the host computer 1802 is used by an operator to issue commands, provide recipe and so on to the tool 1600, Para. [0212] of CHAU; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. [0156] of CHAU; See also one or more of the elements 402-408 can be communicatively interconnected by one or more networks, Para.
[0121]), and initiating production of the workpiece order upon user command (the host computer 1802 is used by an operator to issue commands, provide recipe and so on to the tool 1600, Para. [0212] of CHAU).

Regarding claim 6, CHAU as modified discloses the PUP core of Claim 5, wherein the instructions, when executed by the processor, cause the PUP core to perform as a health monitor module performing the following steps: monitoring real-time performance of the manufacturing cell during processing of the workpiece order (monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para. [0224] of CHAU; See also method employs both offline learning using simulation and online learning using real-time tool data, Para. [0180] of CHAU); comparing the real-time performance of the manufacturing cell to predicted performance based on the simulation of the workpiece order in the software model (to determine if one model can cover all possible scenarios or a dedicated model will be needed, the model generator 408 can apply the selected machine learning method to generate a model based on data collected from multiple tool configurations and run scenarios to check if prediction accuracy can meet success criteria … success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, and whether a manufacturing efficiency (actual/theoretical cycle time) can be high (e.g., greater than 97%) for each recipe, Para.
[0128] of CHAU); and detecting at least one of (Examiner’s Note: Claim 6 is interpreted to only require detecting one of errors, failures or discrepancies): errors and/or failures of the manufacturing cell (each model is trained on all but one of the subsets to get N different estimates of the validation error rate … the model with the lowest validation error rate is deployed for use, Para. [0148] of CHAU); and discrepancies between the real-time performance of the manufacturing cell and the predicted performance based on the simulation (success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, and whether a manufacturing efficiency (actual/theoretical cycle time) can be high (e.g., greater than 97%) for each recipe, Para. [0128] of CHAU).

Regarding claim 7, CHAU as modified discloses the PUP core of Claim 6, wherein the instructions, when executed by the processor, cause the PUP core to perform as the health monitor module performing the following steps: detecting trends in one or more modeled parameters of the state machines based on discrepancies between the real-time performance and a simulated performance (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para.
[0224] of CHAU); and proposing changes to one or more of the modeled parameters of the software model based on a trend (monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para. [0224] of CHAU; See also the further trained model is configured to output a recommendation for a tool configuration in response to receiving recipe information as input, Para. [0052] of CHAU; [the output recommendation of a tool configuration is interpreted to correspond to a proposed tool parameter, which is a part of the trained CHAU model]), to reflect the real-time performance of the manufacturing cell (method employs both offline learning using simulation and online learning using real-time tool data, Para. [0180] of CHAU).

Regarding claim 9, CHAU as modified discloses the PUP core of Claim 1, further comprising: a user interface configured to perform at least one of the following ([Examiner’s Note: the user interface configured to perform “at least one of the following” is disclosed when a “user interface” configured to perform “one of the following [steps]” is disclosed]): facilitate user entry of at least one of simulation parameters, worker schedules, and availability dates and completion dates of the workpieces; display upcoming tasks to be performed by workers, including at least technicians or a robotic device; display alerts of potential health issues of the manufacturing cell; display proposed changes to one or more modelled parameters of the software model based on discrepancies between real-time performance and simulated performance of the manufacturing cell; and facilitate user adjustment of one or more of the modelled parameters (remote computer may include a user interface that enables entry or programming of
parameters and/or settings, which are then communicated to the system from the remote computer, Para. [0225] of CHAU; See also set of model results can be coded into a user interface to facilitate automatic scheduling parameter selection by the tool operator based on the tool's tool configuration and run scenario selected by the tool operator, Para. [0132] of CHAU; See also lot-based alarms, time-based alarms, Para. [0112] of CHAU; [alerts are interpreted to correspond to alarms]).

Regarding claim 10, CHAU discloses a planning system (model for scheduling [planning] to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU) for simulating the processing of workpieces by a manufacturing cell (FIG. 16 shows an example of a tool comprising a plurality of processing modules (e.g., electroplating cells), Para. [0074]; See also semiconductor manufacturers use one or more substrate processing tools to perform deposition, etching, cleaning, and/or other substrate treatments during fabrication of semiconductor wafers, Para. [0004] of CHAU; See also system for processing semiconductor substrates in a tool comprising a plurality of processing chambers configured to process the semiconductor substrates according to a recipe, Para. [0006] of CHAU; [processing chambers for processing semiconductor substrates are interpreted as manufacturing cells]), the planning system comprising: a production utilization planner (PUP) core (model for scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU) having a simulation and analysis module having a processor and a memory (system … comprises a processor and memory storing instructions for execution by the processor, Para.
[0006] of CHAU), the memory storing instructions that, when executed by the processor, cause the simulation and analysis module to (system for processing semiconductor substrates in a tool comprising a plurality of processing chambers configured to process the semiconductor substrates according to a recipe, comprises a processor and memory storing instructions for execution by the processor … instructions are configured to simulate, using the second data, a plurality of processing scenarios and scheduling parameters for the plurality of processing scenarios for processing the semiconductor substrates in the plurality of processing chambers according to the recipe, Para. [0006] of CHAU) perform as: a simulation manager configured to simulate the processing of workpieces arranged in a workpiece order, by performing the following steps (simulator 1404 simulates the tool configuration and simulates the processing of the wafers in the tool, Para. [0196]; [wafers are workpieces in this context]; See also regarding arranged in an order/schedule: “predict, using the further trained model, second processing times, second transfer times, and a second route for processing the additional semiconductor substrates in the tool; and a second time to schedule a next set of semiconductor substrates for processing in the tool”, Para. 
[0048] of CHAU; [Examiner’s Note: the simulation “by performing the following steps” is disclosed when “the following steps” are disclosed]): creating, at initiation of a simulation, an instance of a simulation controller (simulator 1404 may be implemented using a computing device such as a computer comprising one or more hardware processors (e.g., CPUs) and one or more memory devices storing one or more computer programs that simulate the operating and processing environment of a tool (e.g., the tool 1406) on the computer … the computer programs additionally comprise instructions for generating, training, and validating the neural networks 1410 and the scheduler level neural network 1412 of the model 1402 on the simulator 1404 as explained below with reference to FIGS. 15A and 15B, Para. [0191] of CHAU; [Examiner’s Note: Applicant’s claim limitation of “creating … an instance of a simulation controller” is interpreted as creating a software instance that controls simulation [and not a piece of hardware]]), and an instance of a software model of the manufacturing cell (a model for scheduler pacing is built using nested neural networks or other machine learning algorithms … the model is initially built, trained, and tested offsite using simulation, Para. [0181] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift … the onsite training also adjusts the model for any recipe changes and/or tool hardware changes, Para. [0181] of CHAU; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 
1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. [0156] of CHAU; [because the model is tool-specific, and the tool’s system software includes a controller, the creation/building of the model is interpreted to also create an instance of a simulation tool controller]) having state machines (the model 1204 includes a deep neural network that is trained using a reinforcement learning method as explained in further detail with reference to FIG. 13, reinforcement learning involves an agent, a set of states S and a set A of actions per state, and by performing an action ‘a’ from the set A, the agent transitions from state to state, Para. [0165] of CHAU; See also the reinforcement learning method used by the model 1204 can include Q-learning … Q-learning finds an optimal policy for any finite Markov decision process (FMDP), Para. [0166]; [a Markov decision process is interpreted as defining a set of states and transition between them [i.e., a state machine]]) configured to perform timed actions on the workpieces (the model is trained using data collected from preventive maintenance operations (PMs), recipe times, and wafer-less auto clean (WAC) times as inputs to the model, Para. [0081] of CHAU; [wafers and/or substrates are interpreted as the workpieces in this semiconductor/etching context]; See also wafer wait times and process time recipe, Para. [0080] of CHAU; See also one neural network is used per robot to predict the transfer times for each robot, Para. [0179]; See also wait time is an amount of time wafers have to wait after processing of the wafers is completed in a processing module until the processing of the wafers can begin in a next processing module, Para. [0188] of CHAU; See also the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time, Para. 
[0146] of CHAU; [Examiner’s Note: etching/processing/machining time, transfer/transit time, wait time, clean time and idle time are the same and/or similar to the states/processes/timed actions discussed in Applicant’s specification at Para. [0067], which recites “AGV 420 has the states of ‘idle,’ ‘charging,’ ‘transiting/unload,’ ‘waiting/pickup,’ ‘picking up,’ ‘transiting/loaded,’ ‘waiting/drop off,’ and ‘dropping off.’” and Para. [0093], which recites “[e]xamples of timed actions performed by workers 258 include machining a workpiece 452 via a robotic device 262 in the machining subcell 402, cleaning a workpiece 452”]), the state machines comprising workers (see workers being robotic devices as discussed below with relation to Paras. [0179] & [0181] of CHAU), workpiece stations (transfer time for a robot is an amount of time a robot takes to move wafers from point A to point B (e.g., from one processing module to another or from an airlock to a processing module, and from a loading station of the tool to an airlock), Para. [0188] of CHAU), and automated ground vehicles (transport controller 134 control robots 112 and 124, actuators and sensors related to the transportation of substrates to and from the substrate processing tool 100, Para. [0093] of CHAU; See also a front-end robot 1610 transports the substrates 1606 from the FOUP 1608 to a spindle 1612 and then to one of the pre-processing modules 1604, Para. [0207] of CHAU; See also after pre-processing, a backend robot 1614 transports the substrates 1606 from the pre-processing modules 1604 to one or more of the processing modules 1602 for electroplating, Para. 
[0208] of CHAU; [transport control robots are interpreted as corresponding to automated ground vehicles, where a vehicle is defined as a thing used for transporting goods on land or a means for transport]), the workers comprising technicians and/or robotic devices (the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift … the onsite training also adjusts the model for any recipe changes and/or tool hardware changes, Para. [0181] of CHAU; See also one neural network is used per robot to predict the transfer times for each robot, Para. [0179] of CHAU), and each state machine has a state during the timed actions (as discussed above, Paras. [0080], [0081], [0146], [0179] and [0188] of CHAU disclose preventive maintenance operations (PMs), clean times, idle times, wait times, process time recipe and recipe times, transfer times), and a state transition from state to state (agent transitions from state to state, Para. [0165] of CHAU); determining, via the simulation controller, a next timed action to be performed by the state machines (instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. 
[0170] of CHAU); incrementing the simulation to the next timed action (the instructions are configured to further train the model incrementally based on data generated during the processing of the semiconductor substrates and the additional semiconductor substrates in the semiconductor processing tool, Para. [0056] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift, Para. [0181] of CHAU; Regarding “next state”, see also instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. [0170] of CHAU); updating the software model and the simulation controller each time a state machine performs a timed action (the training of the model incrementally discussed above in relation to Paras. [0056] and [0181] of CHAU corresponds to updating the model; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. 
[0156] of CHAU; See also third phase includes online real-time and unsupervised learning … Continuous (i.e., ongoing) training is needed since process recipes and/or hardware can change … When such changes occur, the model needs to adapt to the changes, which can be accomplished by continuous training, Para. [0186] of CHAU); repeating the steps of determining the next timed action, incrementing the simulation, and updating the software model and the simulation controller, until all of the workpieces have been processed (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170]; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. [0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, and the determining, incrementing and updating have been mapped above]; Regarding “until all of the workpieces have been processed”, see total processing time for all the wafers, Para. [0128] of CHAU; [all of the wafers is interpreted to correspond to all of the workpieces, and total processing time for all of the wafers/workpieces is interpreted to correspond to an indication that all of the wafers have been processed]); outputting, for review by a user, a simulated completion time for the simulation (output predictions for program execution times for the processing modules (e.g., processing modules 1602 shown in FIG. 16) and predictions for the robot transfer times (e.g., for robots 1610 and 1614 shown in FIG. 16), Para. [0189] of CHAU; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. 
[0128] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. [0164] of CHAU) … during processing of the workpiece order (the nested neural network based model is initially designed and trained offline using simulated data and then trained online using real tool data for predicting wafer routing path and scheduling, Para. [0172] of CHAU; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU) enabling the user to evaluate resource utilization and determine adjustments to at least one of the following: a physical layout of workers in the manufacturing cell, a worker schedule, and/or a worker behavior, to thereby increase worker efficiency (smart scheduler ensures manufacturing efficiency can be greater than 97% … smart scheduler can optimize the scheduling parameter values by taking into account preventive maintenance that may have to be skipped or delayed to meet manufacturing deadlines, Para. [0088]; See also a model is developed … for predicting wafer [workpiece] routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput; [Examiner’s note: “worker” is interpreted as human or robotic based on applicant’s specification, at Para. [0034]]; See also monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para. [0224] of CHAU). Although CHAU teaches outputting state transitions during processing of the workpiece order (Para. 
[0089], [0169] & [0172] of CHAU) and logs (Paras. [0102], [0112] & [0164]), and an event (from the event logs of CHAU) may be considered a trigger for a state transition, it is arguable that CHAU does not appear to explicitly disclose, verbatim, recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order. VARNEY, however, is in the same field of software that interacts with configuration and state information (Para. [0166] of VARNEY) and teaches recording, for each state machine, a state transition log comprising an ordered sequence of timed actions performed by that state machine during simulated processing of the workpiece order (state changes may be logged as events … event streams can be reduced in the usual fashion to get global, real-time feedback on the changes taking place in the network, Para. [0236] of VARNEY; See also state changes at a local agent that are applied by Autognome (S0) are logged as events, Para. [1569] of VARNEY; See also state machine defines a list of states with commands that Autognome (S0) can issue to move the service from one state to another, Para. [0204] of VARNEY; [moving/changing from one state/event to another is interpreted as a transition]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the event log of CHAU to include recording a log of changes/transitions of states/state events as in VARNEY for the purpose of obtaining global, real-time feedback on the changes taking place (Para. [0236] of VARNEY). In addition, Para. [0127] of CHAU recommends further training the model based on data gathered from “other scenarios” and “other tools”, which would motivate a person having ordinary skill to consider other scenarios, such as VARNEY. 
In addition, BHATTACHARYA is in the same field of Design optimisation, verification or simulation (CPC class G06F30/20) and teaches recording, for each state machine, a duration of timed actions performed by that state machine (FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 39 of BHATTACHARYA shows four card elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para. [0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four card elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para. [0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA; See also a user may select one output of interest (duration), Para. [0374] of BHATTACHARYA). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the stochastic/random-based batch simulation method of CHAU as modified to record and/or output durations as in BHATTACHARYA for the purpose of allowing a user to evaluate simulated designs, and identify, based on user interactions with the interface, user preferences for designs, preferences for design parameters, optimality of designs, and the like (Para. [0339] of BHATTACHARYA). 
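For orientation, the simulation loop recited in claim 10 as mapped above (determining the next timed action, incrementing the simulation to it, updating the software model and simulation controller, and repeating until all workpieces are processed, then outputting a simulated completion time) can be sketched as a minimal discrete-event loop. This is a hypothetical illustration only; every class name, machine name, and duration below is invented and is not taken from CHAU, VARNEY, BHATTACHARYA, or the application.

```python
import heapq

class StateMachine:
    """A simulated worker, workpiece station, or AGV: holds a state and
    transitions from state to state as it performs timed actions."""
    def __init__(self, name):
        self.name = name
        self.state = "idle"

class SimulationController:
    """Discrete-event controller: advances simulated time one timed action
    at a time until all workpieces in the workpiece order are processed."""
    def __init__(self, machines, workpiece_order):
        self.machines = machines
        self.pending = list(workpiece_order)  # (workpiece, duration) pairs, in order
        self.clock = 0.0
        self.events = []                      # min-heap of (finish_time, seq, machine)
        self._seq = 0

    def _dispatch(self, machine):
        """Assign the next workpiece in the order to an idle state machine."""
        if self.pending:
            workpiece, duration = self.pending.pop(0)
            machine.state = f"processing {workpiece}"  # state transition
            heapq.heappush(self.events, (self.clock + duration, self._seq, machine))
            self._seq += 1

    def run(self):
        for machine in self.machines:          # initiation of the simulation
            self._dispatch(machine)
        while self.events:
            # Determine the next timed action and increment the simulation to it.
            finish_time, _, machine = heapq.heappop(self.events)
            self.clock = finish_time
            machine.state = "idle"             # state transition on completing the action
            self._dispatch(machine)            # update the controller; schedule next action
        return self.clock                      # simulated completion time, for user review

machines = [StateMachine("robot_1"), StateMachine("technician_1")]
controller = SimulationController(machines, [("wp1", 3.0), ("wp2", 5.0), ("wp3", 2.0)])
print(controller.run())  # prints 5.0
```

With two machines, one processes wp1 (3.0) and then wp3 (2.0) while the other processes wp2 (5.0), so the simulated completion time is 5.0; the heap pop is what "determines the next timed action" at each step.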
Regarding claim 11, CHAU discloses a method of simulating, via a production utilization planner (PUP) core (model for scheduling [planning] to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU), the processing of workpieces in a workpiece order (instructions are configured to simulate … a plurality of processing scenarios and scheduling parameters for the plurality of processing scenarios for processing the semiconductor substrates in the plurality of processing chambers according to the recipe, Para. [0006] of CHAU; See also simulator 1404 simulates the tool configuration and simulates the processing of the wafers [workpieces] in the tool, Para. [0196] of CHAU; [wafers are interpreted as workpieces in this context]; [Examiner’s Note: the simulation “by performing the following steps” is disclosed when “the following steps” are disclosed]; See also regarding arranged in an order/schedule: “predict, using the further trained model, second processing times, second transfer times, and a second route for processing the additional semiconductor substrates in the tool; and a second time to schedule a next set of semiconductor substrates for processing in the tool”, Para. [0048] of CHAU) by a manufacturing cell (semiconductor manufacturers use one or more substrate processing tools to perform deposition, etching, cleaning, and/or other substrate treatments during fabrication of semiconductor wafers, Para. [0004] of CHAU; See also system for processing semiconductor substrates in a tool comprising a plurality of processing chambers configured to process the semiconductor substrates according to a recipe, Para. 
[0006] of CHAU; [processing chambers for processing semiconductor substrates are interpreted as manufacturing cells]), the method comprising: creating, at initiation of a simulation, an instance of a simulation controller (simulator 1404 may be implemented using a computing device such as a computer … storing one or more computer programs that simulate the operating and processing environment of a tool (e.g., the tool 1406) on the computer … the computer programs additionally comprise instructions for generating, training, and validating the neural networks 1410 and the scheduler level neural network 1412 of the model 1402 on the simulator 1404 as explained below with reference to FIGS. 15A and 15B, Para. [0191] of CHAU; [Examiner’s Note: Applicant’s claim limitation of “creating … an instance of a simulation controller” is interpreted as creating a software instance that controls simulation [and not a piece of hardware]]) and an instance of a software model of the manufacturing cell (a model for scheduler pacing is built using nested neural networks or other machine learning algorithms … the model is initially built, trained, and tested offsite using simulation, Para. [0181] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift … the onsite training also adjusts the model for any recipe changes and/or tool hardware changes, Para. [0181] of CHAU; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. 
[0156] of CHAU; [because the model is tool-specific, and the tool’s system software includes a controller, the creation/building of the model is interpreted to also create an instance of a simulation tool controller]) having state machines (the model 1204 includes a deep neural network that is trained using a reinforcement learning method as explained in further detail with reference to FIG. 13, reinforcement learning involves an agent, a set of states S and a set A of actions per state, and by performing an action ‘a’ from the set A, the agent transitions from state to state, Para. [0165] of CHAU; See also the reinforcement learning method used by the model 1204 can include Q-learning … Q-learning finds an optimal policy for any finite Markov decision process (FMDP), Para. [0166]; [a Markov decision process is interpreted as defining a set of states and transition between them [i.e., a state machine]]) configured to perform timed actions on the workpieces (the model is trained using data collected from preventive maintenance operations (PMs), recipe times, and wafer-less auto clean (WAC) times as inputs to the model, Para. [0081] of CHAU; [wafers and/or substrates are interpreted as the workpieces in this semiconductor/etching context]; See also wafer wait times and process time recipe, Para. [0080] of CHAU; See also one neural network is used per robot to predict the transfer times for each robot, Para. [0179]; See also wait time is an amount of time wafers have to wait after processing of the wafers is completed in a processing module until the processing of the wafers can begin in a next processing module, Para. [0188] of CHAU; See also the predetermined criteria may include determining whether the model outputs ensure a small wafer idle time, Para. 
[0146] of CHAU; [Examiner’s Note: etching/processing/machining time, transfer/transit time, wait time, clean time and idle time are the same and/or similar to the states/processes/timed actions discussed in Applicant’s specification at Para. [0067], which recites “AGV 420 has the states of ‘idle,’ ‘charging,’ ‘transiting/unload,’ ‘waiting/pickup,’ ‘picking up,’ ‘transiting/loaded,’ ‘waiting/drop off,’ and ‘dropping off.’” and Para. [0093], which recites “[e]xamples of timed actions performed by workers 258 include machining a workpiece 452 via a robotic device 262 in the machining subcell 402, cleaning a workpiece 452”]), and each state machine has a state during the timed actions (as discussed above, Paras. [0080], [0081], [0146], [0179] and [0188] of CHAU disclose preventive maintenance operations (PMs), clean times, idle times, wait times, process time recipe and recipe times, transfer times), and a state transition from state to state (agent transitions from state to state, Para. [0165] of CHAU); determining a next timed action to be performed by the state machines (instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. 
[0170] of CHAU); incrementing the simulation to the next timed action (the instructions are configured to further train the model incrementally based on data generated during the processing of the semiconductor substrates and the additional semiconductor substrates in the semiconductor processing tool, Para. [0056] of CHAU; See also the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift, Para. [0181] of CHAU; Regarding “next state”, see also instructions are configured to, for each of the plurality of states, send to the model a current state of the plurality of states and multiple schedulable operations to progress to a next state of the plurality of states, receive from the model a best operation from the multiple schedulable operations selected by the model based on the current state to progress to the next state, and simulate execution of the best operation to simulate progression to the next state, Para. [0027] of CHAU; See also model 1204 uses the memorized best next operation for each state when that particular state occurs in the tool during actual wafer processing, Para. [0170] of CHAU); updating the software model and the simulation controller each time a state machine performs a timed action (the training of the model incrementally discussed above in relation to Paras. [0056] and [0181] of CHAU corresponds to updating the model; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. 
[0156] of CHAU; See also third phase includes online real-time and unsupervised learning … Continuous (i.e., ongoing) training is needed since process recipes and/or hardware can change … When such changes occur, the model needs to adapt to the changes, which can be accomplished by continuous training, Para. [0186] of CHAU); repeating the steps of determining the next timed action, incrementing the simulation, and updating the software model and the simulation controller, until all of the workpieces have been processed (models generated using machine learning can produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data, Para. [0149] of CHAU; See also discrete event simulator 1202 repeats steps 1304-1312 until the final state is reached, Para. [0170]; See also processing chambers in the substrate processing tools usually repeat the same task on multiple substrates, Para. [0005] of CHAU; [Examiner has cited to citations in CHAU teaching repeating of operations, and the determining, incrementing and updating have been mapped above]; Regarding “until all of the workpieces have been processed”, see total processing time for all the wafers, Para. [0128] of CHAU; [all of the wafers is interpreted to correspond to all of the workpieces, and total processing time for all of the wafers/workpieces is interpreted to correspond to an indication that all of the wafers have been processed]); and outputting, for review by a user, a simulated completion time for the simulation (output predictions for program execution times for the processing modules (e.g., processing modules 1602 shown in FIG. 16) and predictions for the robot transfer times (e.g., for robots 1610 and 1614 shown in FIG. 16), Para. [0189] of CHAU; See also success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, Para. 
[0128] of CHAU; See also discrete event simulator 1202 can simulate a wafer processing sequence that takes about an hour in less than a minute, Para. [0164] of CHAU) … during processing of the workpiece order (the nested neural network based model is initially designed and trained offline using simulated data and then trained online using real tool data for predicting wafer routing path and scheduling, Para. [0172] of CHAU; See also using the method, a model is developed and trained initially offline using simulation and then online using the actual tool for predicting wafer routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput, Para. [0089] of CHAU) enabling the user to evaluate resource utilization and determine adjustments to at least one of the following: a physical layout of workers in the manufacturing cell, a worker schedule, and/or a worker behavior, to thereby increase worker efficiency (smart scheduler ensures manufacturing efficiency can be greater than 97% … smart scheduler can optimize the scheduling parameter values by taking into account preventive maintenance that may have to be skipped or delayed to meet manufacturing deadlines, Para. [0088]; See also a model is developed … for predicting wafer [workpiece] routing path and scheduling to achieve highest tool/fleet utilization, shortest wait times, and fastest throughput; [Examiner’s note: “worker” is interpreted as human or robotic based on applicant’s specification, at Para. [0034]]; See also monitor current progress of fabrication operations, examine a history of past fabrication operations, examine trends or performance metrics from a plurality of fabrication operations, to change parameters of current processing, to set processing steps to follow a current processing, or to start a new process, Para. [0224] of CHAU). Although CHAU teaches outputting state transitions during processing of the workpiece order (Para. 
[0089], [0169] & [0172] of CHAU) and logs (Paras. [0102], [0112] & [0164]), and an event (from the event logs of CHAU) may be considered a trigger for a state transition, it is arguable that CHAU does not appear to explicitly disclose, verbatim, recording, for each state machine, a state transition log comprising an ordered sequence and duration of timed actions performed by that state machine during simulated processing of the workpiece order. VARNEY, however, is in the same field of software that interacts with configuration and state information (Para. [0166] of VARNEY) and teaches recording, for each state machine, a state transition log comprising an ordered sequence of timed actions performed by that state machine during simulated processing of the workpiece order (state changes may be logged as events … event streams can be reduced in the usual fashion to get global, real-time feedback on the changes taking place in the network, Para. [0236] of VARNEY; See also state changes at a local agent that are applied by Autognome (S0) are logged as events, Para. [1569] of VARNEY; See also state machine defines a list of states with commands that Autognome (S0) can issue to move the service from one state to another, Para. [0204] of VARNEY; [moving/changing from one state/event to another is interpreted as a transition]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the event log of CHAU to include recording a log of changes/transitions of states/state events as in VARNEY for the purpose of obtaining global, real-time feedback on the changes taking place (Para. [0236] of VARNEY). In addition, Para. [0127] of CHAU recommends further training the model based on data gathered from “other scenarios” and “other tools”, which would motivate a person having ordinary skill to consider other scenarios, such as VARNEY. 
In addition, BHATTACHARYA is in the same field of “Design optimisation, verification or simulation” (CPC class G06F30/20) and teaches recording, for each state machine, a duration of timed actions performed by that state machine (FIG. 39 of BHATTACHARYA shows outputting/displaying simulation durations (i.e., completion times) for each simulation of a plurality of simulations (each card may be associated with and show data related to a particular trial design of the set of simulated trial designs), Para. [0344] of BHATTACHARYA; See also FIG. 39 of BHATTACHARYA shows four card elements 3902, 3904, 3906, 3908 with each card showing seven parameter values of different trial designs, Para. [0347]; [Examiner’s Note: One of the parameter values for the simulated trial designs in each of the four card elements 3902, 3904, 3906, 3908 of FIG. 39 of BHATTACHARYA is “duration” (i.e., completion times)]; See also the initial card selection criteria may be a random criteria wherein random trial designs from the set of simulated trial designs are selected, Para. [0344] of BHATTACHARYA; See also different simulation engines 8512 for use with different design types … for example, trial type X, e.g., a cluster randomized design, may require a different type of engine than trial type Y, e.g., an adaptive randomization design, Para. [0492] of BHATTACHARYA; See also a user may select one output of interest (duration), Para. [0374] of BHATTACHARYA). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the stochastic/random-based batch simulation method of CHAU as modified to record and/or output durations as in BHATTACHARYA for the purpose of allowing a user to evaluate simulated designs, and identify, based on user interactions with the interface, user preferences for designs, preferences for design parameters, optimality of designs, and the like (Para. [0339] of BHATTACHARYA). 
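To make the disputed limitation concrete, the “state transition log comprising an ordered sequence and duration of timed actions” that the CHAU/VARNEY/BHATTACHARYA combination is mapped against can be sketched as follows. This is an illustrative reconstruction only: every name here (TransitionRecord, StateMachine, perform) is hypothetical and does not come from the claims or any cited reference.

```python
from dataclasses import dataclass, field

@dataclass
class TransitionRecord:
    action: str        # timed action that triggered the transition
    from_state: str
    to_state: str
    duration: float    # time spent performing the action

@dataclass
class StateMachine:
    name: str
    state: str = "idle"
    # The log is append-only, so it preserves the ordered sequence of actions.
    log: list[TransitionRecord] = field(default_factory=list)

    def perform(self, action: str, to_state: str, duration: float) -> None:
        # Record each transition in order, together with its duration.
        self.log.append(TransitionRecord(action, self.state, to_state, duration))
        self.state = to_state

# Hypothetical usage: one state machine per resource (e.g., a robot).
robot = StateMachine("robot_1")
robot.perform("pick_wafer", "loaded", duration=2.5)
robot.perform("transfer", "at_chamber", duration=4.0)
print([(r.action, r.duration) for r in robot.log])
```

Under this reading, CHAU’s event logs supply the events, VARNEY supplies logging state changes as an ordered event stream, and BHATTACHARYA supplies recording durations per simulation.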
Regarding claim 12, CHAU as modified discloses the method of Claim 11, wherein the state machines comprise at least one of a technician and a robotic device (the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift … the onsite training also adjusts the model for any recipe changes and/or tool hardware changes, Para. [0181] of CHAU; See also one neural network is used per robot to predict the transfer times for each robot, Para. [0179] of CHAU). Claim 13 has substantially similar limitations as recited in claim 2, except it depends from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU and BHATTACHARYA, as applied in claim 2. Claim 14 has substantially similar limitations as recited in claim 3, except it depends (indirectly) from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU and BHATTACHARYA, as applied in claim 3. Claim 15 has substantially similar limitations as recited in claim 4, except it depends (indirectly) from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU and BHATTACHARYA, as applied in claim 4. Claim 16 has substantially similar limitations as recited in claim 5, except it depends from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU, as applied in claim 5. Claim 17 has substantially similar limitations as recited in claim 6, except it depends (indirectly) from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU, as applied in claim 6. Claim 18 has substantially similar limitations as recited in claim 7, except it depends (indirectly) from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU, as applied in claim 7. 
Claim 20 has substantially similar limitations as recited in claim 9, except it depends from parent claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU, as applied in claim 9. Claims 8 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over CHAU et al. (U.S. Patent Application Publication No. 2022/0171373 A1) in view of VARNEY et al. (U.S. Patent Application Publication No. 2014/0173135 A1) and BHATTACHARYA (U.S. Patent Application Publication No. 2021/0241859 A1), and further in view of HARAMATI et al. (U.S. Patent Application Publication No. 2021/0157978 A1) and BAHRAMSHAHRY et al. (U.S. Patent Application Publication No. 2020/0026564 A1). Regarding claim 8, CHAU as modified discloses the PUP core of Claim 1, wherein the instructions, when executed by the processor, cause the PUP core to perform as a simulation and analysis module continuously evaluating the status of the simulation prior to simulating all of the workpieces in the workpiece order (the model is continually refined and trained further onsite on the actual tool by incrementally using data streams from the tool to make further adjustments to the model that reflect the tool-specific and recipe-specific robot transfer times and that compensate for any process drift, Para. [0181]; See also the training of the model incrementally discussed above in relation to Paras. [0056] and [0181] of CHAU corresponds to updating the model; See also discrete event simulator 1202 communicates with a tool's system software (e.g., the controller 138 of a tool 100 shown in FIG. 1 that executes the tool's system software) and the reinforcement learning model 1204 (e.g., the model generated by the system 400 shown in FIG. 4), Para. 
[0156]; See also third phase includes online real-time and unsupervised learning … Continuous (i.e., ongoing) training is needed since process recipes and/or hardware can change … when such changes occur, the model needs to adapt to the changes, which can be accomplished by continuous training, Para. [0186]; See also model generator 408 can apply the selected machine learning method to generate a model based on data collected from multiple tool configurations and run scenarios to check if prediction accuracy can meet success criteria … the success criteria can also include whether wafer idle times are less than a small percentage (e.g., 2%) of total processing time for all the wafers, and whether a manufacturing efficiency (actual/theoretical cycle time) can be high (e.g., greater than 97%) for each recipe, Para. [0128]; [the percentage of wafer idle times of total processing time and actual cycle time divided by theoretical cycle time are interpreted as statistics]), by performing the following after each update of the software model (Examiner’s Note: the continuously evaluating “by performing the following” is disclosed when “the following [steps]” are disclosed). CHAU as modified appears to fail to explicitly disclose adding the duration of the most recently completed simulated timed action to a running total of the duration of the simulated timed actions performed up to the most recent update of the software model. HARAMATI, however, is in the same field of optimizing workflows/schedules (Para. [0002] of HARAMATI) and teaches adding the duration of the most recently completed timed action to a running total of the duration of the timed actions performed up to the most recent update of the software model (project time tracking may include one or more of measuring, storing, managing, analyzing, prioritizing, recording, allocating, and organizing time, or any other mechanism for capturing of time, Para. 
[0679] of HARAMATI; See also next sentence: the time may be measured on an individual basis in order to capture the effort, costs, or workload of a particular individual or group of individuals, Para. [0679] of HARAMATI; See also next sentence: the time may also be measured and associated with an individual project to capture the effort, costs, or workload required by a particular project, Para. [0679] of HARAMATI; See also next sentence: additionally, the time may be measured on an individual basis and then aggregated in order to capture the effort, costs, or workload of a group of individuals as required by a particular project or projects, as disclosed herein, Para. [0679] of HARAMATI). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the simulation timed actions/states of CHAU (as modified) to use the time tracking features of HARAMATI for the purpose of capturing the effort, costs, or workload required by a particular project, person and/or group (See Para. [0679] of HARAMATI). Also, CHAU as modified appears to fail to explicitly disclose determining, at that point in the simulation, a statistically-modeled best-case interim time, calculated as a function of a statistically-modeled best-case completion time and the sum of the duration of every timed action required to complete the workpiece order; calculating the difference between the statistically-modeled best-case interim time to the running total of the duration of the simulated timed actions; and terminating the simulation of the workpiece order if the difference is greater than 50 percent of the statistically-modeled best-case interim time. BAHRAMSHAHRY, however, is in the same field of optimizing workload scheduling (Paras. 
[0004]-[0006] of BAHRAMSHAHRY) and teaches determining, at that point in the simulation, a statistically-modeled best-case interim time, calculated as a function of a statistically-modeled best-case completion time and the sum of the duration of every timed action required to complete the workpiece order (the planner 127 of the scheduler may be utilized to allocate resource for the most efficient utilization or for best performance (e.g., the fastest execution), Para. [0114]; See also while passing tests may take a second each, and thus 1000 seconds total (approximately 16 minutes total), a workload having 1000 failing tests, each of which must wait 30 seconds, results in a total processing time of approximately 8 hours, which will incur a much larger dollar cost or compute resource consumption cost than is anticipated for such a workload, Para. [0627]); calculating the difference between the statistically-modeled best-case interim time to the running total of the duration (the planner 127 of the scheduler may be utilized to allocate resource for the most efficient utilization or for best performance (e.g., the fastest execution), Para. [0114]; See also while passing tests may take a second each, and thus 1000 seconds total (approximately 16 minutes total), a workload having 1000 failing tests, each of which must wait 30 seconds, results in a total processing time of approximately 8 hours, which will incur a much larger dollar cost or compute resource consumption cost than is anticipated for such a workload, Para. [0627]) of the simulated timed actions (iterating through the produce, calculate, select, and plan operations to yield a scheduling plan based on SLTs for the simulated workload tasks and the simulated data representing the additional computing hardware. Such a utility may be utilized to evaluate “what if” scenarios, Para. 
[0130]; See also next sentence: for instance, to evaluate whether additional computing hardware will sufficiently meet anticipated demand or sufficiently meet actual historical demand, and because the scheduler simply pulls data from the local cache, it is agnostic to the fact that the data in local cache is being provided by a simulator rather than being actual production data, Para. [0130]); and terminating the simulation of the workpiece order if the difference is greater than 50 percent of the statistically-modeled best-case interim time (a watchdog ROI engine 3195, which constantly evaluates all running jobs and if a running job is evaluated by the watchdog and determined to have an ROI below a threshold then the watchdog will issue the termination instructions to terminate the executing and currently running workload, Para. [0623]; See also the scheduling service may be configured to not pick up work for execution on the basis of cost, Para. [0627]; See also next sentence: for example, a workload with 1000's of tests may utilize timeouts, such as 30 seconds, but each test runs very quickly when passing, Para. [0627]; See also next sentence: however, if a bad code submission is received or a bad change list is being processed, then many of the tests or even every test may fail, thus causing every test to wait for its timeout which is much more CPU intensive and costly in terms of time and dollars as the workload must wait for every failing test to reach its timeout, Para. [0627]; See also next sentence: thus, while passing tests may take a second each, and thus 1000 seconds total (approximately 16 minutes total), a workload having 1000 failing tests, each of which must wait 30 seconds, results in a total processing time of approximately 8 hours, which will incur a much larger dollar cost or compute resource consumption cost than is anticipated for such a workload, Para. 
[0627]; See also next sentence: therefore, the watchdog ROI engine 3195 which is analyzing currently executing workloads may perform its ROI analysis on the workload having the failing tests and affirmatively kill or terminate the execution to save cost and compute resource, Para. [0627]; See also next sentence: thus, where a catastrophic failure is identified by the watchdog ROI engine 3195, such a finding may dictate termination of the workload rather than permitting the workload to execute, Para. [0627]; [8 hours is “greater than 50 percent of” 16 minutes, and 30 seconds is “greater than 50 percent of” a (one) second]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the simulation method of CHAU (as modified) to use the features of the simulation method of BAHRAMSHAHRY for the purpose of efficiently using computing resources (See BAHRAMSHAHRY at Para. [0627]: by cutting the losses short for such a workload it is known already that there is a catastrophic failure and spending dollars to complete the remaining failing tests will not likely yield additional informational data points for the cost incurred, thus negating any potential ROI for the workload). Claim 19 has substantially similar limitations as recited in claim 8, except it depends from parent base claim 11; therefore, it is rejected under 35 U.S.C. § 103 using CHAU, HARAMATI and BAHRAMSHAHRY, as applied in claim 8. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: LINDER et al. (US 20180107198 A1) published April 19, 2018. See, e.g., Para. [0130] teaches “Step 904 involves receiving at least one modification to the process via the interactive template”, Para. [0131] teaches “Step 906 involves gathering data related to the performance of at least one step of the process” and Para. 
[0132] teaches “Step 908 involves converting the gathered data related to the performance of the at least one step of the process into a format providing manufacturing results data … the data gathered in step 906 may be processed into a variety of data points useful for analyzing the performance of the process … for example, the gathered data may be converted into various derivatives, including but not limited to simple averages (i.e., a means(s)), weighted averages, standard deviations, etc.”; See also Para. [0024] teaches “testing the process includes simulating the execution of the process” and Para. [0025] teaches “the server is further configured to identify errors in the created process prior to execution or the runtime configuration”. SAWYER et al. (US 20190278878 A1) published Sept. 12, 2019. See, e.g., Para. [0081] teaches “a simulation may determine runtimes and lead times, parameters depending on either runtimes or lead times, and other factors such as feasibility or availability of manufacturing devices, with the assumption that multiple parts may be simultaneously manufactured, either in entirety or for at least a stage of each part's respective manufacturing procedure; simulation may, for instance, reduce the runtime per part where multiple parts may be manufactured simultaneously.” GRISWOLD et al.; Applicant: The Boeing Company; (US 20170235853 A1) published Aug. 17, 2017. See e.g., Para. [0030] teaches “computer processor 104, generates a simulated flow model 117 for the manufacturing facility based on the initial facility layout concept 114 (and/or the modified facility layout concept 115)” and Para. [0037] teaches “the simulated flow model 117 is satisfactory if the duration of the simulated flow is below a threshold amount of time”. FAMA et al. (US 20190130329 A1) published May 2, 2019. See e.g., Para. [0024] teaches “example list of states includes available, busy, after-call work, and unavailable” and Para. 
[0098] teaches “to achieve acceptable performance, sampling may be performed, either randomly ordering the Activity Permutations or ordering the Activity Permutations via which Activities/Queues have the shortest service goal”. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P HOCKER whose telephone number is (571)272-0501. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached on (571)272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JOHN P. HOCKER Examiner Art Unit 2189 /JOHN P HOCKER/Examiner, Art Unit 2189 /REHANA PERVEEN/Supervisory Patent Examiner, Art Unit 2189
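For orientation, the claim 8/19 limitation mapped to HARAMATI and BAHRAMSHAHRY describes an early-termination check on a running simulation: maintain a running total of simulated durations, pro-rate a statistically-modeled best-case completion time to the current point in the order, and terminate when the shortfall exceeds 50 percent of that interim time. A minimal sketch of one plausible reading follows; the function name, the pro-rating formula, and the sample numbers are all assumptions, and only the 50-percent threshold comes from the claim language quoted in the rejection.

```python
def simulate_with_watchdog(planned, actual, best_case):
    """Return the 1-based step at which the watchdog terminates, or None.

    planned   -- modeled duration of each timed action required by the order
    actual    -- duration of each action as observed in the simulation
    best_case -- statistically-modeled best-case completion time for the order
    """
    total_planned = sum(planned)
    running = 0.0        # running total of simulated durations (per claim 8)
    done_planned = 0.0   # planned work completed so far
    for step, (plan, act) in enumerate(zip(planned, actual), start=1):
        running += act
        done_planned += plan
        # Pro-rate the best-case completion time to this point in the order.
        interim = best_case * (done_planned / total_planned)
        # Terminate if the shortfall exceeds 50% of the best-case interim time.
        if (running - interim) > 0.5 * interim:
            return step
    return None

planned = [4.0, 6.0, 10.0, 20.0]    # hypothetical modeled durations
slow_run = [5.0, 9.0, 20.0, 40.0]   # simulated durations lagging the plan
print(simulate_with_watchdog(planned, slow_run, best_case=30.0))
```

This mirrors BAHRAMSHAHRY’s watchdog ROI engine in structure (evaluate a running job against a threshold, then kill it), but the specific pro-rated interim-time calculation is the claim’s, not the reference’s.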

Prosecution Timeline

Dec 08, 2021: Application Filed
Feb 23, 2025: Non-Final Rejection (§103, §112)
May 22, 2025: Applicant Interview (Telephonic)
May 22, 2025: Examiner Interview Summary
May 23, 2025: Response Filed
Aug 23, 2025: Final Rejection (§103, §112)
Oct 26, 2025: Response after Non-Final Action
Nov 09, 2025: Request for Continued Examination
Nov 16, 2025: Response after Non-Final Action
Jan 10, 2026: Non-Final Rejection (§103, §112)
Apr 09, 2026: Applicant Interview (Telephonic)
Apr 10, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601250: MONITORING A WELL BARRIER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530512: CIRCUIT SIMULATION BASED ON AN RTL COMPONENT IN COMBINATION WITH BEHAVIORAL COMPONENTS (granted Jan 20, 2026; 2y 5m to grant)
Patent 12505124: METHOD AND SYSTEM FOR CREATING A RULE FOR A BUSINESS FLOW DIAGRAM (granted Dec 23, 2025; 2y 5m to grant)
Patent 12487797: SMART PROGRAMMING METHOD FOR INTEGRATED CNC-ROBOT (granted Dec 02, 2025; 2y 5m to grant)
Patent 8515929: Online Propagation of Data Updates (granted Aug 20, 2013; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 87% (+29.7%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 146 resolved cases by this examiner. Grant probability derived from career allow rate.
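These projections are consistent with simple arithmetic on the examiner statistics reported earlier (84 granted of 146 resolved; +29.7 percentage points with interview). A rough reconstruction, assuming the displayed figures are just the rounded career allow rate plus the interview lift (the tool's actual methodology is not disclosed):

```python
granted, resolved = 84, 146        # examiner's career record
interview_lift = 0.297             # +29.7 percentage points with interview

base = granted / resolved          # career allow rate
with_interview = base + interview_lift

print(round(base * 100), round(with_interview * 100))  # 58 87
```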
