Prosecution Insights
Last updated: April 19, 2026
Application No. 17/868,262

EVENT PROCESSING AND PREDICTION UPDATING AT A DIGITAL TWIN

Final Rejection: §101, §102, §103
Filed
Jul 19, 2022
Examiner
WHITE, JAY MICHAEL
Art Unit
2188
Tech Center
2100 — Computer Architecture & Software
Assignee
Accenture Global Solutions Limited
OA Round
2 (Final)
Grant Probability: 12% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants only 12% of cases.

Career Allow Rate: 12% (1 granted / 8 resolved; -42.5% vs TC avg)
Interview Lift: +100.0% across resolved cases with interview (strong)
Typical timeline: 3y 3m avg prosecution; 34 currently pending
Career history: 42 total applications across all art units
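The allow-rate and interview-lift figures above are simple ratios over an examiner's resolved cases. A minimal sketch of how such metrics might be computed, using an invented list of resolved-case records (the field names and the sample docket are assumptions, not this examiner's actual record):

```python
# Illustrative sketch: career allow rate and interview lift over resolved
# cases. Record fields ("granted", "interview") are hypothetical.
def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Relative change in allow rate for cases that had an interview."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    base = allow_rate(without_iv)
    return (allow_rate(with_iv) - base) / base if base else float("inf")

# Toy docket: 4 interviewed cases (2 granted), 4 non-interviewed (1 granted).
docket = [
    {"granted": True, "interview": True}, {"granted": True, "interview": True},
    {"granted": False, "interview": True}, {"granted": False, "interview": True},
    {"granted": True, "interview": False}, {"granted": False, "interview": False},
    {"granted": False, "interview": False}, {"granted": False, "interview": False},
]
print(f"allow rate: {allow_rate(docket):.1%}")           # allow rate: 37.5%
print(f"interview lift: {interview_lift(docket):+.1%}")  # interview lift: +100.0%
```

A +100% lift simply means interviewed cases were allowed at twice the rate of non-interviewed ones.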

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 30.3% (-9.7% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 8 resolved cases
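Each delta above is the examiner's per-statute rate minus the Tech Center average, and every displayed delta is consistent with a TC average near 40% (e.g., 32.6% - 40% = -7.4%). A small sketch reproducing the displayed figures; the 40% TC average is inferred from the deltas, not stated in the data:

```python
# Illustrative sketch: per-statute rates vs. an inferred TC average.
TC_AVG = 0.40  # inferred: each displayed delta equals rate - 40%

statute_rates = {"§101": 0.326, "§103": 0.303, "§102": 0.099, "§112": 0.242}

for statute, rate in statute_rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```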

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

This Final Office Action is responsive to the claims filed on January 12, 2026. Claims 1-2, 4-14, and 16-20 are under examination. Claims 1-20 are rejected under 35 USC 101. Claims 1-2, 4-14, and 16-20 are rejected under 35 USC 102 as anticipated by Thiruvenkatanathan. Claims 9 and 18 are rejected under 35 USC 103 as obvious over Thiruvenkatanathan and Straat.

Response to Amendments and Arguments

35 USC 101: The arguments presented for 35 USC 101 have been considered but are not persuasive. The arguments will be addressed in the order presented in the response.

The claims are allegedly directed to event processing and prediction updating at a digital twin, which are completely technical in nature and cannot be performed in the mind or with the aid of pen and paper: The independent claims do not update a digital twin. The final steps merely update a prediction associated with a digital twin. For example, someone could determine that the power went out and that the digital twin is unlikely to be functioning. The claims and their predictions are tangential to anything having to do with the digital twin. The claims boil down to receiving information about a situation and changing how one operates based on it. This can be done entirely in the mind or with the aid of pen and paper. For example, when a stage director models the blocking for a scene, they often use scale models drawn on paper to determine where an actor will be. They may choose to modify the layout of the furniture (model) based on using their eyes (sensors) to determine that an actor will be blocked. It could also be as simple as determining your likely ETA in a rainstorm, which navigation software has done for twenty years. The independent claims are recited at such a high level, and in a manner so divorced from the technology of digital twins, that they cannot even be considered technological features without further clarification.
Specification paragraphs allegedly illustrate technological features of the claims: The Applicant states in the response that data is transmitted to a digital twin. The independent claims do not state this. The Applicant discusses a knowledge graph, but it is not recited in the claims. Further still, nothing describes the actual structure of a data structure; the claims merely recite structure represented in the data, which does not affect how the data is processed or stored. This has nothing to do with the technical features of the system. It is just received data that is processed as a computer would process any other data. This is not persuasive.

Determining that events are associated allegedly cannot be done mentally or with the aid of pen and paper: People have been associating events since the existence of events. For example, people see cumulonimbus clouds (one event) and predict that it will storm (a second event). This is not persuasive.

Temporal control allegedly cannot be performed in the mind or with the aid of pen and paper: While it can be difficult for the less mentally disciplined to stop processing something of interest if it is still up in the air and worth considering later, one is certainly able to write it down in order to remember it later.

Citing advantages of the system in the specification that the claims do not reflect: The Applicant touts alleged advantages of a detailed system that is present in the specification and then states that the generic language of the claims covers them. The claims are sufficiently broad to cover those and many other systems. The inquiry into whether the alleged invention provides advantages is relevant only to the extent the claims recite the features in the specification.
As it stands, divorced of generic computing elements performing generic operations that were performed manually prior to the computer age, there is nothing in the claims that cannot be performed mentally or with the aid of pen and paper.

"Involves computational adjustment which cannot be performed in the mind or with aid of pen and paper": The Applicant has used this phrase a number of times to describe actions people have taken mentally, or with the aid of pen and paper, since before the advent of the computer. Even the advantage of conserving energy and mental resources for a problem one does not have to worry about until it is relevant is analogous. The mental processes that predate the claimed generic computing elements, such as a processor, memory, and a digital twin, provide the same advantages as the alleged invention. These are longstanding practices and represent well-understood, routine, and conventional activity.

The claims allegedly integrate the abstract idea into a practical application: The Applicant states that the features of the independent claims address problems associated with the accuracy of simulations. However, the Applicant fails to recite that any of the information is used in, or in conjunction with, a simulation. Making a prediction that is in some way associated with a digital twin is not a technological advance. As previously asserted, I could see that there is a scheduled power outage, and I could decide not to fire up the digital twin during that time to avoid inconsistencies. This has nothing to do with the efficiency of computing or with the operation of a digital twin. Filtering events is something that has been performed mentally as long as events have existed. I choose to go to a wedding instead of the Super Bowl on a Sunday because I am a family-first person.
Receiving, from one or more sensors and at an interface associated with a digital twin, a first input associated with a first event could merely be seeing that there is a cumulonimbus cloud, predicting that there will be inclement weather, and making sure that a backup power generator is operating. Of course, this would have nothing to do with a technological advance in the computing of quantities or in the technology of digital twins.

The guidelines allegedly support the Applicant's assertion that the claim conserves power and processing resources: There is no doubt that eliminating options from consideration reduces the amount of mental energy and mental resources applied to solving a problem mentally or with pen and paper. Students, from kindergartners to law school graduates, have benefited from the process of elimination. "Answers B and C are not even relevant, so I can eliminate them, which I will do before determining whether the correct answer is A or D." This is so conventional on its face that it should be considered long understood and eliminated from eligibility at judicial step 1. Also, that the alleged novelty resides entirely in the abstract idea means that the claims cannot hold up to scrutiny at the USPTO's Step 2A, Prong 2, which requires ACTUAL CLAIMED integration into a practical application. Step 2B is a non-starter because a longstanding practice cannot confer an inventive concept. Also, the specific features that confer the alleged benefits according to the Applicant's specification are not recited in the claims.

The claim allegedly improves technology or a technical field: As demonstrated, the claim is so broad that it does not integrate anything into anything. It uses generic computing elements to determine something that is in some way associated with a digital twin. It could be in the same building. In the same village. It could be on the same planet.
The methods stated in the independent claims represent longstanding practices that existed before the advent of computers, and the computers are invoked only to conduct operations that were once conducted mentally or with pen and paper. The claims are not integrated into a practical solution that improves computing, and they are not even remotely integrated in a way that improves digital twins. The claims do what we all do with common sense. This argument is not persuasive.

Conclusion: The Applicant has failed to demonstrate that the additional limitations of the claims integrate the abstract ideas into a practical application and/or that the additional limitations combine with the other elements of the claims to provide significantly more than the abstract idea, such that the combination would confer an inventive concept. Accordingly, the 35 USC 101 rejections are maintained.

35 USC 102: The Applicant's arguments and amendments have been considered, but they are not persuasive. The claim language is sufficiently broad to encompass several different meanings/scenarios that are taught in the Thiruvenkatanathan (THIR) reference. The Applicant's arguments will be addressed in the order presented in the response.

THIR allegedly fails to teach "refrain from processing the input for a period of time": The Applicant first asserts that the THIR reference fails to teach "refrain from processing the input for a period of time." However, when a computer is not processing, that is a deliberate action based on programming in the computer. Any time a computer is not processing, it is refraining from processing. If the computer does anything other than process for a time, it is refraining from doing so. That is, the cause does not matter. The lack of processing of this specific item is refraining from processing.

The Applicant's amendments allegedly overcome THIR: The Applicant amended to incorporate features of former and now-canceled claims 3 and 15.
The rejection presented with respect to those claims is maintained for the same reasons, which the Applicant did not dispute. The Applicant further amended claim 1 to incorporate features of claim 4, amended to incorporate the refraining step into claim 7, and further specified that the data structure indicates relationships between different types of events in the hierarchy of event types. The THIR reference teaches that the system refrains from processing in a number of places. It refrains in time series data between samples. It refrains while awaiting user input. It refrains when it comes to retraining the models. The claim language is so broad that it can encompass any of these scenarios. Further, the use of the language "indicating," "based on," and "associated with" leaves a great deal of interpretive leeway under the broadest reasonable interpretation. An indication of time could be historic data with time stamps, as is taught in THIR. "Data structure" can be interpreted broadly to mean any structure that contains data, which in THIR can include the databases that include the knowledge graphs and the time series data, which would indicate the time of delay for any of the sampling rate, the periodicity of model updates, or the time when a user selects an action relative to the time stamp of the data accessed. These are mere examples. As indicated in the anticipation rejection, based on the broad teachings of the THIR reference, the claims can be associated with and taught by any number of embodiments and components of the THIR reference.

Conclusion: For at least these reasons, the rejections are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Subject Matter Eligibility

Claims 1-20 are rejected under 35 U.S.C.
101 for being directed to a judicial exception without significantly more.

Step 1: Claims 1-6 are processes. Claims 7-20 are machines.

Independent Claims

Step 2A, Prong 1: Independent claims 1, 7, and 13 recite a mental process, an abstract idea.

Claim 1

Claim 1 recites: […] determining that the first event is associated with one or more probable second events, wherein the determining that the first event is associated with one or more probable second events comprises: […] determining, based on the hierarchy of event types, that one or more probable second events; (Mental Evaluation, Mental Process - Determining that a first event is associated with a probable second event based on a hierarchy of event types is practically performable in the mind or with the aid of pen and paper. A person can anticipate that, if they see rain, the ground will be wet.)

refraining from processing the first input for a period of time, wherein the period of time is indicated by the data structure that indicates relationships between different types of events in the hierarchy of reports; and (Mental Evaluation, Mental Process – Refraining from processing/considering data at a particular time is practically performable in the mind or with the aid of pen and paper. For example, a person can decide to hold off on considering what to wear in the rain if the rain might stop.)

updating a prediction associated with the [model] using the first input based on expiry of the period of time, or updating a prediction associated with the [model] using second input associated with the one or more probable second events based on receiving the second input. (Mental Evaluation, Mental Process – Modifying a prediction associated with a model based on elapsed time or a likely event is practically performable in the mind or with the aid of pen and paper.
For example, a person can decide someone is likely to wear a raincoat if it rains long enough, which would be associated with a model that is configured to indicate that the coat is being worn.)

Claim 1 recites mental evaluations, mental processes that comprise an abstract idea. Claim 1 recites an abstract idea.

Claim 7

Claim 7 recites: […] determine that the first event is associated with probable subsequent events, wherein determining that the new event is associated with the probable subsequent events comprises […] determining, based on the hierarchy of event types, the probable subsequent events; (Mental Evaluation, Mental Process – Determining that an event is likely to lead to another event based on data is practically performable in the mind or with the aid of pen and paper. This is akin to a recipe in which steps must be conducted in order and at their own times.)

refrain from processing the input for a period of time, wherein the period of time is indicated by the data structure that indicates relationships between different types of events in the hierarchy of event types; (Mental Evaluation, Mental Process – Refraining from considering something for a period of time in response to data saying that the consideration need not be done for a time is practically performable in the mind or with the aid of pen and paper. For example, if following a recipe, one waits between steps a specific period of time associated with those particular steps/events.)

determine that the event triggers an update for a prediction associated with the [model of the environment/scenario]; (Mental Evaluation, Mental Process – Determining that an event affects a prediction associated with a model is practically performable in the mind or with the aid of pen and paper.)
select a model, from a plurality of possible models, based on a context associated with a current state of the [environment model] or a context associated with the event; and (Mental Evaluation, Mental Process – Selecting a model to use for evaluation based on available information is practically performable in the mind or with the aid of pen and paper. This could be akin to selecting whether to beat eggs manually or use a mixer.)

update the prediction associated with the [environment model] based on the selected model and the input. (Mental Evaluation, Mental Process – Modifying a prediction based on a selection of a model and other data is practically performable in the mind or with the aid of pen and paper. For example, if one chooses to mix manually, the time it is expected to take will increase, and the time at which the food will be done is extended.)

Claim 7 recites mental processes and, hence, under MPEP 2106.04(a)(2)(III), an abstract idea. Claim 7 recites an abstract idea.

Claim 13

Claim 13 recites: […] determine that the first event is associated with a probable second event, wherein determining that the first event is associated with the probable second event comprises: […] determining, based on the hierarchy of event types, the probable second event; (Mental Evaluation, Mental Process - Determining that a first event is associated with a probable second event is practically performable in the mind or with the aid of pen and paper. A person can anticipate that, if they see rain, the ground will be wet.)

refrain from processing the first input for a period of time, wherein the period of time is indicated by the data structure that indicates relationships between different types of events in the hierarchy of event types; (Mental Evaluation, Mental Process – Refraining from processing/considering data at a particular time is practically performable in the mind or with the aid of pen and paper.
For example, a person can decide to hold off on considering what to wear in the rain if the rain might stop.)

[…] select a model, from a plurality of possible models, based on a context associated with a current state of the [environmental model] or a context associated with the probable second event; and (Mental Evaluation, Mental Process – Selecting a model to use for evaluation based on available information is practically performable in the mind or with the aid of pen and paper.)

update a prediction associated with the [environmental model] based on the selected model and the second input. (Mental Evaluation, Mental Process – Modifying a prediction based on a selection of a model and other data is practically performable in the mind or with the aid of pen and paper. This is like setting up blocking for a play. For example, there could be a scene where someone is following a recipe and, upon realizing that the person would be blocked by someone else at a particular position based on the recipe, the director could decide that the person should be moved.)

Claim 13 recites mental processes and, hence, under MPEP 2106.04(a)(2)(III), an abstract idea. Claim 13 recites an abstract idea.

Step 2A, Prong 2

The claims fail to recite additional limitations that integrate the abstract idea into a practical application.

Claim 1

Claim 1 recites the following additional limitations: receiving, […] a first input associated with a first event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types. The receiving steps are mere data gathering, which is insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: "e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent." "iv.
Obtaining information about transactions using the Internet to verify credit card transactions" "v. Consulting and updating an activity log" "vi. Determining the level of a biomarker in blood" "iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display." The receiving steps are insignificant extra-solution activity and, under MPEP 2106.05(g), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.

from one or more sensors and at an interface associated with a digital twin, […] digital twin […] […] a storage […] […] a data structure […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.

Claim 7

Claim 7 recites the following additional limitations: […] receive […] input associated with a new event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types. The receive step is mere data gathering, which is insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: "e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent." "iv. Obtaining information about transactions using the Internet to verify credit card transactions" "v. Consulting and updating an activity log" "vi. Determining the level of a biomarker in blood" "iii.
Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display." The receive step is insignificant extra-solution activity and, under MPEP 2106.05(g), fails to integrate the abstract idea into a practical application at Step 2A, Prong 2.

A device, comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to: […] from one or more sensors and at an interface associated with a digital twin, […] digital twin […] […] a storage […] […] a data structure […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.

Claim 13

Claim 13 recites the following additional limitations: […] receive […] a first input associated with a first event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types; […] receive a second input associated with the probable second event; The receive steps are mere data gathering, which is insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: "e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent." "iv. Obtaining information about transactions using the Internet to verify credit card transactions" "v. Consulting and updating an activity log" "vi. Determining the level of a biomarker in blood" "iii.
Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display." The receive steps are insignificant extra-solution activity and, under MPEP 2106.05(g), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.

A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: […], from one or more sensors and at an interface associated with a digital twin, […] […] a storage […] […] a data structure […] […] digital twin […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.

Claims 1, 7, and 13 fail to recite any additional limitations that integrate the abstract idea into a practical application. Claims 1, 7, and 13 are directed to the abstract idea.

Step 2B

The claims fail to recite additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea, such that they would confer an inventive concept.

Claim 1

Claim 1 recites the following additional limitations: receiving, […] a first input associated with a first event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types. The receiving steps are well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: "i. Receiving or transmitting data over a network" "iii. Electronic recordkeeping" "iv. Storing and retrieving information in memory" "v.
Electronically scanning or extracting data from a physical document" "i. Determining the level of a biomarker in blood by any means" The receiving steps are WURC and, as previously demonstrated, insignificant extra-solution activity, and, under MPEP 2106.05(d) and 2106.05(g), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

from one or more sensors and at an interface associated with a digital twin, […] digital twin […] […] a storage […] […] a data structure […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

Claim 7

Claim 7 recites the following additional limitations: […] receive […] input associated with a new event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types. The receive step is well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: "i. Receiving or transmitting data over a network" "iii. Electronic recordkeeping" "iv. Storing and retrieving information in memory" "v. Electronically scanning or extracting data from a physical document" "i.
Determining the level of a biomarker in blood by any means" The receive step is WURC and, as previously demonstrated, insignificant extra-solution activity, and, under MPEP 2106.05(d) and 2106.05(g), fails to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

A device, comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to: […] from one or more sensors and at an interface associated with a digital twin, […] digital twin […] […] a storage […] […] a data structure […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

Claim 13

Claim 13 recites the following additional limitations: […] receive […] a first input associated with a first event; […] receiving, from a [data source] associated with events, a data structure indicating a hierarchy of event types; […] receive a second input associated with the probable second event; The receive steps are well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: "i. Receiving or transmitting data over a network" "iii. Electronic recordkeeping" "iv. Storing and retrieving information in memory" "v. Electronically scanning or extracting data from a physical document" "i.
Determining the level of a biomarker in blood by any means" The receive steps are WURC and, as previously demonstrated, insignificant extra-solution activity, and, under MPEP 2106.05(d) and 2106.05(g), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: […], from one or more sensors and at an interface associated with a digital twin, […] […] a storage […] […] a data structure […] […] digital twin […] These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Should it be found otherwise, these limitations merely limit the abstract idea to a particular technological field and, under MPEP 2106.05(h), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.

Claims 1, 7, and 13 lack additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. Claims 1, 7, and 13 are ineligible.

Dependent Claims: The dependent claims are also ineligible for the following reasons. Note: The hardware of the sensors, computing device, and CRM has already been addressed as failing to confer eligibility under MPEP 2106.05(f) and will not be further addressed with respect to the dependent claims. Also, a claim identifying what data represents merely limits the abstract idea to a particular technological field and, under MPEP 2106.05(h), fails to confer eligibility.
Claims 2, 8, and 14 transmit, to a user device, a visualization associated with the updated prediction. This is insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: "e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent." "iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display" "Printing or downloading generated menus." Therefore, under MPEP 2106.05(g), this does not integrate the abstract idea into a practical application at Step 2A, Prong 2. This is also WURC activity similar to the MPEP 2106.05(d) examples: "i. Receiving or transmitting data over a network" "iii. Electronic recordkeeping" "iv. Storing and retrieving information in memory" "vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price." Because this is WURC and insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g), this limitation fails to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Claims 2, 8, and 14 fail to provide any additional limitations that confer eligibility. Claims 2, 8, and 14 are ineligible.

Claims 5 and 16 recite: wherein determining that the first event is associated with the one or more probable second events comprises: inputting, to a machine learning model, the first input; and receiving, from the machine learning model, output indicating the one or more probable second events. This merely states that a generic machine learning model conducts the inference of the determining step of the respective independent claim.
The inference is an element of the abstract idea for the same reasons as the respective determining step in the independent claims. The use of the machine learning model for the inference is the use of a generic computing element that, under MPEP 2106.05(f), fails to confer eligibility. Claims 5 and 16 fail to provide any additional limitations that confer eligibility. Claims 5 and 16 are ineligible. Claims 6 and 17 further comprising: filtering the first input in order to generate the updated prediction based on the second input. Filtering data to make a determination is an evaluation practically performable in the mind or with aid of pen and paper, so it is a mental process, an element of the abstract idea. Claims 6 and 17 fail to provide any additional limitations that confer eligibility. Claims 6 and 17 are ineligible. Claims 9 and 18 wherein […], to select the model, are configured to: calculate a corresponding cost and a corresponding error for each model of the plurality of possible models; and Calculation of cost and error is both (1) an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, a mental process, an abstract idea; and (2) a mathematical calculation, a mathematical concept, an abstract idea. select the model based on the corresponding cost and the corresponding error for the model. Selection of a model based on data is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, a mental process, an abstract idea. These abstract idea elements merge with the abstract idea of the respective independent claims. Claims 9 and 18 fail to provide any additional limitations that confer eligibility. Claims 9 and 18 are ineligible.
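For reference, the "calculate cost and error, then select" step that the rejection characterizes as a mental process and mathematical concept amounts to the following kind of computation. This is a minimal, hypothetical Python sketch; the candidate models, their cost and error values, and the equal weighting scheme are assumptions for illustration and are not taken from the claims or from THIR.

```python
# Hypothetical candidate models with a corresponding cost and error each.
candidate_models = {
    "bearing_failure": {"cost": 5.0, "error": 0.10},
    "overheating": {"cost": 2.0, "error": 0.25},
    "lubricant_leak": {"cost": 1.0, "error": 0.40},
}

def select_model(models, cost_weight=0.5, error_weight=0.5):
    # Normalize cost against the most expensive model so it is
    # comparable to the error rate, then select the model with the
    # lowest weighted sum of cost and error.
    max_cost = max(m["cost"] for m in models.values())
    def score(name):
        m = models[name]
        return cost_weight * (m["cost"] / max_cost) + error_weight * m["error"]
    return min(models, key=score)

print(select_model(candidate_models))
```

As the sketch shows, each step (a handful of multiplications, additions, and a comparison) is the sort of calculation the Office Action argues is practically performable with pen, paper, and a calculator.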
Claims 10 and 19 wherein the context associated with the current state of the digital twin comprises a location associated with the digital twin, a time associated with the digital twin, or a current function associated with the digital twin. This merely characterizes what data represents, which merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility. Claims 10 and 19 fail to provide any additional limitations that confer eligibility. Claims 10 and 19 are ineligible. Claims 11 and 20 wherein the context associated with the event comprises a location associated with the event, a time associated with the event, or a current function associated with the event. This merely characterizes what data represents, which merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility. Claims 11 and 20 fail to provide any additional limitations that confer eligibility. Claims 11 and 20 are ineligible. Claim 12 receive one or more additional inputs based on the selected model This receive step is WURC and insignificant extra-solution activity and fails to confer eligibility for the same reasons as the receive/receiving steps in the respective independent claims. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. 
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-8, 10-17, and 19-20: Thiruvenkatanathan (THIR) Claims 1-8, 10-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by US 2024/0353825 A1 to Thiruvenkatanathan et al. (THIR). Claim 1 Regarding claim 1, THIR teaches: A method, comprising: receiving, from one or more sensors and at an interface associated with a digital twin, a first input associated with a first event; (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein.” [0124] “As described with respect to FIG. 7, the plurality of selections can be correlated with a plurality of workflows, where each workflow of the plurality of workflows can define a set of associated time series data elements in addition to other elements in some aspects. The time series data can originate with or be derived using any of the data sources described herein. For example, the time series data elements can be received from a plurality of sensors, a plurality of edge devices, or a combination thereof. Based on the plurality of selections from the users, a first workflow of the plurality of workflows can be identified.
For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” [0127] “Once the first set of time series data elements and/or features are retrieved by the system, an event can be identified at step 752 using an anomaly detection process and/or event detection as described herein.” [0125] “Once the first workflow is identified, a first set of time series data elements that are associated with the first workflow can be retrieved. In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features. In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.” – Sensor data is received and a workflow for an event is identified, along with similar “neighbor” workflows associated with other likely events. [0126] “In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” – Multiple associated events can be identified, and a user can select the workflow most likely to address a root cause. That is, an overheating event (first event) can be associated with a bearing failure event (second event) and might call for addressing the current problem differently. The interface for receiving the data is associated with a digital twin that mimics the real-world processes using the data.) determining that the first event is associated with one or more probable second events, (THIR [0126] “For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” – Multiple associated events can be identified, and a user can select the workflow most likely to address a root cause. That is, an overheating event (first event) can be associated with a bearing failure event (second event) and might call for addressing the current problem differently.) wherein the determining that the first event is associated with one or more probable events comprises: receiving, from a storage associated with events, a data structure indicating a hierarchy of event types; and determine, based on the hierarchy of event types, the one or more probable second events. (THIR [0024] “The resulting associations between the data, the models, and the workflows of the users looking at data, running models, and finding outputs can be used to develop knowledge graphs over time, which can occur across users and locations.
The knowledge graph can then define relationships between facts or data (e.g., the selected sensor data, event identifications, etc.) and knowledge such as the identification of events.” [0025] “The use of the associations between the models and data can allow for the contextual drive workflows to be captured as noted above. In some aspects, this can allow for an identification of the proper model to use in context based on the data being referenced by a user.” [0026] “The contextual workflows can also be used to identify thresholds for detecting events or anomalies within the data. The analysis of data across many users can establish clusters of information. The continued analysis and association of the clusters with models and results can be used to identify when an event is present and when an event is absent in the data. The system can then set thresholds for anomaly detection (e.g., detecting when an event is present even if not identifying a specific event) based on feedback from users. The thresholds may define values, ranges, and/or model outputs, any of which can be based on a plurality of time series data elements such as combination of data elements and the like as combinatorial readings. This information can also be included in a knowledge graph as part of the learnings of the system.” [0131] “Within the system using the workflows and feedback to update and tune the system, an optional knowledge encoding engine 856 can be used to encode the knowledge, for example, using a knowledge graph to store the corresponding information. In this process, the queries obtained from the users on the user interface 710 can be correlated between two or more uses as described herein. When the user queries across the users are determined to be correlated, the resulting queries can be identified as related, for example as determining a workflow as described herein. A knowledge graph can then be generated based on the correlation of the queries. 
A knowledge graph can generally define or contain facts and relationships. In some aspects, the time series data elements can represent facts, and the correlations between the time series data elements such as the correlations can represent relationships between the facts. In some aspects, the workflows can represent the relationships between time series data identified as being correlated between the workflows. The resulting knowledge graphs and/or relationships can be stored in the knowledge store 821. In some aspects, the resulting knowledge encoding can be used with the initial workflow selection process as part of the process 800.” [0122] “Once the system described with respect to FIG. 7 is implemented, the workflow neighbors can be used as part of a knowledge graphing and generation process within the system.” – A knowledge graph, a hierarchical representation of relationship data, is used to relate events and find related neighbor workflows. [0126] “Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. 
The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” – The neighbor workflows indicate neighbor events that are associated by root causes, solution processes, and models. Overheating and bearing failure are examples of events that are related in the knowledge graph, and their workflows are identified as related based on the knowledge graph.) refraining from processing the first input for a period of time, wherein the period of time is indicated by the data structure that indicates relationships between different types of events in the hierarchy of event types; and (THIR [0124] “For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” – The system will access all neighbor workflows associated with each event that has sufficiently common data. This means that some time will be spent processing a first event (e.g., overheating) and some time will be spent processing a second event (e.g., a bearing failure). This means that some time will be spent not processing the first event, which would last for a period of time. Also, there is no claimed order to the delay relative to the other steps in this claim. There was a very large delay to the processing of the first event prior to the occurrence of the first event. Also, [0120] “When presented, the user can select to view the recommended time series data and/or features, or the user can dismiss or ignore the recommendation.
When the user elects to view the time series data and/or features, the information can be displayed on the user interface 710, and the correlation or similarity score for the time series data and/or features can be increased within the workflow neighbor group. Conversely, if the user dismisses or ignores the recommendation, the correlation or similarity score for the time series data and/or features can be decreased within the workflow neighbor group.” – A user notices a little overheating, and the user ignores it. However, upon realizing that the overheating is a data “neighbor” of bearing failure per the neighbor workflows, the user decides to act on the more serious bearing issue after a delay from initially ignoring the overheating issue. This represents a delay in (refraining from) processing the data associated with the first event (overheating). FURTHER, [0027] “As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. “Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time.” [0123] “As shown in FIG. 8, the method 800 can begin with a plurality of users 8 interacting with a user interface 710. In addition to the interactions with users, the user interface can be in signal communication with the time series data source, a source (e.g., a database, etc.) of workflow neighbor classifications 854, and optionally a source of encoded knowledge such as a knowledge graph.
” – All data is timestamped and saved to the database (a data structure) that also includes the knowledge graph. Also, time series data is sampled periodically, so processing is always delayed at least by the time between samples. The time between samples is “indicated” in the time series data by showing the time stamps with the intervals for sampling. FURTHER STILL, [0090] “In some embodiments, historical data on features obtained from the time series data, optionally along with historical selections and feedback, can be used to train the first machine learning model 410. Over time, the machine learning model 410 can be re-trained or updated using the received selection(s), and the re-trained machine learning model 410 can then re-identify one or more features in subsequent time series data that is received by the machine learning model 410. For example, the historical data set can be updated over time based on the newly received features, time series data, and selections. The updated historical data can then be used to update (e.g., re-train, adjust, etc.) the first machine learning model to take into account the new information. The updating of the first machine learning model can take place after each set of feedback occurs, periodically at defined intervals, or upon any other suitable trigger or triggering event. The updated historical data can be labeled data and include both the features, any identified feature sets, one or more time series data components, and potential outcomes, results, or solutions associated with the features and time series data.” [0125] “In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features.
In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.” – The periodic retraining of the machine learning model, or retraining in response to a stimulus, illustrates that elements of the knowledge graph and time series data are processed after a delay, with the time series data indicating time and the periodicity also being based on time, and so the period is “indicated.”) updating a prediction associated with the digital twin using the first input based on expiry of the period of time, or updating a prediction associated with the digital twin using second input associated with the one or more probable second events based on receiving the second input. (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein. Based on the time series data passing to the systems 858A as disclosed herein that can utilize feedback and inputs to train the various models, an output providing event identification and/or other information on the system. The system 858A can then be duplicated as a digital twin 858B. In general, a digital twin is a digital representation of an actual system or process. In some aspects, the digital twin 858B can be obtained based on the modeling and feedback mechanisms as described herein. The two systems 858A and 858B can be linked so that as the actual data, models, and feedback occurring within the main system 858A are updated, they can be similarly updated and copied over to the digital twin 858B.
The two systems may then be essentially the same with regard to the models, knowledge encoding, and output formats.” [0087] “The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data.” – The selected model(s) can be used to update a potential predicted event, confirming it as the actual event. This would be reflected in the digital twin, as a digital mirror of the real scenario experienced. THIR teaches this feature in a number of scenarios, for example: (1) A user detects overheating based on temperature data. The system looks for neighbor workflows and brings up a bearing failure. The user is not convinced by acoustic data that it is a bearing issue and decides to address the overheating using a lubricant workflow. There was a delay (period of time) associated with considering the bearing alternative, but the user elects to go with the lubricant workflow associated with the first detected event, overheating. The update of the prediction is based on the expiry of the period because the prediction is delayed until at least that time elapses. (2) The same scenario, but the user recognizes the bearing failure from the acoustic (second input) and temperature (first input) data and confirms there is a bearing failure (second event).
(3) Those scenarios except that neither event is detected, and the determination that there is no issue is delayed by (based on) the period of time. (4) Any scenario in which the user ignores the first issue for a period of time.) Claim 7 Regarding claim 7, THIR teaches: A device, comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to: (THIR [0135] “Any of the systems and methods disclosed herein can be carried out on a computer or other device comprising a processor. FIG. 9 illustrates a computer system 900 suitable for implementing one or more embodiments disclosed herein such as the acquisition device or any portion thereof. The computer system 900 includes a processor 782 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 784, read only memory (ROM) 786, random access memory (RAM) 788, input/output (I/O) devices 790, and network connectivity devices 792. The processor 782 may be implemented as one or more CPU chips.”) receive, from one or more sensors and at an interface associated with a digital twin, input associated with a new event; (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein.” [0124] “As described with respect to FIG. 7, the plurality of selections can be correlated with a plurality of workflows, where each workflow of the plurality of workflows can define a set of associated time series data elements in addition to other elements in some aspects. The time series data can originate with or be derived using any of the data sources described herein.
For example, the time series data elements can be received from a plurality of sensors, a plurality of edge devices, or a combination thereof. Based on the plurality of selections from the users, a first workflow of the plurality of workflows can be identified. For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” [0127] “Once the first set of time series data elements and/or features are retrieved by the system, an event can be identified at step 752 using an anomaly detection process and/or event detection as described herein.” [0125] “Once the first workflow is identified, a first set of time series data elements that are associated with the first workflow can be retrieved. In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features. In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.” – Sensor data is received and a workflow for an event is identified, along with similar “neighbor” workflows associated with other likely events. [0126] “In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” – Multiple associated events can be identified, and a user can select the workflow most likely to address a root cause. That is, an overheating event (first event) can be associated with a bearing failure event (second event) and might call for addressing the current problem differently. The interface for receiving the data is associated with a digital twin that mimics the real-world processes using the data.) wherein the determining that the new event is associated with the probable subsequent event comprises: receiving, from a storage associated with events, a data structure indicating a hierarchy of event types; and determine, based on the hierarchy of event types, the one or more probable second events. (THIR [0024] “The resulting associations between the data, the models, and the workflows of the users looking at data, running models, and finding outputs can be used to develop knowledge graphs over time, which can occur across users and locations. The knowledge graph can then define relationships between facts or data (e.g., the selected sensor data, event identifications, etc.) and knowledge such as the identification of events.” [0025] “The use of the associations between the models and data can allow for the contextual drive workflows to be captured as noted above. In some aspects, this can allow for an identification of the proper model to use in context based on the data being referenced by a user.” [0026] “The contextual workflows can also be used to identify thresholds for detecting events or anomalies within the data. The analysis of data across many users can establish clusters of information.
The continued analysis and association of the clusters with models and results can be used to identify when an event is present and when an event is absent in the data. The system can then set thresholds for anomaly detection (e.g., detecting when an event is present even if not identifying a specific event) based on feedback from users. The thresholds may define values, ranges, and/or model outputs, any of which can be based on a plurality of time series data elements such as combination of data elements and the like as combinatorial readings. This information can also be included in a knowledge graph as part of the learnings of the system.” [0131] “Within the system using the workflows and feedback to update and tune the system, an optional knowledge encoding engine 856 can be used to encode the knowledge, for example, using a knowledge graph to store the corresponding information. In this process, the queries obtained from the users on the user interface 710 can be correlated between two or more uses as described herein. When the user queries across the users are determined to be correlated, the resulting queries can be identified as related, for example as determining a workflow as described herein. A knowledge graph can then be generated based on the correlation of the queries. A knowledge graph can generally define or contain facts and relationships. In some aspects, the time series data elements can represent facts, and the correlations between the time series data elements such as the correlations can represent relationships between the facts. In some aspects, the workflows can represent the relationships between time series data identified as being correlated between the workflows. The resulting knowledge graphs and/or relationships can be stored in the knowledge store 821. 
In some aspects, the resulting knowledge encoding can be used with the initial workflow selection process as part of the process 800.” [0122] “Once the system described with respect to FIG. 7 is implemented, the workflow neighbors can be used as part of a knowledge graphing and generation process within the system.” – A knowledge graph, a hierarchical representation of relationship data, is used to relate events and find related neighbor workflows. [0126] “Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” – The neighbor workflows indicate neighbor events that are associated by root causes, solution processes, and models. The overheating and bearing failures are examples of related events that are related in the knowledge graph, and their workflows are identified as related based on the knowledge graph.) 
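To make the mapping concrete, the “data structure indicating a hierarchy of event types” limitation describes the kind of lookup sketched below. This is a hypothetical Python sketch; the event names mirror THIR's bearing-diagnostic example, but the dictionary structure and the helper `probable_second_events` are assumptions for illustration, not the claimed or cited implementation.

```python
# Hypothetical hierarchy of event types, loosely modeled on THIR's
# bearing-diagnostic example: each first event maps to its probable
# second events (its "neighbors" in the knowledge graph).
EVENT_HIERARCHY = {
    "overheating": ["bearing_failure", "lubricant_leak"],
    "bearing_failure": ["axle_damage"],
}

def probable_second_events(first_event, hierarchy=EVENT_HIERARCHY):
    # Determine, based on the hierarchy of event types, the one or
    # more probable second events associated with the first event.
    return hierarchy.get(first_event, [])

print(probable_second_events("overheating"))
```

Framed this way, the determination is a simple traversal of stored relationships, which is consistent with the Office Action's characterization of the knowledge graph as a hierarchical representation of relationship data.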
refrain from processing the input for a period of time, wherein the period of time is indicated by the data structure that indicates relationships between different types of events in the hierarchy of event types; (THIR [0124] “For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” – The system will access all neighbor workflows associated with each event that has sufficiently common data. This means that some time will be spent processing a first event (e.g., overheating) and some time will be spent processing a second event (e.g., a bearing failure). This means that some time will be spent not processing the first event, which would last for a period of time. Also, there is no claimed order to the delay relative to the other steps in this claim. There was a very large delay to the processing of the first event prior to the occurrence of the first event. Also, [0120] “When presented, the user can select to view the recommended time series data and/or features, or the user can dismiss or ignore the recommendation. When the user elects to view the time series data and/or features, the information can be displayed on the user interface 710, and the correlation or similarity score for the time series data and/or features can be increased within the workflow neighbor group. Conversely, if the user dismisses or ignores the recommendation, the correlation or similarity score for the time series data and/or features can be decreased within the workflow neighbor group.” – A user notices a little overheating, and the user ignores it.
However, upon realizing that the overheating is a “neighbor” in data of bearing failure from neighbor workflows, the user decides to act on the more serious latter bearing issue after a delay from initially ignoring the overheating issue. This represents a delay in (refraining of) processing the data associated with the first event (overheating). FURTHER [0027] As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. “Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time. [0123] “As shown in FIG. 8 , the method 800 can begin with a plurality of users 8 interacting with a user interface 710. In addition to the interactions with users, the user interface can be in signal communication with the time series data source, a source (e.g., a database, etc.) of workflow neighbor classifications 854, and optionally a source of encoded knowledge such as a knowledge graph. “– All data is timestamped and saved to the database (a data structure) that also includes the knowledge graph. Also, time series data is sampled periodically, so processing is always delayed at least by the time between samples. The time between samples is “indicated” in the time series data by showing the time stamps with the intervals for sampling. 
FURTHER STILL, [0090] “In some embodiments, historical data on features obtained from the time series data, optionally along with historical selections and feedback, can be used to train the first machine learning model 410. Over time, the machine learning model 410 can be re-trained or updated using the received selection(s), and the re-trained machine learning model 410 can then re-identify one or more features in subsequent time series data that is received by the machine learning model 410. For example, the historical data set can be updated over time based on the newly received features, time series data, and selections. The updated historical data can then be used to update (e.g., re-train, adjust, etc.) the first machine learning model to take into account the new information. The updating of the first machine learning model can take place after each set of feedback occurs, periodically at defined intervals, or upon any other suitable trigger or triggering event. The updated historical data can be labeled data and include both the features, any identified feature sets, one or more time series data components, and potential outcomes, results, or solutions associated with the features and time series data.” [0125] “In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features. 
In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.” – The periodic retraining of the machine learning model, or retraining in response to a stimulus, illustrates that elements of the knowledge graph and time series data are processed after a delay, with the time series data indicating time and the periodicity also being based on time, such that the period of time is “indicated.”) determine that the event triggers an update for a prediction associated with the digital twin, (THIR [0035] “Overall, the system described herein allows for the interactions of a plurality of users with an application interface to be used to identify and isolate workflows or patterns based on user inputs and/or selections (e.g., any type of feedback), recommend selections for the user(s) based on prior input selections from the plurality of users using the same or similar workflows, and/or automatically drive or trigger correlations between selected time series data components or traces based on identified workflows.” [0044] “During use, the user may select various information based on the presentation of the information including the time series data, the features, and/or the indications of the anomalies or events. For example, an alarm or alert may be triggered by the time series data and/or features. In response, a user may select various time series data streams from certain sensors to try to diagnose the cause of the alarm or alert. The selections of the specific data streams can be considered feedback from the user.” – The system can include an alarm that triggers when data indicates that a prediction/status needs to be updated for the system and its digital twin.) select a model, from a plurality of possible models, based on a context associated with a current state of the digital twin or a context associated with the event; and (THIR [0133] “As shown in FIG.
8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein.” [0126] “In some aspects, a model selection step 850 can be optionally carried out. In the model selection process, the user selections, any identified workflow neighbors, the corresponding time series data and/or features, and optionally any known relationships as identified by the knowledge encoding 856 step can be passed to the model selection step. In some aspects, a plurality of models can be associated with the system, which can be stored in storage 811. Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. 
The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” [0113] “The resulting correlation or similarity scores can be compared to a similarity score threshold or thresholds to determine if the correlations represent the same or similar workflows, as described in more detail below. In some aspects, the correlation or similarity scores can be determined using normalized correlation ratings based on a number of implicit and explicit correlations between pairs of users. For example, when there are four sensor calls, a match (e.g., an explicit or implicit correlation) of three of the four sensor calls could result in a correlation score of 0.75. Other correlation scoring can be used such as the use of Pearson's coefficient based collaborative filtering to provide similarity ratings based on the implicit and explicit correlations. This process can include computing pairwise correlation between implicit and explicit scores of each user using rows with no missing values. The resulting correlated workflows can be stored in the workflow neighbor database 721.” – The neighbor workflows in the workflow neighbor database provide a neighbor context that is associated with one or both of a current state of a digital twin (e.g., overheating caused by bearing failure) or a context that associates the first event with the second event, i.e., the similarity that makes them neighbors. Based on one or more of these, one or more models are selected.) update the prediction associated with the digital twin based on the selected model and the input. (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction.
Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein. Based on the time series data passing to the systems 858A as disclosed herein that can utilize feedback and inputs to train the various models, an output providing event identification and/or other information on the system. The system 858A can then be duplicated as a digital twin 858B. In general, a digital twin is a digital representation of an actual system or process. In some aspects, the digital twin 858B can be obtained based on the modeling and feedback mechanisms as described herein. The two systems 858A and 858B can be linked so that as the actual data, models, and feedback occurring within the main system 858A are updated, they can be similarly updated and copied over to the digital twin 858B. The two systems may then be essentially the same with regard to the models, knowledge encoding, and output formats.” [0087] “The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data.” – The selected model(s) can be used to confirm a potential predicted event as the actual event. This would be reflected in the digital twin, as a digital mirror of the real scenario experienced.)
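As an illustrative aside on the model-selection mapping above, THIR [0113] describes comparing correlation or similarity scores against a threshold, with a match of three of four sensor calls yielding a score of 0.75. A minimal sketch of that scoring follows; the function and variable names, and the threshold value, are hypothetical illustrations, not taken from the reference.

```python
# Illustrative sketch only: the similarity scoring described in THIR [0113],
# where matching 3 of 4 sensor calls between two workflows yields 0.75.
# All names and the threshold value are hypothetical, not from the reference.
def similarity_score(calls_a: set[str], calls_b: set[str]) -> float:
    """Fraction of one workflow's sensor calls that match another workflow's."""
    if not calls_a:
        return 0.0
    return len(calls_a & calls_b) / len(calls_a)

workflow_a = {"acoustic", "temperature", "location", "pressure"}
workflow_b = {"acoustic", "temperature", "location"}

score = similarity_score(workflow_a, workflow_b)
print(score)  # 0.75

# Workflows whose score meets a threshold are treated as "neighbors."
SIMILARITY_THRESHOLD = 0.7  # hypothetical threshold value
print(score >= SIMILARITY_THRESHOLD)  # True
```

The reference also mentions Pearson's-coefficient-based collaborative filtering as an alternative scoring; the set-overlap ratio above sketches only the explicit 3-of-4 example.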
Claim 13 Regarding claim 13, THIR teaches: A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: (THIR [0135] “Any of the systems and methods disclosed herein can be carried out on a computer or other device comprising a processor. FIG. 9 illustrates a computer system 900 suitable for implementing one or more embodiments disclosed herein such as the acquisition device or any portion thereof. The computer system 900 includes a processor 782 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 784, read only memory (ROM) 786, random access memory (RAM) 788, input/output (I/O) devices 790, and network connectivity devices 792. The processor 782 may be implemented as one or more CPU chips.) receive, from one or more sensors and at an interface associated with a digital twin, a first input associated with a first event; (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein. [0124] “As described with respect to FIG. 7 , the plurality of selections can be correlated with a plurality of workflows, where each workflow of the plurality of workflows can define a set of associated time series data elements in addition to other elements in some aspects. The time series data can originate with or be derived using any of the data sources described herein. For example, the time series data elements can be received from a plurality of sensors, a plurality of edge devices, or a combination thereof. 
Based on the plurality of selections from the users, a first workflow of the plurality of workflows can be identified. For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” [0127] “Once the first set of time series data elements and/or features are retrieved by the system, an event can be identified at step 752 using an anomaly detection process and/or event detection as described herein.” For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” [0125] “Once the first workflow is identified, a first set of time series data elements that are associated with the first workflow can be retrieved. In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features. In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.”– Sensor data is received and a workflow for an event is identified, along with similar “neighbor” workflows associated with other likely events. In some aspects, a plurality of models may be associated with a workflow. 
[0126] “For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” – Multiple associated events can be identified, and a user can select the workflow most likely to address a root cause. That is, an overheating event (first event) can be associated with a bearing failure event (second event) and might call for addressing the current problem differently. The interface for receiving the data is associated with a digital twin that mimics the real-world processes using the data.) determine that the first event is associated with a probable second event (THIR [0126] “For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow.” – Multiple associated events can be identified, and a user can select the workflow most likely to address a root cause. That is, an overheating event (first event) can be associated with a bearing failure event (second event) and might call for addressing the current problem differently.) wherein determining that the first event is associated with the probable second event comprises: receiving, from a storage associated with events, a data structure indicating a hierarchy of event types, and determining, based on the hierarchy of event types, the probable second event; (THIR [0024] “The resulting associations between the data, the models, and the workflows of the users looking at data, running models, and finding outputs can be used to develop knowledge graphs over time, which can occur across users and locations. The knowledge graph can then define relationships between facts or data (e.g., the selected sensor data, event identifications, etc.)
and knowledge such as the identification of events.” [0025] “The use of the associations between the models and data can allow for the contextual drive workflows to be captured as noted above. In some aspects, this can allow for an identification of the proper model to use in context based on the data being referenced by a user.” [0026] “The contextual workflows can also be used to identify thresholds for detecting events or anomalies within the data. The analysis of data across many users can establish clusters of information. The continued analysis and association of the clusters with models and results can be used to identify when an event is present and when an event is absent in the data. The system can then set thresholds for anomaly detection (e.g., detecting when an event is present even if not identifying a specific event) based on feedback from users. The thresholds may define values, ranges, and/or model outputs, any of which can be based on a plurality of time series data elements such as combination of data elements and the like as combinatorial readings. This information can also be included in a knowledge graph as part of the learnings of the system.” [0131] “Within the system using the workflows and feedback to update and tune the system, an optional knowledge encoding engine 856 can be used to encode the knowledge, for example, using a knowledge graph to store the corresponding information. In this process, the queries obtained from the users on the user interface 710 can be correlated between two or more uses as described herein. When the user queries across the users are determined to be correlated, the resulting queries can be identified as related, for example as determining a workflow as described herein. A knowledge graph can then be generated based on the correlation of the queries. A knowledge graph can generally define or contain facts and relationships. 
In some aspects, the time series data elements can represent facts, and the correlations between the time series data elements such as the correlations can represent relationships between the facts. In some aspects, the workflows can represent the relationships between time series data identified as being correlated between the workflows. The resulting knowledge graphs and/or relationships can be stored in the knowledge store 821. In some aspects, the resulting knowledge encoding can be used with the initial workflow selection process as part of the process 800.” [0122] “Once the system described with respect to FIG. 7 is implemented, the workflow neighbors can be used as part of a knowledge graphing and generation process within the system.” – A knowledge graph, a hierarchical representation of relationship data, is used to relate events and find related neighbor workflows. [0126] “Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. 
The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” – The neighbor workflows indicate neighbor events that are associated by root causes, solution processes, and models. The overheating and bearing failures are examples of events that are related in the knowledge graph, and their workflows are identified as related based on the knowledge graph.) refrain from processing the first input for a period of time, wherein the period of time is indicated by the data structure that indicates the relationships between different types of events in the hierarchy of event types; (THIR [0124] “For example, the plurality of selections can be used with the workflow neighbor classifications 854, which can comprise the plurality of workflows that are identified, to identify at least a first workflow. As more workflows are identified over time, the new workflows can be stored in the workflow neighbor classifications 854 and accessed based on the plurality of selections.” – The system will access all neighbor workflows associated with each event that has sufficiently common data. This means that some time will be spent processing a first event (e.g., overheating) and some time will be spent processing a second event (e.g., a bearing failure). Accordingly, some time will be spent not processing the first event, which would last for a period of time. Also, there is no claimed order of the delay relative to the other steps in this claim; there was a very large delay in the processing of the first event prior to the occurrence of the first event. Also, [0120] “When presented, the user can select to view the recommended time series data and/or features, or the user can dismiss or ignore the recommendation.
When the user elects to view the time series data and/or features, the information can be displayed on the user interface 710, and the correlation or similarity score for the time series data and/or features can be increased within the workflow neighbor group. Conversely, if the user dismisses or ignores the recommendation, the correlation or similarity score for the time series data and/or features can be decreased within the workflow neighbor group.” – A user notices a little overheating, and the user ignores it. However, upon realizing that the overheating is a “neighbor” in the data of a bearing failure from the neighbor workflows, the user decides to act on the more serious bearing issue after a delay, having initially ignored the overheating issue. This represents a delay in processing the data associated with the first event (overheating). FURTHER, [0027] “As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. “Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time.” [0123] “As shown in FIG. 8 , the method 800 can begin with a plurality of users 8 interacting with a user interface 710. In addition to the interactions with users, the user interface can be in signal communication with the time series data source, a source (e.g., a database, etc.) of workflow neighbor classifications 854, and optionally a source of encoded knowledge such as a knowledge graph.”
– All data is timestamped and saved to the database (a data structure) that also includes the knowledge graph. Also, time series data is sampled periodically, so processing is always delayed at least by the time between samples. The time between samples is “indicated” in the time series data by the time stamps showing the sampling intervals. FURTHER STILL, [0090] “In some embodiments, historical data on features obtained from the time series data, optionally along with historical selections and feedback, can be used to train the first machine learning model 410. Over time, the machine learning model 410 can be re-trained or updated using the received selection(s), and the re-trained machine learning model 410 can then re-identify one or more features in subsequent time series data that is received by the machine learning model 410. For example, the historical data set can be updated over time based on the newly received features, time series data, and selections. The updated historical data can then be used to update (e.g., re-train, adjust, etc.) the first machine learning model to take into account the new information. The updating of the first machine learning model can take place after each set of feedback occurs, periodically at defined intervals, or upon any other suitable trigger or triggering event. The updated historical data can be labeled data and include both the features, any identified feature sets, one or more time series data components, and potential outcomes, results, or solutions associated with the features and time series data.” [0125] “In some aspects, each workflow can comprise metadata associated with or identifying the time series data elements and/or features associated with the workflow neighbors. The metadata can be used to call the associated time series data elements and/or features.
In some aspects, the workflow can track the first set of time series data elements and display the first set of time series data elements and/or features once the first workflow is identified.” – The periodic retraining of the machine learning model, or retraining in response to a stimulus, illustrates that elements of the knowledge graph and time series data are processed after a delay, with the time series data indicating time and the periodicity also being based on time, such that the period of time is “indicated.”) receive a second input associated with the probable second event; (THIR [0028] “Similarly, wear in a train wheel bearing can be determined based on temperature sensor data along with acoustic information for the wheel.” [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330.
For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model.” – The system receives acoustic data providing an indication that the overheating is caused by bearing failures.) select a model, from a plurality of possible models, based on a context associated with a current state of the digital twin or a context associated with the probable second event; and (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein. [0126] “In some aspects, a model selection step 850 can be optionally carried out. In the model selection process, the user selections, any identified workflow neighbors, the corresponding time series data and/or features, and optionally any known relationships as identified by the knowledge encoding 856 step can be passed to the model selection step. In some aspects, a plurality of models can be associated with the system, which can be stored in storage 811. Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. 
For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” [0113] “The resulting correlation or similarity scores can be compared to a similarity score threshold or thresholds to determine if the correlations represent the same or similar workflows, as described in more detail below. In some aspects, the correlation or similarity scores can be determined using normalized correlation ratings based on a number of implicit and explicit correlations between pairs of users. For example, when there are four sensor calls, a match (e.g., an explicit or implicit correlation) of three of the four sensor calls could result in a correlation score of 0.75. Other correlation scoring can be used such as the use of Pearson's coefficient based collaborative filtering to provide similarity ratings based on the implicit and explicit correlations. This process can include computing pairwise correlation between implicit and explicit scores of each user using rows with no missing values. 
The resulting correlated workflows can be stored in the workflow neighbor database 721.” – The neighbor workflows in the workflow neighbor database provide a neighbor context that is associated with one or both of a current state of a digital twin (e.g., overheating caused by bearing failure) or a context that associates the first event with the second event, that similarity that makes them neighbors. Based on one or more of these, one or more models are selected.) update a prediction associated with the digital twin based on the selected model and the second input. (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction. Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein. Based on the time series data passing to the systems 858A as disclosed herein that can utilize feedback and inputs to train the various models, an output providing event identification and/or other information on the system. The system 858A can then be duplicated as a digital twin 858B. In general, a digital twin is a digital representation of an actual system or process. In some aspects, the digital twin 858B can be obtained based on the modeling and feedback mechanisms as described herein. The two systems 858A and 858B can be linked so that as the actual data, models, and feedback occurring within the main system 858A are updated, they can be similarly updated and copied over to the digital twin 858B. The two systems may then be essentially the same with regard to the models, knowledge encoding, and output formats.” [0087] “The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. 
The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data.” – The selected model(s) can be used to update a potential predicted event to a confirmed actual event. This would be reflected in the digital twin, as a digital mirror of the real scenario experienced.) Claims 2, 8, and 14 Regarding claims 2, 8, and 14, THIR teaches the features of the respective independent claims and further teaches: transmit/ting, to a user device, a visualization associated with the updated prediction. (THIR [0119] “Any of the processes to present and display recommended time series data and/or features as described herein can be used with the user interface 710 to present additional information associated with the workflow neighbor.” [0071] “When one or more features of the feature set are being displayed, the remaining features or information about the event can also be displayed. For example, if one or more frequency domain features obtained from the acoustic signal are used to determine the presence of sand ingress at a location within the wellbore, one or more additional features such as other frequency domain features, a pressure signal, and/or a temperature feature can also be determined to be part of the feature set and displayed or recommended for display on the application interface 220.
If a feature such as a temperature feature is displayed and feedback from the user closes the display, this can be seen as an indication to the second machine learning model 230 that the identified temperature feature may not be properly part of the feature set.” [0046] “In some embodiments, the additional information associated with the workflow may be automatically displayed. The workflow can also comprise an order of presentation of the information, a layout of the information or the like, which can be provided to the user.” [0127] “Within this process, the models, anomalies, and or events can be displayed on the user interface 710, and user feedback can be generated as described herein. The resulting feedback can be used to identify if the appropriate workflow is selected by the system and if the event exists. The feedback can then be used to automatically update the system as described herein.” [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330. 
For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model. The second machine learning model can then use the set of features and events to identify similar occurrences in historical data. For example, a feature set can be identified along with past occurrences involving the feature set. The historical data can then be examined to identify a prediction of the time to failure for the wheel bearing based on the same or similar set of features. The model can then provide an estimate of the time to failure along with potential maintenance or other actions that could extend the time to failure. The resulting actions can then be recommended or presented on the application interface. Multiple solutions (e.g., multiple options for maintenance, repairs, etc.) may be possible simply based on one of the features or events, and the remaining features can be used to identify the closest solution. For example, an identified wheel bearing failure at a given location may be caused by a first cause when a correlated acoustic reading is within a first range, and correlated to a second cause when the acoustic reading is within a second range or rate of change. The system and the second machine learning model may consider all of the related features in finding the solution and/or predictive maintenance schedule for the wheel bearing failure, thereby improving diagnostic workflows as well as providing improved resolutions or work plans for correcting any issues with the train.” – The system is configured to transmit to the user device a visualization associated with the updated prediction.) 
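The time-to-failure prediction described in the [0087] passage quoted above (identifying a feature set, finding the same or similar set among historical occurrences, and reading off an estimate) can be sketched as follows. This sketch is illustrative only; the records, feature names, and use of Jaccard overlap are assumptions made for this example, not disclosures of THIR:

```python
# Illustrative sketch (assumed data and matching rule) of THIR [0087]:
# match the current feature set against historical occurrences and
# report the time to failure of the closest past occurrence.
HISTORY = [
    # (feature set observed in past data, observed days to failure)
    ({"bearing_acoustic_high", "temp_rising"}, 14),
    ({"bearing_acoustic_high", "temp_stable"}, 60),
    ({"lubricant_pressure_drop"}, 30),
]

def predict_time_to_failure(current_features):
    """Return the days-to-failure of the historical occurrence whose
    feature set overlaps the current one most (Jaccard similarity)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    best = max(HISTORY, key=lambda record: jaccard(current_features, record[0]))
    return best[1]

# A current set matching both features of the first record yields that
# record's historical estimate of 14 days.
estimate = predict_time_to_failure({"bearing_acoustic_high", "temp_rising"})
```

A real system, as the quoted passage goes on to describe, would also surface the maintenance actions associated with the matched occurrence and use the remaining features to discriminate between candidate causes.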
Claims 5 and 16 Regarding claims 5 and 16, THIR teaches the features of the respective claims from which claims 5 and 16 depend and further teaches: (wherein the one or more instructions, that cause the device to determine that the first event is associated with a probable second event, cause the device to:/wherein determining that the first event is associated with the one or more probable second events comprises:) (THIR [0135] “Any of the systems and methods disclosed herein can be carried out on a computer or other device comprising a processor. FIG. 9 illustrates a computer system 900 suitable for implementing one or more embodiments disclosed herein such as the acquisition device or any portion thereof. The computer system 900 includes a processor 782 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 784, read only memory (ROM) 786, random access memory (RAM) 788, input/output (I/O) devices 790, and network connectivity devices 792. The processor 782 may be implemented as one or more CPU chips.”) input/ting, to a machine learning model, the first input; and receiv/ing, from the machine learning model, output indicating the one or more probable second events. (THIR [0054] “The second machine learning model 130 can be configured to generate one or more recommendations for a time series data component, feature, or indicator of an anomaly as an output of the model based on the one or more selections that are received as input to the second machine learning model 130. As described herein, the features can be generated by functions or models within the system using the time series data, and indicators of anomalies or events can determined from the time series data and/or features. The recommendations can be for features generated by the system that are correlated to the current workflow obtained through feedback in the application interface.
This can include features that correlate to those features and/or time series data components being displayed, even if the feedback has not requested the features and/or time series data components. The recommendations can represent insights into additional features or data that may be related but may not be apparent to a user as being related or part of a problem within the setting in which the time series data is being provided. Any of the recommendations generated as output by the second machine learning model 130 can be sent to the application interface 110.” [0070] “In some embodiments, the second machine learning model 230 can also determine feature sets, which can represent features and/or time series data components that are related. The feature sets can be determined using similarity scores and/or using first principles models. The second machine learning model 230 can initially base feature sets using the similarity scores and/or the first principle models and identify the features as being related. The features within the feature sets can be used in presenting or recommending additional features as part of the output of the second machine learning model 230. The feedback can then be used to verify that the features within the feature sets are related. For example, if a feature is identified as being part of a feature set and is presented or recommended for viewing on the application interface, but the feedback consistently indicates that the feature is not related to the other features in the feature set, the second machine learning model 230 can determine that the feature is not part of the feature set. Additional features can also be identified as being part of a feature set based on user feedback even if the initial similarity scores and/or first principles models do not identify the feature as part of a feature set. 
Depending on the amount of data in the time series data, a plurality of feature sets can be identified within the time series data and/or the features obtained based on the time series data. Any given feature can be part of one or more feature sets identified by the system.” [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330. For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model. The second machine learning model can then use the set of features and events to identify similar occurrences in historical data. For example, a feature set can be identified along with past occurrences involving the feature set. The historical data can then be examined to identify a prediction of the time to failure for the wheel bearing based on the same or similar set of features. 
The model can then provide an estimate of the time to failure along with potential maintenance or other actions that could extend the time to failure. The resulting actions can then be recommended or presented on the application interface. Multiple solutions (e.g., multiple options for maintenance, repairs, etc.) may be possible simply based on one of the features or events, and the remaining features can be used to identify the closest solution. For example, an identified wheel bearing failure at a given location may be caused by a first cause when a correlated acoustic reading is within a first range, and correlated to a second cause when the acoustic reading is within a second range or rate of change. The system and the second machine learning model may consider all of the related features in finding the solution and/or predictive maintenance schedule for the wheel bearing failure, thereby improving diagnostic workflows as well as providing improved resolutions or work plans for correcting any issues with the train.” – THIR teaches determining that a first event (e.g., a temperature measurement) is associated with a probable second event (e.g., a bearing failure) by inputting the first event to a machine learning model and receiving output indicating the probable second event.) Claims 6 and 17 Regarding claims 6 and 17, THIR teaches the features of the respective claims from which claims 6 and 17 depend and further teaches: filter/ing the first input in order to generate the updated prediction based on the second input. (THIR [0120] “When presented, the user can select to view the recommended time series data and/or features, or the user can dismiss or ignore the recommendation. When the user elects to view the time series data and/or features, the information can be displayed on the user interface 710, and the correlation or similarity score for the time series data and/or features can be increased within the workflow neighbor group.
Conversely, if the user dismisses or ignores the recommendation, the correlation or similarity score for the time series data and/or features can be decreased within the workflow neighbor group. This allows feedback in the form of user interactions to further strengthen the correlation or similarity scores to help define the workflow neighbor definitions. Once the correlation and similarity scores are updated, they can be stored in the workflow neighbor database 721.” – Ignoring an input is filtering an input. This claim is taught by the afore-demonstrated scenario in which the first overheating event is ignored (filtered) until a subsequent detection of an acoustic anomaly. Also, [0064] “The output of the sensors can be provided to the first machine learning model 210 as a time series data stream. Within the first machine learning model 210, one or more functions or models can be performed to derive features such as statistical features from the time series data. The time series data can be pre-processed using various techniques such as denoising, filtering, and/or transformations to provide data that can be processed to provide the features.” – The data can be filtered for clarity to better generate the updated prediction based on the second input.) Claims 10 and 19 Regarding claims 10 and 19, THIR teaches the features of the respective claims from which claims 10 and 19 depend and further teaches: wherein the context associated with the current state of the digital twin comprises a location associated with the digital twin, a time associated with the digital twin, or a current function associated with the digital twin. (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction.
Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein.” [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330. For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model. The second machine learning model can then use the set of features and events to identify similar occurrences in historical data. For example, a feature set can be identified along with past occurrences involving the feature set. The historical data can then be examined to identify a prediction of the time to failure for the wheel bearing based on the same or similar set of features. The model can then provide an estimate of the time to failure along with potential maintenance or other actions that could extend the time to failure. 
The resulting actions can then be recommended or presented on the application interface. Multiple solutions (e.g., multiple options for maintenance, repairs, etc.) may be possible simply based on one of the features or events, and the remaining features can be used to identify the closest solution. For example, an identified wheel bearing failure at a given location may be caused by a first cause when a correlated acoustic reading is within a first range, and correlated to a second cause when the acoustic reading is within a second range or rate of change. The system and the second machine learning model may consider all of the related features in finding the solution and/or predictive maintenance schedule for the wheel bearing failure, thereby improving diagnostic workflows as well as providing improved resolutions or work plans for correcting any issues with the train.” [0027] “As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. “Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time.” – An exemplary current function of the system in THIR is the use of a train and its digital twin. This would also involve position/location information of the train, and time data associated with any element of the train.) 
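For reference, the workflow-neighbor similarity scoring that THIR [0113] describes earlier in this mapping (a match of three of four sensor calls yielding a correlation score of 0.75, plus Pearson's coefficient computed over paired implicit/explicit scores with rows containing missing values dropped) can be sketched as below. The threshold and data values are assumptions for illustration; the reference recites only that a threshold is used:

```python
import math

def match_score(calls_a, calls_b):
    """Fraction of sensor calls in calls_a that also appear in calls_b,
    e.g. a 3-of-4 match yields 0.75 (per the example in THIR [0113])."""
    if not calls_a:
        return 0.0
    shared = sum(1 for call in calls_a if call in set(calls_b))
    return shared / len(calls_a)

def pearson(xs, ys):
    """Pearson's coefficient over paired implicit/explicit scores,
    keeping only complete pairs (rows with no missing values)."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None and y is not None]
    if len(pairs) < 2:
        return 0.0
    mx = sum(x for x, _ in pairs) / len(pairs)
    my = sum(y for _, y in pairs) / len(pairs)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy) if sx and sy else 0.0

SIMILARITY_THRESHOLD = 0.7  # assumed value; the reference names no number

score = match_score(["acoustic", "temp", "location", "vibration"],
                    ["acoustic", "temp", "location", "pressure"])
is_neighbor = score >= SIMILARITY_THRESHOLD  # 0.75 >= 0.7, so a neighbor
```

Workflows whose scores clear the threshold would be stored as neighbors (the workflow neighbor database 721 in the quoted passages) and supply the neighbor context used for model selection.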
Claim 11 Regarding claim 11, THIR teaches the features of claim 7 and further teaches: wherein the context associated with the event comprises a location associated with the event, a time associated with the event, or a current function associated with the event. (THIR [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330.
For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model. The second machine learning model can then use the set of features and events to identify similar occurrences in historical data. For example, a feature set can be identified along with past occurrences involving the feature set. The historical data can then be examined to identify a prediction of the time to failure for the wheel bearing based on the same or similar set of features. The model can then provide an estimate of the time to failure along with potential maintenance or other actions that could extend the time to failure. The resulting actions can then be recommended or presented on the application interface. Multiple solutions (e.g., multiple options for maintenance, repairs, etc.) may be possible simply based on one of the features or events, and the remaining features can be used to identify the closest solution. For example, an identified wheel bearing failure at a given location may be caused by a first cause when a correlated acoustic reading is within a first range, and correlated to a second cause when the acoustic reading is within a second range or rate of change. The system and the second machine learning model may consider all of the related features in finding the solution and/or predictive maintenance schedule for the wheel bearing failure, thereby improving diagnostic workflows as well as providing improved resolutions or work plans for correcting any issues with the train.” [0027] “As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. 
“Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time.” – An exemplary current function of the system in THIR is the use of a train and its digital twin in the context of a train on rails experiencing an event (e.g., overheating). This would also involve position/location information of the train, and time data associated with any element of the train.) Claim 12 Regarding claim 12, THIR teaches the features of claim 7 and further teaches: wherein the one or more processors are further configured to: receive one or more additional inputs based on the selected model. (THIR [0031] “In addition, the models can consider all of the available features to determine which ones may be related. By observing the user feedback, the related features can be correlated and presented to a user either as a related feature or a recommendation for a related feature. Based on the continued user feedback, the system can learn which features are properly related and which features, even if appearing to be related, are not related in certain situations. As described herein, the system can make initial recommendations as users start to use the system, or the system can rely on user feedback to define the feature sets (e.g., related time series data, features, or the like). In these embodiments, the user feedback can be used as input along with the time series data and/or features to train a model to identify the time series data and/or features as members of a feature set. 
In some embodiments, the feedback can be used to label the input data (e.g., the time series data and/or the feature sets), and the labeled data can then be used to train the model(s). The model(s) can be trained over time or retrained as the user feedback is obtained, which may provide an up to date model as a plurality of users use the system over time.” [0053] “In some embodiments, the second machine learning model 130 can receive one or more selections from the application interface 110 as input, for example, via the machine learning encoder 115. In some embodiments, the selections received as input by the first machine learning model 120 and the selections received as input by the second machine learning model 130 are the same selections; alternatively, the application interface 110 and the machine learning encoder 115 can be configured to send a first set of selections as input to the first machine learning model 120 and a second set of selections as input to the second machine learning model 130, where the first and second sets do not include any of the same selections; alternatively, the application interface 110 and the machine learning encoder 115 can be configured to send a first set of selections as input to the first machine learning model 120 and a second set of selections as input to the second machine learning model 130, where the first and second sets have at least one selection in common.” [0126] “In some aspects, a model selection step 850 can be optionally carried out. In the model selection process, the user selections, any identified workflow neighbors, the corresponding time series data and/or features, and optionally any known relationships as identified by the knowledge encoding 856 step can be passed to the model selection step. In some aspects, a plurality of models can be associated with the system, which can be stored in storage 811. 
Based on the data passed to the model selection step, one or more models associated with the time series data elements and/or features, the workflow neighbors, and/or the encoded knowledge can be identified. In some aspects, each workflow may be associated with a single model such as a single event for a process, which can be selected based on the identified workflow passed to the model selection step. In some aspects, a plurality of models may be associated with a workflow. For example, a workflow for diagnosing a bearing on a vehicle may be associated with models for bearing failures, overheating, and lubricant leaks in order to identify potentially multiple problems with the bearing based on the bearing diagnostic workflow. In this aspect, a plurality of models associated with one or more workflows can be retrieved from the model storage 811. When one or more models are selected in the model selection step 850, any data relied upon by the one or more model can be obtained for use with the model. The data may already be passed to the model selection step, or a separate request to the time series data source can be made to retrieve and/or determine the appropriate time series data elements and/or features.” – THIR teaches receiving one or more additional inputs based on the selected model, i.e., the inputs the selected model is designed to ingest.) Claim 20 Regarding claim 20, THIR teaches the features of claim 13 and further teaches: wherein the context associated with the probable second event comprises a location associated with the probable second event, a time associated with the probable second event, or a current function associated with the probable second event. (THIR [0133] “As shown in FIG. 8B, the use of the system 858 to actively monitor a system or process using any of the embodiments disclosed herein can be used to provide a digital twin for testing, evaluation, and prediction.
Within these embodiments, the time series data can be the same and/or originate from any of the same sources as described herein.” [0087] “As another example in the transportation context, the time series data can comprise data from one or more sensors associated with a train, which can include acoustic data, temperature sensors, location sensors, or the like. The time series data can be provided to the first machine learning model 310 to determine the presence of one or more events or anomalies associated with the train, such as the status of the wheel bearings. The resulting event identifications can be provided to the application interface along with one or more time series data components. For example, the acoustic data associated with the wheel bearings can be displayed along with one or more temperature sensors. Based on the feedback from a user through the application interface 320, the presence of an event such as an anticipated wheel bearing failure can be confirmed as well as any associated features within the time series data. The resulting feedback can be passed to the second machine learning model 330. For example, an identification of the anticipated wheel bearing failure along with associated time series data such as the corresponding acoustic data and/or temperature data and the like can be provided as inputs to the second machine learning model. The second machine learning model can then use the set of features and events to identify similar occurrences in historical data. For example, a feature set can be identified along with past occurrences involving the feature set. The historical data can then be examined to identify a prediction of the time to failure for the wheel bearing based on the same or similar set of features. The model can then provide an estimate of the time to failure along with potential maintenance or other actions that could extend the time to failure. 
The resulting actions can then be recommended or presented on the application interface. Multiple solutions (e.g., multiple options for maintenance, repairs, etc.) may be possible simply based on one of the features or events, and the remaining features can be used to identify the closest solution. For example, an identified wheel bearing failure at a given location may be caused by a first cause when a correlated acoustic reading is within a first range, and correlated to a second cause when the acoustic reading is within a second range or rate of change. The system and the second machine learning model may consider all of the related features in finding the solution and/or predictive maintenance schedule for the wheel bearing failure, thereby improving diagnostic workflows as well as providing improved resolutions or work plans for correcting any issues with the train.” [0027] “As used herein, the term “time series data” refers to data that is collected over time and can be labeled (e.g., timestamped) such that the particular time which the data value is collected is associated with the data value. “Time series data” can be displayed to a user and updated periodically to show new time series data along with historical time series data over a corresponding time period. Examples of time series data can include any sensor inputs output over time, derivatives of sensor data, combinations of sensor data, model outputs derived from sensor data, or other time based data inputs, observed data (e.g., healthcare diagnosis, lab testing, etc.), or any other data entered over time.” – An exemplary context of the system in THIR is the use of a train and its digital twin, and with respect to a second event (e.g., a bearing failure). This would also involve position/location information of the train, and time data associated with any element of the train.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 9 and 18: THIR and Straat Claim(s) 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0353825 A1 to Thiruvenkatanathan et al. (THIR) in view of NPL: “Supervised learning in the presence of concept drift: a modeling framework” by Straat et al. Claims 9 and 18 Regarding claims 9 and 18, THIR teaches the features of the respective claims from which claims 9 and 18 depend and further teaches: (wherein the one or more processors, to select the model, are configured to:/ wherein the one or more instructions, that cause the device to select the model, cause the device to:) (THIR [0135] “Any of the systems and methods disclosed herein can be carried out on a computer or other device comprising a processor. FIG. 9 illustrates a computer system 900 suitable for implementing one or more embodiments disclosed herein such as the acquisition device or any portion thereof. The computer system 900 includes a processor 782 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 784, read only memory (ROM) 786, random access memory (RAM) 788, input/output (I/O) devices 790, and network connectivity devices 792. The processor 782 may be implemented as one or more CPU chips.) 
THIR likely implies this limitation (THIR [0074] “The first machine learning model 210 and/or the second machine learning model 230 can be trained using supervised or unsupervised learning techniques.” – THIR teaches supervised learning, which involves determining the error between the labeled data and the generated predictions and applying a cost function that collects the error into a loss to be backpropagated through the weights of the machine learning model. See Wikipedia NPL: “Supervised Learning,” under Approaches and algorithms, ‘Backpropagation’; and “Backpropagation,” under Overview; these illustrate using error to calculate a loss that is backpropagated through the model to train it.), but appears to fail to explicitly teach the following, which THIR in view of Straat teaches: calculate a corresponding cost and a corresponding error for each model of the plurality of possible models; and select the model based on the corresponding cost and the corresponding error for the model. (Straat Page 104, Section 2.2.1: “The training of a neural network with real-valued output […] for a regression problem is frequently guided by the quadratic deviation of the network output from the target values [15, 22, 23]. It serves as a cost function which evaluates the network performance with respect to a single example as [equation image omitted] In stochastic or on-line gradient descent, updates of the weight vectors are based on the presentation of a single example at time step μ [equation image omitted] where the gradient is evaluated in w_k^(μ−1). […] [equation image omitted]” – This determines the error (y – τ in Equation 9) and the cost function (e^μ of Equation 9) that are backpropagated through the machine learning model to train it until a condition is met (e.g., a cost or error threshold), at which point the machine learning model is selected for use.)
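For illustration, the claimed selection step, calculating a quadratic cost and an error for each candidate model and selecting a model based on both, can be sketched as follows. This is a hypothetical sketch only; the model names, validation data, and tiebreaking rule are invented for clarity and are not drawn from THIR or Straat:

```python
# Hypothetical sketch of the claimed step: compute a quadratic cost and a
# mean error for every candidate model on labeled validation data, then
# select the model based on both. Model names and data are invented for
# illustration and do not come from THIR or Straat.

def quadratic_cost(model, examples):
    """Summed per-example cost e = 1/2 * (y - tau)^2 over the labeled set."""
    return sum(0.5 * (model(x) - tau) ** 2 for x, tau in examples)

def mean_error(model, examples):
    """Mean absolute deviation of predictions from their targets."""
    return sum(abs(model(x) - tau) for x, tau in examples) / len(examples)

def select_model(models, examples):
    """Score every candidate and pick the lowest cost, breaking ties on error."""
    scored = [(quadratic_cost(m, examples), mean_error(m, examples), name)
              for name, m in models.items()]
    scored.sort(key=lambda t: (t[0], t[1]))
    cost, error, name = scored[0]
    return name, cost, error

# Invented candidates predicting a target value from one sensor reading
models = {
    "linear": lambda x: 2.0 * x,
    "offset": lambda x: 2.0 * x + 5.0,
}
validation = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
best, cost, error = select_model(models, validation)
# best is "linear": its cost (0.03) and mean error (~0.13) are the lowest
```

Here cost is the primary selection criterion with error as a tiebreaker; the claim language only requires that selection be "based on" both quantities, so other weightings would serve equally well as an illustration.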
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the generic mention of supervised learning in THIR with the specific training methods of Straat, because a person of ordinary skill in the art, motivated by THIR's aim of consistently updating the training of the machine learning model as conditions change, would look to Straat, which teaches methods for effectively correcting/retraining models to account for model performance drift over time. (THIR [0033] “The corresponding time series data and/or features can then be labeled with the identified problem and used to retrain or update the machine learning model. This feedback cycle can then serve to provide an improved model used to identify problems and/or solutions for future identifications.” [0046] “The resulting feedback can be used to retain or update the model as an input or through labeling of the data. For example, the feedback may indicate that a specific piece of information is not desired by the user, which may indicate that the model has selected an incorrect workflow based on the available information. The feedback can then be used to further refine the first machine learning model for future occurrences of the specific set of information.” [0048] “In some embodiments, the first machine learning model 120 can be retrained or updated with each received feedback signal. This can create a dynamic signal that can update the system while the user is using the system.” [0061] “Any feedback received as part of the workflow presentation can be used to verify that the specific data and/or features are related such that the feedback can be used to label the data and update the training data to include the new information. The model can then be refined based on the new labeled data in addition to the original training data.
The system can then learn and present the workflows as well as updating the system to self-learn and update the data used with the system.” [0073] “The first machine learning model 210 can be configured to update itself using the received selections and identify, using the updated first machine learning model 210 a second set of features of the time series data (e.g., a second anomaly).” [0090] “Over time, the machine learning model 410 can be re-trained or updated using the received selection(s), and the re-trained machine learning model 410 can then re-identify one or more features in subsequent time series data that is received by the machine learning model 410. For example, the historical data set can be updated over time based on the newly received features, time series data, and selections. The updated historical data can then be used to update (e.g., re-train, adjust, etc.) the first machine learning model to take into account the new information. The updating of the first machine learning model can take place after each set of feedback occurs, periodically at defined intervals, or upon any other suitable trigger or triggering event. The updated historical data can be labeled data and include both the features, any identified feature sets, one or more time series data components, and potential outcomes, results, or solutions associated with the features and time series data.”; Straat Abstract “We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two example types of learning systems: prototype-based learning vector quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student–teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data. Properties of the target task are considered to be non-stationary due to drift processes while the training is performed.
Different types of concept drift are studied, which affect the density of example inputs only, the target rule itself, or both. By applying methods from statistical physics, we develop a modelling framework for the mathematical analysis of the training dynamics in non-stationary environments. Our results show that standard LVQ algorithms are already suitable for the training in non-stationary environments to a certain extent. However, the application of weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes. Furthermore, we investigate gradient-based training of layered neural networks with sigmoidal activation functions and compare with the use of rectified linear units. Our findings show that the sensitivity to concept drift and the effectiveness of weight decay differs significantly between the two types of activation function.”) Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
(From Prior Office Action) US 2023/0367992 A1 to Chakravarthy et al. (Teaches using knowledge graphs with machine learning models) US 2021/0271877 A1 to Tran et al. (Teaches using ontological knowledge databases akin to knowledge graphs for training and inference with machine learning models) US 2022/0075515 A1 to Floren et al. (Teaches machine learning techniques using ontologies akin to knowledge graphs for training and inference with machine learning models) US 2023/0394198 A1 to Mukherjee et al. (Teaches machine learning techniques using ontologies akin to knowledge graphs for training and inference with machine learning models in a digital twin environment) US 2023/0161934 A1 to Ganesan et al. (Teaches machine learning techniques using ontologies akin to knowledge graphs for training and inference with machine learning models) Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571) 272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.M.W./Examiner, Art Unit 2188 /RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188
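The on-line gradient-descent update that the §103 combination relies on, a quadratic per-example cost whose gradient is applied one labeled example at a time so that the model tracks drift as new feedback arrives, can be sketched in miniature. This is a hypothetical illustration with an invented one-weight linear model and invented data; it is not code from either cited reference:

```python
# Hypothetical sketch of on-line (stochastic) gradient descent with a
# quadratic per-example cost e = 1/2 * (y - tau)^2, the update pattern the
# Straat reference formalizes for non-stationary (drifting) targets. The
# one-weight linear model and the data below are invented for illustration.

def sgd_step(w, x, tau, lr=0.1):
    """One on-line update: w <- w - lr * d/dw [ 1/2 * (w*x - tau)^2 ]."""
    y = w * x                      # model output for this single example
    return w - lr * (y - tau) * x  # gradient of the quadratic cost w.r.t. w

def retrain_on_feedback(w, stream, lr=0.1):
    """Fold each labeled feedback example into the weight as it arrives."""
    for x, tau in stream:
        w = sgd_step(w, x, tau, lr)
    return w

# Concept drift: the true slope shifts from 1.0 (early) to 2.0 (late).
early = [(1.0, 1.0), (2.0, 2.0)]
late = [(1.0, 2.0), (2.0, 4.0)]
w = retrain_on_feedback(0.0, early + late * 10)
# w tracks the drifted target and ends close to the new slope of 2.0
```

Because each update uses only the newest labeled example, older (pre-drift) examples fade from the weight automatically, which is the "correcting/retraining for drift" behavior the motivation-to-combine statement attributes to the combination.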

Prosecution Timeline

Jul 19, 2022
Application Filed
Nov 01, 2025
Non-Final Rejection — §101, §102, §103
Jan 12, 2026
Response Filed
Feb 06, 2026
Final Rejection — §101, §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
12%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
