Prosecution Insights
Last updated: April 19, 2026
Application No. 17/884,760

SYSTEMS AND METHODS FOR AI INFERENCE PLATFORM

Final Rejection (§101, §103)
Filed: Aug 10, 2022
Examiner: PHAM, JESSICA THUY
Art Unit: 2121
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Palantir Technologies Inc.
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 3m
Allow Rate With Interview: 0%

Examiner Intelligence

Grants only 33% of cases.
Career Allow Rate: 33% (1 granted / 3 resolved; -21.7% vs TC avg)
Interview Lift: -33.3% (allow rate with vs. without an interview, across resolved cases)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 41 across all art units (38 currently pending)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)
Tech Center averages are estimates; based on career data from 3 resolved cases.
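The headline figures above are simple ratios. A minimal sketch reproduces them; the helper name is hypothetical, and the Tech Center average is not stated directly but is implied by the -21.7% delta shown:

```python
# Illustrative arithmetic behind the dashboard figures above.
# allow_rate() is a hypothetical helper; inputs come from the stats shown.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(granted=1, resolved=3)   # 1 granted / 3 resolved
print(f"Career allow rate: {career:.0f}%")   # ~33%

# Interview lift: allow rate with an interview minus without one.
# The dashboard shows 0% with an interview vs. 33.3% without.
lift = 0.0 - career
print(f"Interview lift: {lift:+.1f} pts")    # -33.3 pts

# TC average implied by the -21.7% delta against the 33% career rate.
tc_avg = career + 21.7
print(f"Implied TC 2100 average: {tc_avg:.1f}%")
```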

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment/Status of Claims
Claims 1, 8, 15, and 20 were amended. Claims 1-20 are pending and examined herein. Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Response to Arguments
Applicant’s arguments, see page 7, filed 12/15/2025, with respect to the 35 U.S.C. 112(b) rejection of claims 1-20 have been fully considered and are persuasive. The 35 U.S.C. 112(b) rejection of claims 1-20 has been withdrawn. Applicant's arguments filed 12/15/2025 regarding the 35 U.S.C. 101 rejection of claims 1-20 have been fully considered but they are not persuasive. Applicant argues, see pages 8-10, "In this regard, Applicant respectfully submits that "receiving, via a first data interface of a first model orchestrator of the one or more model orchestrators, sensor data from a second model orchestrator having a second data interface," "executing, on the first computing device, at least one of the plurality of models according to the model pipeline," "applying the model pipeline to the received sensor data," "receiving a model output from the model pipeline via a model interface of the one model orchestrator," and "generating an insight based at least in part on the model output, the insight is smaller than the sensor data in data size" (emphasis added) as recited in claim 1 include limitations that cannot be practically performed in the human mind." Applicant further states "Applying the rule in MPEP § 2106.04(a)(2)(III)(A), claim 1 does not fall into the grouping of mental processes." Examiner respectfully disagrees. MPEP 2106.04(a)(2)(III)(A) states "A Claim With Limitation(s) That Cannot Practically be Performed in the Human Mind Does Not Recite a Mental Process. 
Claims do not recite a mental process when they do not contain limitations that can practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitations." The steps recited by Applicant include a limitation that can be practically performed in the human mind, "generating an insight based at least in part on the model output, the insight is smaller than the sensor data in data size". A human mind can practically generate an insight based on data, and therefore, the limitation falls under the abstract idea grouping of mental process. The other steps recited by Applicant were not identified as abstract ideas, but rather as additional elements, in the previous office action, and thus are not evaluated in Eligibility Step 2A, Prong One. With respect to claim 8, see pages 13-14, Applicant argues, "In this regard, Applicant respectfully submits that a human mind, with or without physical aid, is not equipped to deploy a first model orchestrator to a first computing device, where the first model orchestrator transmits the real-time sensor data to a second model orchestrator of the one or more model orchestrators via a second data interface, the second model orchestrator includes the second data interface and an indication of a second model pipeline, and the second model orchestrator is hosted by a second computing device different from the first computing device." This limitation is not identified as an abstract idea, but rather as an additional element, and is not evaluated in Eligibility Step 2A, Prong One. See below 35 U.S.C. 101 rejection. Applicant further argues, see pages 10-11, "The recited limitations can facilitate significant improvements in efficiencies for model orchestration of machine learning models because, as explained in para. 
[0089] of the specification as filed, even as the data generated by sensors continue to increase in volume and complexity, the aforementioned method of orchestrating the models allows for efficient and consistent processing of noisy, high-scale data, reducing the amount of data that needs to be stored and transmitted, and/or enabling low-latency decision-making for near real-time actions, such as tasking a sensor to perform a job, as well as improving data processing and/or reducing the need to downlink data before taking action. As such, claim 1 as a whole integrates the alleged exception into a practical application of model orchestration of machine learning models. Therefore, amended claim 1 is not directed to an abstract idea even if one were to assume that claim 1 falls into the category of a mental process." Examiner respectfully disagrees. MPEP 2106.04(d)(1) states “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. 
The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel").” The specific improvement in the specification cited by Applicant, [0089], states "In some examples, deploying AI at the edge of space allows for efficient and consistent processing of noisy, high-scale data (e.g., GEO INT (geospatial intelligence) data, RF (radio frequency) data), reducing the amount of data that needs to be stored and transmitted and/or enabling low-latency decision-making for near real-time actions, such as tasking a sensor to perform a job. In certain examples, AIP deployed directly onboard spacecraft accelerates model deployment and improvement, improves data processing, and/or reduces the need to downlink data before taking action." Both improvements cited in the specification require the model orchestrator to be deployed in space/onboard spacecraft. As the claims do not recite the model orchestrator being deployed in space, the claim does not include the components that provide the improvement described in the specification. Applicant provides a substantially similar argument for claim 8, see pages 14-15, which is also not persuasive for the above reason. Applicant further argues, see pages 11-12, that "By using the processes that encompasses the aforementioned features (1) through (5) as recited, claim 1 provides a particular technical solution to a technical problem associated with model orchestration of machine learning models by specifically implementing various model orchestrators hosted on separate computing devices in order to receive and process the sensor data in order to generate the insight that achieves the smaller data size than the sensor data for further processing. Such a technical solution is unconventional and also significantly more than a well-understood, routine, conventional activity in the field." Examiner respectfully disagrees. 
MPEP 2106.04(d)(1) states “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel").” In this section, there is no citation of the specification by the Applicant and Examiner has not found any support for the claimed improvement in the specification. Thus, the disclosure does not provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Applicant provides a substantially similar argument for claim 8, see pages 15-16, which is also not persuasive for the above reason. Applicant’s arguments, see page 7, filed 12/15/2025, with respect to the rejection(s) of claim(s) 1-7 and 15-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of De Baynast De Septfontaines et al. 
(US 2016/0179063 A1), hereinafter De Baynast, and Dandekar (“Towards Autonomic Orchestration of Machine Learning Pipelines in Future Networks”, July 2021). Applicant’s arguments, see page 7, filed 12/15/2025, with respect to the rejection(s) of claim(s) 8-14 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of De Baynast De Septfontaines et al. (US 2016/0179063 A1), hereinafter De Baynast, Sharma (US 2024/0305689 A1), and Dandekar (“Towards Autonomic Orchestration of Machine Learning Pipelines in Future Networks”, July 2021).

Information Disclosure Statement
The information disclosure statement filed 4/25/2023 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but the information referred to therein has not been considered.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows. Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). 
Claims 1-14 are directed to a process and claims 15-20 are directed to a machine. All claims are directed to statutory categories and analysis proceeds. Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology. Regarding claim 1, the following claim elements are abstract ideas: generating an insight based at least in part on the model output, the insight is smaller than the sensor data in data size; (Generating an insight based on model output can be practically performed in the human mind. The human mind can produce an insight which is smaller than the sensor data in data size.) The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A method for using one or more model orchestrators, the method comprising: (A model orchestrator is a generic machine learning component. This is mere instructions to apply an exception.) 
receiving, via a first data interface of a first model orchestrator of the one or more model orchestrators, sensor data from a second model orchestrator having a second data interface, the first model orchestrator hosted by a first computing device, the second model orchestrator hosted by a second computing device including a sensor, each model orchestrator including an indication of a model pipeline, the model pipeline including a plurality of models, the first computing device being different from the second computing device; (Receiving data is an existing process. This amounts to mere instructions to apply an exception. See MPEP § 2106.05(f)(2). Specifying that it is received via a data interface is the insignificant extra-solution activity of selecting a particular data source. See MPEP § 2106.05, ‘Selecting a particular data source or type of data to be manipulated’, example iv.) executing, on the first computing device, at least one of the plurality of models according to the model pipeline; (The execution of a model is a generic machine learning process, which amounts to mere instructions to apply an exception.) applying the model pipeline to the received sensor data; (Applying a model pipeline is a generic machine learning process. This amounts to mere instructions to apply an exception.) receiving a model output from the model pipeline via a model interface of the one model orchestrator; (Receiving data is an existing process. This amounts to mere instructions to apply an exception. Specifying that it is received via a data interface is the insignificant extra-solution activity of selecting a particular data source. See MPEP § 2106.05, ‘Selecting a particular data source or type of data to be manipulated’, example iv.) wherein the method is performed using one or more processors. (This limitation recites generic computer components. This is mere instructions to apply an exception.) Regarding claim 2, the rejection of claim 1 is incorporated herein. 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline includes a first model and a second model running in sequence, wherein a model output of the first model is an input to the second model. (This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 3, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model. (This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 4, the rejection of claim 3 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model. 
(This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 5, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device. (Data interfaces are a generic computer component, which are used for the existing process of receiving data. Specifying that the data is collected by edge devices is the insignificant extra-solution activity of selecting a particular data source. See MPEP § 2106.05, ‘Selecting a particular data source or type of data to be manipulated’, examples ii-iv.) Regarding claim 6, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: transmitting the insight to a computing device via an output interface of the one model orchestrator. (Data transmission is an existing process and interfaces are generic computing components, which amounts to mere instructions to apply an exception.) Regarding claim 7, the rejection of claim 1 is incorporated herein. 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: receiving an updated model orchestrator that is updated from the one model orchestrator based at least in part on the insight, the updated model orchestrator including an updated model pipeline; and (Receiving data is an existing process and amounts to mere instructions to apply an exception.) applying the updated model pipeline to the received sensor data. (Applying a model pipeline is a generic machine learning process. This amounts to mere instructions to apply an exception.) Regarding claim 8, the following are abstract ideas: selecting one or more models based at least in part on a data characteristic, a processing characteristic, or the historical data; (One could practically in the human mind, select a model based on data. This is a mental process.) developing a first model pipeline including the one or more models, a first model orchestrator of one or more model orchestrators including an indication of the model pipeline; (The interpretation of “developing a model pipeline” is “designing a model pipeline” which can be practically performed in the human mind. This is a mental process.) generating a data interface for the first model orchestrator to interface with real-time sensor data; (Generating a data interface, interpreted as designing a data interface, can be practically performed in the human mind. This is a mental process.) generating a model interface for the first model orchestrator to interface with the model pipeline; (Generating a model interface, interpreted as designing a model interface, can be practically performed in the human mind. This is a mental process.) 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A method for managing one or more model orchestrators, the method comprising: (A model orchestrator is a generic machine learning component. This is mere instructions to apply an exception.) deploying the first model orchestrator to a first computing device, the first model orchestrator being configured to transmit the real-time sensor data to a second model orchestrator of the one or more model orchestrators via a second data interface, the second model orchestrator including the second data interface and an indication of a second model pipeline, the second model orchestrator being hosted by a second computing device different from the first computing device; (Deploying software and transmitting data are known processes in computing. This amounts to mere instructions to apply an exception.) receiving historical data; (Receiving data is an existing process which amounts to mere instructions to apply an exception.) wherein the method is performed using one or more processors. (This limitation recites generic computer components. This is mere instructions to apply an exception.) Regarding claim 9, the rejection of claim 8 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline includes a first model and a second model running in sequence, wherein a model output of the first model is an input to the second model. (This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. 
Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 10, the rejection of claim 8 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model. (This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 11, the rejection of claim 10 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model. (This claim recites generic machine learning components and processes, which amounts to mere instructions to apply an exception. Additionally, organizing the pipeline is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d)(II), list 3, example vi.) Regarding claim 12, the rejection of claim 8 is incorporated herein. 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device. (Data interfaces are a generic computer component, which are used for the existing process of receiving data. Specifying that the data is collected by edge devices is the insignificant extra-solution activity of selecting a particular data source. See MPEP § 2106.05, ‘Selecting a particular data source or type of data to be manipulated’, examples ii-iv.) Regarding claim 13, the rejection of claim 1 is incorporated herein. The following is an abstract idea: generating an output interface for the one model orchestrator to interface a computing device. (Generating a model interface, interpreted as designing a data interface, can be practically performed in the human mind. This is a mental process.) Regarding claim 14, the rejection of claim 1 is incorporated herein. The following are abstract ideas: updating the one model orchestrator based at least in part on the one or more feedbacks. (Updating the model orchestrator, interpreted as updating a design of a model orchestrator, can be practically performed in the human mind. This is a mental process.) The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: receiving one or more feedbacks regarding the one model orchestrator; and (This is the existing process of receiving data, which amounts to mere instructions to apply an exception.) 
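The managing flow recited in claims 8-14 (selecting models from historical data, developing a pipeline, deploying the orchestrator to a computing device, and updating it on feedback) can be sketched as follows. Every name and the selection heuristic here are illustrative assumptions, not the application's actual implementation:

```python
# Hypothetical sketch of the claim 8-14 managing flow; names and the
# selection heuristic are illustrative, not from the application.
from dataclasses import dataclass
from typing import Callable, List

Model = Callable[[list], list]  # stand-in for one model in a pipeline

@dataclass
class ModelOrchestrator:
    pipeline: List[Model]        # indication of a model pipeline (claim 8)
    host: str = "unassigned"     # computing device hosting the orchestrator

def select_models(historical: List[list], registry: List[Model]) -> List[Model]:
    """Select one or more models based on a data characteristic (claim 8).
    The 'characteristic' here is simply average record length, a placeholder."""
    avg_len = sum(len(h) for h in historical) / max(len(historical), 1)
    # Longer records get the full registry; otherwise one model suffices.
    return list(registry) if avg_len > 4 else list(registry[:1])

def deploy(orch: ModelOrchestrator, device: str) -> ModelOrchestrator:
    """Deploy the orchestrator to a computing device (claim 8)."""
    orch.host = device
    return orch

def update(orch: ModelOrchestrator, feedbacks: List[str]) -> ModelOrchestrator:
    """Update the orchestrator based on feedback (claim 14), e.g. by
    dropping the final pipeline stage when feedback flags it."""
    if any("drop-last-stage" in f for f in feedbacks):
        orch.pipeline = orch.pipeline[:-1]
    return orch

# Example: a two-stage pipeline selected from historical data, deployed,
# then updated after feedback flags the truncation stage.
registry: List[Model] = [lambda xs: sorted(xs), lambda xs: xs[:3]]
historical = [[1, 2, 3, 4, 5], [9, 8, 7, 6, 5, 4]]
orch = ModelOrchestrator(pipeline=select_models(historical, registry))
orch = deploy(orch, "device-1")
orch = update(orch, ["drop-last-stage: truncation loses data"])
print(orch.host, len(orch.pipeline))   # device-1 1
```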
Regarding claim 15, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A system for using one or more model orchestrators, the system comprising: (This limitation recites generic machine learning components. This is mere instructions to apply an exception.) one or more memories comprising instructions stored thereon; and (This limitation recites generic computer components. This is mere instructions to apply an exception.) one or more processors configured to execute the instructions and perform operations comprising: (This limitation recites generic computer components and processes. This is mere instructions to apply an exception.) The remainder of claim 15 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 16-20 recite substantially similar subject matter to claims 2-5 and 7 respectively and are rejected with the same rationale, mutatis mutandis.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1, 2, 6, 7, 15, 16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over De Baynast De Septfontaines et al. (US 2016/0179063 A1), hereinafter De Baynast, and Dandekar (“Towards Autonomic Orchestration of Machine Learning Pipelines in Future Networks”, July 2021). 
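Before the prior-art mapping, the data flow recited in claim 1 (receive sensor data via a data interface, apply a model pipeline with models running in sequence, receive the model output, and generate an insight smaller in data size than the sensor data) can be sketched. Everything below is an illustrative assumption, not the application's implementation:

```python
# Hedged sketch of the claim 1 data flow; all names are illustrative only.
from typing import Callable, List

Model = Callable[[list], list]  # stand-in for one model in the pipeline

def run_pipeline(models: List[Model], sensor_data: list) -> list:
    """Apply the model pipeline to received sensor data: models run in
    sequence (per claim 2), each output feeding the next model's input."""
    out = sensor_data
    for model in models:
        out = model(out)
    return out

def generate_insight(model_output: list, sensor_data: list) -> list:
    """Generate an insight smaller in data size than the raw sensor data,
    here by reducing the model output to a single aggregate value."""
    insight = [sum(model_output)]
    assert len(insight) < len(sensor_data)  # insight smaller than sensor data
    return insight

# Example: a two-stage pipeline (threshold filter, then scaling) applied
# to mock sensor data received over a data interface.
sensor_data = [3, 1, 4, 1, 5, 9, 2, 6]
pipeline: List[Model] = [
    lambda xs: [x for x in xs if x > 2],   # stage 1: threshold filter
    lambda xs: [2 * x for x in xs],        # stage 2: scaling model
]
output = run_pipeline(pipeline, sensor_data)
print(generate_insight(output, sensor_data))   # [54]
```

Reducing eight sensor readings to one aggregate value illustrates the "insight smaller than the sensor data in data size" limitation that the rejection treats as a mental process.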
Regarding claim 1, De Baynast teaches A method for using one or more model orchestrators, the method comprising: ([0004] states "A control system is described which has a communications interface receiving a live data stream of time stamped sensor data observed from a system to be controlled. The control system has an uploader configured to access a store of time-stamped sensor data from the live data stream; and a configuration manager configured to generate a plurality of pipeline configurations for analyzing the live data stream (or data retained from the live data stream). Each pipeline configuration comprises a plurality of components for analyzing data, an order of the components, and, if applicable, values of one or more parameters of each component." [0026] states "A non-exhaustive list of examples of components is: a moving average computation component, a component which computes a derivative of numerical values in a specified window of a time series, a component which detects seasonal features of a time series, such as an expected value of a variable per time of day, day of month features, a component which maintains a distribution of the time series values, a component which performs statistical tests of current readings against a distribution of the time series that has been maintained over time, a component comprising a signal processing filter such as a low-pass or high-pass filter, a regressor component, a linear predictor component, an auto-regressive model component, a classifier, a component for dimensionality reduction." [0028] states "The pipeline generator 100 is fully automated. It generates many possible pipelines using template and component library 104 as well as rules, thresholds or constraints on parameter values of the components." The control system, pipeline generator, and pipeline are interpreted as the model orchestrator.) 
… each model orchestrator of the one or more model orchestrators including an indication of a model pipeline, the model pipeline including a plurality of models; ([0020] states "Control system 112 receives data from sensors 110 which may be at the email servers 114 or may be remote from the email servers 114." [0022] states "Data from sensors 110 is input to the data analytics pipeline at the data analytics nodes 120." Therefore, the data analytics pipeline receives the data from the control system, interpreted as part of the model orchestrator. [0021] states "In addition, or alternatively, the control system 112 receives instructions from alerting component 122 and/or control component 124 of a data analytics pipeline implemented in one or more data analytics nodes 120. The data analytics nodes are computational nodes which carry out computations specified by the components of the pipeline." These instructions are interpreted as the indication of a model pipeline. [0026] states "A non-exhaustive list of examples of components is: a moving average computation component, a component which computes a derivative of numerical values in a specified window of a time series, a component which detects seasonal features of a time series such as an expected value of a variable per time of day, day of month features, a component which maintains a distribution of the time series values, a component which performs statistical tests of current readings against a distribution of the time series that has been maintained over time, a component comprising a signal processing filter such as a low-pass or high-pass filter, a regressor component, a linear predictor component, an auto-regressive model component, a classifier, a component for dimensionality reduction." Therefore, the components are interpreted as models, and the pipeline includes a plurality of models.)
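For illustration only, the mapped arrangement of an orchestrator holding an indication of an ordered pipeline of model components, applied in sequence to sensor data to yield a smaller insight, might be sketched as follows. All names and values here are hypothetical and are not drawn from De Baynast, Dandekar, or the claims themselves:

```python
# Illustrative sketch only: an orchestrator holding an "indication" of a model
# pipeline, i.e., an ordered list of model components (hypothetical names).

def moving_average(xs):
    # A moving-average "component" over a window of 3 samples; the output
    # necessarily has fewer values than the input sensor data.
    return [sum(xs[i:i + 3]) / 3 for i in range(len(xs) - 2)]

def threshold_classifier(xs):
    # A classifier "component" consuming the previous component's output.
    return [1 if x > 5.0 else 0 for x in xs]

class ModelOrchestrator:
    def __init__(self, pipeline):
        self.pipeline = pipeline  # the indication of a model pipeline

    def apply(self, sensor_data):
        out = sensor_data
        for model in self.pipeline:  # components run in the pipeline's order
            out = model(out)
        return out

orchestrator = ModelOrchestrator([moving_average, threshold_classifier])
insight = orchestrator.apply([1.0, 4.0, 7.0, 10.0, 13.0, 16.0])
# The insight (4 values) is smaller in data size than the sensor data (6 values).
```

The sequential chaining above also mirrors the claim 2 limitation, in which a model output of the first model is an input to the second model.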
loading the plurality of models according to the model pipeline; ([0049] states "A description of the selected pipeline configuration may be stored. The description comprises enough detail to enable operationalization of the selected pipeline. For example, the description has references to software in the template and component library 104 for implementing components in the specified order." [0050] states "To operationalize the selected pipeline, commands are sent 714 from the pipeline generator 100 to the data analytics nodes 120. For example, the commands instruct the data analytics nodes to instantiate the software referenced in the description of the pipeline configuration at the data analytics node." As the software that implements the components (which are models) of the pipeline is instantiated, the models are loaded according to the model pipeline.) applying the model pipeline to the received sensor data; ([0022] states "Data from sensors 110 is input to the data analytics pipeline at the data analytics nodes 120.") receiving a model output from the model pipeline … ([0024] states "The output of the pipeline comprises an output stream of higher level numerical or categorical values computed from the input data stream." [0024] further states "The output stream is used by a control component 124 to generate instructions to send to control system 112 to control the email servers" As the output stream is used, it must have been received.) generating an insight based at least in part on the model output, the insight is smaller than the sensor data in data size; ([0024] states "The output of the pipeline comprises an output stream of higher level numerical or categorical values computed from the input data stream." The higher level numerical or categorical values are interpreted as the insight. [0026] states "A component is a data processing component for use in a data analytics pipeline which computes one or more features of time stamped data. 
A component may be parameterized, in that it takes as input values of one or more parameters. For example, a window size, whether to take samples at random or in a specified manner, which type of average to compute, or other parameters. A non-exhaustive list of examples of components is: a moving average computation component, a component which computes a derivative of numerical values in a specified window of a time series, a component which detects seasonal features of a time series such as an expected value of a variable per time of day, day of month features, a component which maintains a distribution of the time series values, a component which performs statistical tests of current readings against a distribution of the time series that has been maintained over time, a component comprising a signal processing filter such as a low-pass or high-pass filter, a regressor component, a linear predictor component, an auto-regressive model component, a classifier, a component for dimensionality reduction." A moving average calculation would result in an insight being smaller than the sensor data in data size, as fewer values will be computed than sensor data available. For the component which detects seasonal features of a time series such as an expected value of a variable per time of day, the insight would also be smaller than the sensor data in size, as it is predicting one value versus the sensor data comprising time series data. The component for dimensionality reduction would also result in an insight smaller than the sensor data.) wherein the method is performed using one or more processors. ([0108] states "The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g.
in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible storage media include computer storage devices comprising computer-readable media such as disks, thumb drives, memory, etc. and do not include propagated signals. Propagated signals may be present in a tangible storage media, but propagated signals per se are not examples of tangible storage media. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.") De Baynast does not appear to explicitly teach receiving, via a first data interface of a first model orchestrator of the one or more model orchestrators, sensor data from a second model orchestrator having a second data interface, the first model orchestrator hosted by a first computing device, the second model orchestrator hosted by a second computing device including a sensor, … the first computing device being different from the second computing device; [receiving data] via a data interface of [a model orchestrator] [receiving output] via a model interface of the one model orchestrator; However, Dandekar—directed to analogous art—teaches receiving, via a first data interface of a first model orchestrator of the one or more model orchestrators, sensor data from a second model orchestrator having a second data interface, the first model orchestrator hosted by a first computing device, the second model orchestrator hosted by a second computing device including a sensor, … the first computing device being different from the second computing device; (Page 2 states "The ML function orchestration is carried out by Machine Learning Function Orchestrator (MLFO) in conjunction with the OAM orchestrator.
MLFO has four major functionalities, (1) Intent parsing, (2) Lifecycle management of ML pipelines, (3) Management of data and configuration of data sources, (4) Management of ML models which may include model selection, training, deployment etc." Page 2 also states "ML Pipeline subsystem: An ML application can logically be viewed as a chain of logical nodes i.e. a pipeline. This pipeline consists of data source, collector, preprocessor, model, policy and data sink. When the MLFO receives an intent it deploys a corresponding ML pipeline in the network infrastructure." Page 4 states "As seen in Figure 3, we assign one MLFO each for OSS, edge and smart factory domains. These MLFOs communicate using two interfaces, an intent based interface and a monitoring interface. Figure 4 shows the sequence diagram for ML pipeline orchestration workflow." The system that contains the OSS MLFO in the OSS domain is interpreted as the first model orchestrator and the system that contains the Factory MLFO in the Factory domain is interpreted as the second model orchestrator. As seen in Fig. 4, each MLFO (model orchestrator) has an intent based interface and a monitoring interface. Pages 4-5 state "In response to this the OSS MLFO deploys a ML pipeline to predict if there will be any QoS deterioration in the future.” Page 5 states "The OSS using its ML pipeline is able to predict that QoS experienced by the private users in the smart factory edge is going to deteriorate due to increase in number of public users." The sensor that detects number of public users in the factory is interpreted as the sensor. Therefore, the sensor data is received (number of public users) from the second model orchestrator. As the MLFOs are assigned in different domains, the model orchestrators are hosted by separate, different, computing devices.) 
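As a purely illustrative sketch of the claimed two-orchestrator arrangement (hypothetical names and transport; not an implementation from Dandekar or De Baynast), a second, sensor-side orchestrator might forward sensor data through its data interface to the first orchestrator's data interface on a different host:

```python
# Hypothetical sketch: two model orchestrators on different computing devices,
# each with its own data interface; the second (sensor-hosting) orchestrator
# forwards sensor readings to the first. All names are illustrative only.
import queue

class DataInterface:
    def __init__(self):
        self._q = queue.Queue()

    def send(self, data):
        self._q.put(data)  # stands in for a network transport between hosts

    def receive(self):
        return self._q.get()

class SensorOrchestrator:
    """Second orchestrator: co-located with the sensor, pushes readings out."""
    def __init__(self, interface):
        self.data_interface = interface

    def publish(self, reading, peer):
        peer.data_interface.send(reading)

class PipelineOrchestrator:
    """First orchestrator: receives sensor data via its first data interface."""
    def __init__(self, interface):
        self.data_interface = interface

    def poll(self):
        return self.data_interface.receive()

first = PipelineOrchestrator(DataInterface())
second = SensorOrchestrator(DataInterface())
second.publish({"public_users": 42}, first)
assert first.poll() == {"public_users": 42}
```

In the Dandekar mapping, the monitoring interface would play the role of the model interface, while the domain-separated MLFOs correspond to the two hosts sketched here.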
[receiving output] via a model interface of the one model orchestrator; (Page 4 states "This interface is used for collecting performance data of ML pipelines from lower level MLFO. The performance data might include statistics about model accuracy, resource utilisation, available resources etc. Monitoring interface may also be used to send event based asynchronous updates." The monitoring interface is interpreted as the model interface.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast with the teachings of Dandekar because, as Dandekar states on page 1, "Integration of any ML based solutions in mobile networks requires a mechanism which can deploy, update and tear down ML pipelines. This mechanism is called ML pipeline orchestration. In order to deploy solutions proposed in the AI/ML in 5G challenge – firstly, it is necessary for the ML pipeline orchestration to be autonomic, this means that ML pipelines should be able to orchestrate themselves. Secondly, multi‐domain ML pipeline orchestration is required as the ML based solution might also involve multiple pipelines across different network or operator domains." Regarding claim 2, the rejection of claim 1 is incorporated herein. De Baynast teaches wherein the model pipeline includes a first model and a second model running in sequence, wherein a model output of the first model is an input to the second model. ([0019] states "A data analytics pipeline is one or more data processing components connected together. In some examples, the components are connected in series so that output of a component earlier in the pipeline is used as input of an immediately subsequent component of the pipeline.") Regarding claim 6, the rejection of claim 1 is incorporated herein. De Baynast teaches transmitting the insight to a computing device via an output interface of the one model orchestrator.
([0050] states "The pipeline generator may optionally send commands to the alerting 122 and control 124 components to instruct those components how to use the output of the pipeline, according to the pipeline configuration description." As there are commands, there must be an output interface. [0051] states "The outputs of the pipeline are received 720 at the alerting and/or control components and are used to control the email servers 114 or other entities." One of ordinary skill in the art would realize that the alerting and/or control components are implemented by a computing device.) Regarding claim 7, the rejection of claim 1 is incorporated herein. De Baynast teaches receiving an updated model orchestrator that is updated from the one model orchestrator based at least in part on the insight, the updated model orchestrator including an updated model pipeline; and ([0033] states "This method may occur after the method of FIG. 2 for example. In the method of FIG. 2 the selected pipeline is operationalized." [0033] further states "The sensors 110 sense more data from the email servers 114 and data retention component 108 takes a new sample 304 of the sensor data and stores that in data store 106. The process then returns to box 202 of FIG. 2 to search, evaluate, select and operationalize the pipeline." Therefore, the pipeline is updated. In regards to the process of searching and evaluating pipelines, [0047] states "Ground truth input is optionally received 708 from a user and the pipeline configurations are optionally re-ranked 710 by executing the pipeline configurations on the ground truth data. A ranking may be computed using evaluation measures that either take ground truth into account or not." Therefore, the output of the pipeline, interpreted as the insight, will be used to select an updated pipeline, which updates the model orchestrator. [0049] states "A description of the selected pipeline configuration may be stored. 
The description comprises enough detail to enable operationalization of the selected pipeline." [0050] states "To operationalize the selected pipeline, commands are sent 714 from the pipeline generator 100 to the data analytics nodes 120. For example, the commands instruct the data analytics nodes to instantiate the software referenced in the description of the pipeline configuration at the data analytics nodes." Therefore, the model orchestrator is received by the data analytics nodes.) applying the updated model pipeline to the received sensor data. ([0051] states "The live data stream is received 716 at the operationalized pipeline and is processed by the analytics nodes 718 which have the instantiated software.") Regarding claim 15, De Baynast teaches A system for using one or more model orchestrators, the system comprising: ([0004] states "A control system is described which has a communications interface receiving a live data stream of time stamped sensor data observed from a system to be controlled. The control system has an uploader configured to access a store of time-stamped sensor data from the live data stream; and a configuration manager configured to generate a plurality of pipeline configurations for analyzing the live data stream (or data retained from the live data stream). Each pipeline configuration comprises a plurality of components for analyzing data, an order of the components, and, if applicable, values of one or more parameters of each component."
[0026] states "A non-exhaustive list of examples of components is: a moving average computation component, a component which computes a derivative of numerical values in a specified window of a time series, a component which detects seasonal features of a time series such as an expected value of a variable per time of day, day of month features, a component which maintains a distribution of the time series values, a component which performs statistical tests of current readings against a distribution of the time series that has been maintained over time, a component comprising a signal processing filter such as a low-pass or high-pass filter, a regressor component, a linear predictor component, an auto-regressive model component, a classifier, a component for dimensionality reduction." [0028] states "The pipeline generator 100 is fully automated. It generates many possible pipelines using template and component library 104 as well as rules, thresholds or constraints on parameter values of the components." The control system, pipeline generator, and pipeline are interpreted as the model orchestrator.) one or more memories comprising instructions stored thereon; and ([0108] states "The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible storage media include computer storage devices comprising computer-readable media such as disks, thumb drives, memory, etc. and do not include propagated signals.
Propagated signals may be present in a tangible storage media, but propagated signals per se are not examples of tangible storage media.") one or more processors configured to execute the instructions and perform operations comprising: ([0108] states "The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.") The remainder of claim 15 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 16 and 20 recite substantially similar subject matter to claims 2 and 7 respectively and are rejected with the same rationale, mutatis mutandis. Claims 8, 9, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over De Baynast De Septfontaines et al. (US 2016/0179063 A1), hereinafter De Baynast, Sharma (US 2024/0305689 A1), and Dandekar (“Towards Autonomic Orchestration of Machine Learning Pipelines in Future Networks”, July 2021). Regarding claim 8, De Baynast teaches A method for managing one or more model orchestrators, the method comprising: ([0004] states "A control system is described which has a communications interface receiving a live data stream of time stamped sensor data observed from a system to be controlled. The control system has an uploader configured to access a store of time-stamped sensor data from the live data stream; and a configuration manager configured to generate a plurality of pipeline configurations for analyzing the live data stream (or data retained from the live data stream). Each pipeline configuration comprises a plurality of components for analyzing data, an order of the components, and, if applicable, values of one or more parameters of each component."
[0026] states "A non-exhaustive list of examples of components is: a moving average computation component, a component which computes a derivative of numerical values in a specified window of a time series, a component which detects seasonal features of a time series such as an expected value of a variable per time of day, day of month features, a component which maintains a distribution of the time series values, a component which performs statistical tests of current readings against a distribution of the time series that has been maintained over time, a component comprising a signal processing filter such as a low-pass or high-pass filter, a regressor component, a linear predictor component, an auto-regressive model component, a classifier, a component for dimensionality reduction." [0028] states "The pipeline generator 100 is fully automated. It generates many possible pipelines using template and component library 104 as well as rules, thresholds or constraints on parameter values of the components." The control system and pipeline generator are interpreted as the model orchestrator.) receiving historical data; ([0038] states "The time series visualizer takes input from an uploader 532 of the data layer 512 comprising historical data 502 (such as from data store 106 of FIG. 1).") selecting one or more models based at least in part on a data characteristic, a processing characteristic, or the historical data; ([0041] states "The configuration manager 528 accesses the template and component library (104 of FIG. 1) and selects a template to be used. With the selected template the configuration manager generates potential pipeline configurations, taking into account any pre-specified constraints given in the template, or from another store.
For example, constraints on ranges of values which may be input to specified components, constraints on the order in which components may be connected together, constraints on types of values which may be input or output from specified components. As mentioned above, a component may be parameterized. The configuration manager also controls what parameter ranges of the component parameters are to be used in the potential pipeline configurations. The configuration manager feeds the configurations it generates to the ranker." The “constraints on types of values which may be input or output from specified components" means that the configuration and therefore the models are selected based on a data characteristic. The "constraints on the order in which components may be connected together" means that the configuration and therefore the models are selected based on a processing characteristic. [0047] states "Once the potential pipeline configurations are created, these are executed 704 using the data in data store 106 to obtain evaluation results. Optionally the pipeline configurations are ranked 706 on the basis of the evaluation results." [0048] states "At least one of the pipeline configurations is selected 712. For example, by taking a highest ranked pipeline configuration. Or by manual selection by the user." The data in data store 106, as explained in regards to the previous limitation, is historical data. Therefore, the configuration and therefore the models are selected based on the historical data.) developing a first model pipeline including the one or more models, a first model orchestrator of one or more model orchestrators including an indication of the model pipeline; ([0050] states "To operationalize the selected pipeline, commands are sent 714 from the pipeline generator 100 to the data analytics nodes 120.
For example, the commands instruct the data analytics nodes to instantiate the software referenced in the description of the pipeline configuration at the data analytics nodes.") wherein the method is performed using one or more processors. ([0108] states "The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible storage media include computer storage devices comprising computer-readable media such as disks, thumb drives, memory, etc. and do not include propagated signals. Propagated signals may be present in a tangible storage media, but propagated signals per se are not examples of tangible storage media. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.") De Baynast does not appear to explicitly teach generate a data interface for the one model orchestrator to interface with real-time sensor data; generate a model interface for the one model orchestrator to interface with the model pipeline; deploying the first model orchestrator to a first computing device, the first model orchestrator being configured to transmit the real-time sensor data to a second model orchestrator of the one or more model orchestrators via a second data interface, the second model orchestrator including the second data interface and an indication of a second model pipeline, the second model orchestrator being hosted by a second computing device different from the first computing device.
However, Sharma—directed to analogous art—teaches generate a data interface for the one model orchestrator to interface with real-time sensor data; ([0071] states "The data ingestion component 421 receives multiple streams of data on multiple network interfaces at the edge layer and preferably recognizes and accepts sensor and other IoT data in accordance with various established data ingestion protocols (e.g. OPC-UA, Modbus, MQTT, DDS, and others) as well as other suitable data transfer protocols." Therefore, the data ingestion component is a data interface. [0159] states "In general, the example software edge platform described herein is designed to be capable to perform machine-learning workflows that span the local compute resources available at the edges of sensor networks and the resources available in remote data centers or "cloud" sites. The software edge platform processes continuous streams of raw sensor data and aggregates the processed data at the edge. The processing is performed under programmatic control through API's by the developers of machine learning analyses to preprocess the data as specified for use in the machine learning analyses. Machine learning models are constructed from the machine learning analyses and can then be deployed to the edge platform for execution on live sensor data." Therefore, the system is a model orchestrator. As the data interface is used, it must have been generated.) generate a model interface for the one model orchestrator to interface with the model pipeline; ([0266] states "The model outputs are published on the data bus 532 (FIG. 5) of the edge platform and may be accessed and used in applications and analytics expressions comprising components of the same workflow of which the model is a part as well as applications and expressions comprising components of other workflows. The model outputs also may be stored and aggregated on the edge platform or transferred to the cloud, or both, as previously described." 
Therefore, the data bus is an interface of the model orchestrator, which produces output for the other applications to receive. As the model interface is used, it must have been generated.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast with the teachings of Sharma because, as Sharma states in [0009] "However, implementing machine learning at the edge where locally-generated data can be received and acted on directly, in real-time, with local context, and without awaiting transmission to a remote cloud site would allow actionable insights to be derived from the locally-generated data and made available for use locally substantially in real-time." The combination of De Baynast and Sharma does not appear to explicitly teach deploying the first model orchestrator to a first computing device, the first model orchestrator being configured to transmit the real-time sensor data to a second model orchestrator of the one or more model orchestrators via a second data interface, the second model orchestrator including the second data interface and an indication of a second model pipeline, the second model orchestrator being hosted by a second computing device different from the first computing device. However, Dandekar—directed to analogous art—teaches deploying the first model orchestrator to a first computing device, the first model orchestrator being configured to transmit the real-time sensor data to a second model orchestrator of the one or more model orchestrators via a second data interface, the second model orchestrator including the second data interface and an indication of a second model pipeline, the second model orchestrator being hosted by a second computing device different from the first computing device. 
(Page 2 states "The ML function orchestration is carried out by Machine Learning Function Orchestrator (MLFO) in conjunction with the OAM orchestrator. MLFO has four major functionalities, (1) Intent parsing, (2) Lifecycle management of ML pipelines, (3) Management of data and configuration of data sources, (4) Management of ML models which may include model selection, training, deployment etc." Page 2 also states "ML Pipeline subsystem: An ML application can logically be viewed as a chain of logical nodes i.e. a pipeline. This pipeline consists of data source, collector, preprocessor, model, policy and data sink. When the MLFO receives an intent it deploys a corresponding ML pipeline in the network infrastructure." Page 4 states "As seen in Figure 3, we assign one MLFO each for OSS, edge and smart factory domains. These MLFOs communicate using two interfaces, an intent based interface and a monitoring interface. Figure 4 shows the sequence diagram for ML pipeline orchestration workflow." The system that contains the OSS MLFO in the OSS domain is interpreted as the first model orchestrator and the system that contains the Factory MLFO in the Factory domain is interpreted as the second model orchestrator. As they are used in each domain, they must have been deployed. As seen in Fig. 4, each MLFO (model orchestrator) has an intent based interface and a monitoring interface. Pages 4-5 state "In response to this the OSS MLFO deploys a ML pipeline to predict if there will be any QoS deterioration in the future.” Page 5 states "The OSS using its ML pipeline is able to predict that QoS experienced by the private users in the smart factory edge is going to deteriorate due to increase in number of public users." The sensor that detects number of public users in the factory is interpreted as the sensor. Therefore, the sensor data is transmitted (number of public users) to the second model orchestrator. 
As the MLFOs are assigned in different domains, the model orchestrators are hosted by separate, different, computing devices.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast with the teachings of Dandekar because, as Dandekar states on page 1, "Integration of any ML based solutions in mobile networks requires a mechanism which can deploy, update and tear down ML pipelines. This mechanism is called ML pipeline orchestration. In order to deploy solutions proposed in the AI/ML in 5G challenge – firstly, it is necessary for the ML pipeline orchestration to be autonomic, this means that ML pipelines should be able to orchestrate themselves. Secondly, multi‐domain ML pipeline orchestration is required as the ML based solution might also involve multiple pipelines across different network or operator domains." Regarding claim 9, the rejection of claim 8 is incorporated herein. De Baynast teaches wherein the model pipeline includes a first model and a second model running in sequence, wherein a model output of the first model is an input to the second model. ([0019] states "A data analytics pipeline is one or more data processing components connected together. In some examples, the components are connected in series so that output of a component earlier in the pipeline is used as input of an immediately subsequent component of the pipeline.") Regarding claim 13, the rejection of claim 8 is incorporated herein. De Baynast teaches generating an output interface for the one model orchestrator to interface a computing device. ([0050] states "The pipeline generator may optionally send commands to the alerting 122 and control 124 components to instruct those components how to use the output of the pipeline, according to the pipeline configuration description." As there are commands, there must be an output interface.
[0051] states "The outputs of the pipeline are received 720 at the alerting and/or control components and are used to control the email servers 114 or other entities." One of ordinary skill in the art would realize that the alerting and/or control components are implemented by a computing device. As the interface is used, it must have been generated.) Regarding claim 14, the rejection of claim 8 is incorporated herein. De Baynast teaches receiving one or more feedbacks regarding the one model orchestrator; and ([0038] states "Users 500 interact with the pipeline generator via the presentation layer 508 which comprises various visualization components including a time series visualizer 514, a results visualizer 516, a health metric visualizer 518 and a ground truth selector 520. The time series visualizer takes input from an uploader 532 of the data layer 512 comprising historical data 502 (such as from data store 106 of FIG. 1)." [0048] states "At least one of the pipeline configurations is selected 712. For example, by taking a highest ranked pipeline configuration. Or by manual selection by the user." This is interpreted as the feedback.) updating the one model orchestrator based at least in part on the one or more feedbacks. ([0033] states "This method may occur after the method of FIG. 2 for example. In the method of FIG. 2 the selected pipeline is operationalized." [0033] further states "The sensors 110 sense more data from the email servers 114 and data retention component 108 takes a new sample 304 of the sensor data and stores that in data store 106. The process then returns to box 202 of FIG. 2 to search, evaluate, select and operationalize the pipeline." Therefore, the pipeline is updated at least in part on the one or more feedbacks, as the method of choosing a pipeline is redone.) Claims 3-5 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over De Baynast De Septfontaines et al.
(US 2016/0179063 A1), hereinafter De Baynast, and Sharma (US 2024/0305689 A1) as applied to claim 1 above, and further in view of Munir ("Artificial Intelligence and Data Fusion at the Edge", July 2021).

Regarding claim 3, the rejection of claim 1 is incorporated herein. The combination of De Baynast and Dandekar does not appear to explicitly teach wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model.

However, Munir—directed to analogous art—teaches wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model. (Page 3, Figure 1 shows the sensor data which is an input to the IoT nodes. The IoT nodes do AI/ML processing, which at least involves one model for each IoT node (see page 9, table 3), and are therefore interpreted as the first and second models. As they are independent, they run in parallel, as one of ordinary skill in the art would understand.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast and Dandekar with the teachings of Munir because, as Munir states on page 3, "It is noted that for many applications, much of the collected data is time-sensitive and become useless if not utilized timely. Hence, solutions such as data fusion are of paramount significance to enhance the effectiveness and usage of sensed data in a timely manner. Data fusion is defined as the process of combining data from multiple sources to produce more accurate, consistent, and concise information than that provided by any individual data source."

Regarding claim 4, the rejection of claim 3 is incorporated herein.
The combination of De Baynast and Dandekar does not appear to explicitly teach wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model.

However, Munir—directed to analogous art—teaches wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model. (Figure 1 shows that the outputs of the AI/ML processing go to a data fusion section in the edge server, which is the input to the AI/ML processing of the edge server, which involves at least one model (see page 9, table 3), which is interpreted as the third model. Though they go through data fusion, the outputs of the first and second models are still received by the third model.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast and Dandekar with the teachings of Munir for the reasons given above in regards to claim 3.

Regarding claim 5, the rejection of claim 1 is incorporated herein. The combination of De Baynast and Dandekar does not appear to explicitly teach wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device.

However, Munir—directed to analogous art—teaches wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device. (Page 3 states "The edge servers in our framework are connected to the top tier centralized cloud server layer through the core network.
The core network consigns locally processed data and information from the edge to the cloud for various purposes such as analytics, archival, and decision-making at a broader scale." Therefore, the edge server is an interface for communication between the cloud component and the IoT node component. As can be seen in Fig. 1, there are multiple edge servers, interpreted as the first and second data interfaces. Page 3 states "Each edge server manages a cluster of edge-of-network sensors/IoT devices in its vicinity. Edge servers provide applications, content, context, services, and storage to edge-of-network IoT devices." Therefore, each interface receives sensor data from IoT nodes, interpreted as the edge device.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast and Dandekar with the teachings of Munir for the reasons given above in regards to claim 3.

Claims 17-19 recite substantially similar subject matter to claims 3-5 respectively and are rejected with the same rationale, mutatis mutandis.

Claim(s) 10-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over De Baynast De Septfontaines et al. (US 2016/0179063 A1), hereinafter De Baynast, Sharma (US 2024/0305689 A1), and Dandekar ("Towards Autonomic Orchestration of Machine Learning Pipelines in Future Networks", July 2021) as applied to claim 1 above, and further in view of Munir ("Artificial Intelligence and Data Fusion at the Edge", July 2021).

Regarding claim 10, the rejection of claim 8 is incorporated herein. The combination of De Baynast, Sharma, and Dandekar does not appear to explicitly teach wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model.
However, Munir—directed to analogous art—teaches wherein the model pipeline includes a first model and a second model running in parallel, wherein the sensor data is an input to the first model and an input to the second model. (Page 3, Figure 1 shows the sensor data which is an input to the IoT nodes. The IoT nodes do AI/ML processing, which at least involves one model for each IoT node (see page 9, table 3), and are therefore interpreted as the first and second models. As they are independent, they run in parallel, as one of ordinary skill in the art would understand.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast, Sharma, and Dandekar with the teachings of Munir because, as Munir states on page 3, "It is noted that for many applications, much of the collected data is time-sensitive and become useless if not utilized timely. Hence, solutions such as data fusion are of paramount significance to enhance the effectiveness and usage of sensed data in a timely manner. Data fusion is defined as the process of combining data from multiple sources to produce more accurate, consistent, and concise information than that provided by any individual data source."

Regarding claim 11, the rejection of claim 10 is incorporated herein. The combination of De Baynast, Sharma, and Dandekar does not appear to explicitly teach wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model.

However, Munir—directed to analogous art—teaches wherein the model pipeline further includes a third model receiving a model output of the first model and a model output of the second model.
(Figure 1 shows that the outputs of the AI/ML processing go to a data fusion section in the edge server, which is the input to the AI/ML processing of the edge server, which involves at least one model (see page 9, table 3), which is interpreted as the third model. Though they go through data fusion, the outputs of the first and second models are still received by the third model.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast, Sharma, and Dandekar with the teachings of Munir for the reasons given above in regards to claim 10.

Regarding claim 12, the rejection of claim 8 is incorporated herein. The combination of De Baynast, Sharma, and Dandekar does not appear to explicitly teach wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device.

However, Munir—directed to analogous art—teaches wherein the data interface includes a first data interface for receiving first sensor data collected by a first edge device and a second data interface for receiving second sensor data collected by a second edge device. (Page 3 states "The edge servers in our framework are connected to the top tier centralized cloud server layer through the core network. The core network consigns locally processed data and information from the edge to the cloud for various purposes such as analytics, archival, and decision-making at a broader scale." Therefore, the edge server is an interface for communication between the cloud component and the IoT node component. As can be seen in Fig. 1, there are multiple edge servers, interpreted as the first and second data interfaces. Page 3 states "Each edge server manages a cluster of edge-of-network sensors/IoT devices in its vicinity.
Edge servers provide applications, content, context, services, and storage to edge-of-network IoT devices." Therefore, each interface receives sensor data from IoT nodes, interpreted as the edge device.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De Baynast, Sharma, and Dandekar with the teachings of Munir for the reasons given above in regards to claim 10.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA THUY PHAM whose telephone number is (571) 272-2605. The examiner can normally be reached Monday - Friday, 9:00 A.M. - 5:00 P.M.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.P./Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121
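The rejections above turn on two pipeline topologies: models connected in sequence, where one model's output feeds the next (claims 9 and mapped De Baynast [0019]), and models run in parallel over the same sensor data with a third "fusion" model consuming both outputs (claims 3-4 and 10-11, mapped to Munir's Figure 1). A minimal sketch of the two arrangements, assuming generic Python callables as stand-ins for models — the function names and toy models are hypothetical, not drawn from the application or the cited references:

```python
from typing import Callable, Sequence

# A "model" here is just any callable from input data to output data.
Model = Callable[[object], object]

def sequential(models: Sequence[Model]) -> Model:
    """Claim-9-style arrangement: each model's output is the next model's input."""
    def pipeline(data):
        for model in models:
            data = model(data)
        return data
    return pipeline

def parallel_with_fusion(first: Model, second: Model, third: Model) -> Model:
    """Claims-3/4-style arrangement: the same sensor data is input to the
    first and second models; the third model receives both outputs."""
    def pipeline(sensor_data):
        return third((first(sensor_data), second(sensor_data)))
    return pipeline

# Hypothetical toy models for illustration only.
double = lambda x: x * 2
inc = lambda x: x + 1
fuse = lambda pair: sum(pair)

seq = sequential([double, inc])                # computes (x * 2) + 1
par = parallel_with_fusion(double, inc, fuse)  # computes (x * 2) + (x + 1)
```

Framed this way, the examiner's distinction is structural: the claim 9 limitation concerns composition in series, while claims 3-4 add fan-out of the same input and fan-in of both outputs, which is why a separate reference (Munir) is relied on for the parallel arrangement.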

Prosecution Timeline

Aug 10, 2022 — Application Filed
Aug 08, 2025 — Non-Final Rejection — §101, §103
Nov 10, 2025 — Applicant Interview (Telephonic)
Nov 10, 2025 — Examiner Interview Summary
Dec 15, 2025 — Response Filed
Feb 09, 2026 — Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 0% (-33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
