Prosecution Insights
Last updated: April 18, 2026
Application No. 18/619,341

Asynchronous Machine Learning Model Execution

Status: Final Rejection (§103)
Filed: Mar 28, 2024
Examiner: AFSHAR, KAMRAN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aptiv Technologies AG
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 67% — above average (183 granted / 274 resolved; +11.8% vs TC avg)
Interview Lift: +10.6% — moderate (allow rate in resolved cases with vs. without interview)
Typical Timeline: 3y 2m avg prosecution; 15 currently pending
Career History: 289 total applications across all art units
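The headline figures above can be cross-checked from the raw counts. This is a minimal sketch, assuming "Career Allow Rate" is simply granted / resolved and that "+11.8% vs TC avg" means percentage points above the Tech Center average (the report does not define these terms explicitly):

```python
# Cross-check of the examiner dashboard figures from the raw counts.
granted = 183
resolved = 274

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 66.8%, displayed as 67%

lift_vs_tc = 0.118                               # stated +11.8 percentage points
implied_tc_avg = allow_rate - lift_vs_tc
print(f"Implied TC 2100 average: {implied_tc_avg:.1%}")
```

The 67% shown on the dashboard is 183/274 rounded to the nearest whole percent; subtracting the stated lift implies a Tech Center baseline of roughly 55%.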

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 274 resolved cases
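The per-statute deltas are internally consistent: subtracting each delta from its rate recovers the same Tech Center baseline for every statute. A quick check (figures taken from the table above; the interpretation of the baseline as a single 40% estimate is inferred, not stated in the report):

```python
# Consistency check on the statute table: rate - delta recovers the same
# Tech Center baseline (the "black line") for every statute.
rates = {
    "§101": (17.8, -22.2),
    "§103": (35.4, -4.6),
    "§102": (22.8, -17.2),
    "§112": (11.6, -28.4),
}

baselines = {s: rate - delta for s, (rate, delta) in rates.items()}
for statute, baseline in baselines.items():
    print(f"{statute}: implied TC average {baseline:.1f}%")  # 40.0% for all four
```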

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the amendment filed 9/18/2025. In the amendment, claims 1, 4, 7, 10, 12 and 13 were amended, and no claims were added or cancelled. Thus, claims 1-15 are pending and have been examined. Claims 1-15 are rejected. This action is made final.

Examiner’s Remarks

Claim 13 recites: “an ML controller configured to: receive…”. Based on the disclosure, the recited “controller” is interchangeable with the terms “module” or “circuit”, which according to Applicant’s specification “may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit… such as in a system-on-chip” (see, e.g., specification, paragraph [0091]). Given the ample structure provided to support the “controller” being a type of hardware or circuitry, the phrase “configured to” is interpreted as merely describing the intended purpose or capability of this recited structure. Accordingly, the claim is not interpreted under 35 U.S.C. 112(f).

Claim Objections

Claims 1-15 are objected to because of the following informalities. Independent claims 1, 12 and 13 recite:

“subsequent to the starting processing of the second set” [sic]. This phrasing is grammatically incorrect and unclear; the Examiner suggests amending the phrase to read “subsequent to the starting of the processing of the second set”, or otherwise clarifying it.

“before conclusion of the processing the second set” [sic]. This phrasing is grammatically incorrect and unclear; the Examiner suggests amending the phrase to read “before conclusion of the processing of the second set”, or otherwise clarifying it.

“wherein the processing the second set” [sic]. This phrasing is grammatically incorrect and unclear; the Examiner suggests amending the phrase to read “wherein the processing of the second set”, or otherwise clarifying it.
“from the post-processing the output data” [sic]. This phrasing is grammatically incorrect and unclear; the Examiner suggests amending the phrase to read “from the post-processing of the output data”, or otherwise clarifying it.

Claims 2-11 and 14-15, which depend either directly or indirectly from independent claims 1 and 13, respectively, are objected to under the same rationale as independent claims 1 and 13. Appropriate correction is required.

Response to Amendment

Applicant’s amendment filed 9/18/2025 has been entered. In the amendment, independent claims 1, 12 and 13, and dependent claims 4, 7 and 10, were amended. No claims were cancelled or added. As such, claims 1-15 are pending. The objections to the drawings and specification set forth in the previous Office action are withdrawn in view of the amendments to the drawings and specification. The rejections of claims 4-7 and 10-11 under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter, set forth in the previous Office action, are withdrawn in view of the amendments to claims 4, 7 and 10. The previous rejections of claims 1-15 under 35 U.S.C. 101 are withdrawn in view of the amendments to the claims and Applicant’s remarks.

Response to Arguments

Applicant’s remarks filed 9/18/2025 with respect to the rejection of claims 1-15 under 35 U.S.C. 103 in the previous Office action have been considered and are persuasive in part. However, as detailed below, in light of the amendments to independent claims 1, 12 and 13, and dependent claims 4, 7 and 10, these claims are now rejected under a new arrangement of the Brueckner and Kirsche references. Specifically, Brueckner paragraphs [0079], [0082], [0112] and [0133] are now used to reject the newly-recited elements of the claims.
In particular, Applicant’s remark that the previous “Office Action acknowledges that Brueckner fails to disclose starting processing of a second set of queued input data independent from post-processing output data resulting from processing a first set of the queued input data and raises Kirsche” and that “Kirsche appears to be silent on the added features of amended claim 1” (see Applicant’s Remarks, p. 12) is acknowledged. Necessitated by Applicant’s amendments to the claims, the Examiner has revised the rejection to accurately apply the relevant portions of the cited art to the newly-recited limitations. Upon further review, and under the broadest reasonable interpretation, the Examiner maintains that Brueckner teaches the previously-cited limitation regarding independent processing of queued input data from post-processing output data, or an obvious variation thereof. In addition to the previously discussed features, Brueckner also teaches the newly added limitations of the independent claims, as set forth below. Furthermore, amended claims 4, 7 and 10 are not substantially changed in scope, but in light of the amendments to the claims, have been re-interpreted under the same reference (Brueckner) to align with the amended claim limitations. Therefore, as detailed below, claims 1-10 and 12-15 remain rejected under 35 U.S.C. 103 over Brueckner in view of Kirsche, and claim 11 remains rejected under 35 U.S.C. 103 over Brueckner in view of Kirsche and further in view of Sharma. Applicant's arguments filed 9/18/2025 with respect to the rejections of claims 1-15 under 35 U.S.C. 101 have been fully considered and are persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Brueckner (US 20160078361 A1; hereinafter Brueckner) in view of Kirsche (US 11182695 B1; hereinafter Kirsche).
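The limitations at issue in the independent claims describe a pipelined, asynchronous execution pattern: input data is queued at an ML controller, processing of a second set begins, and post-processing of the first set's output then starts before the second set's processing concludes, so the two stages overlap. A minimal sketch of that overlap follows; the function names, timings, and data are illustrative placeholders, not taken from the application or the cited references:

```python
# Illustrative sketch of the claimed overlap: post-processing of the first
# set's output starts after processing of the second set has begun but
# before it concludes, so the two run in parallel.
import queue
import time
from concurrent.futures import ThreadPoolExecutor

def run_model(batch):
    time.sleep(0.2)                  # stand-in for ML inference latency
    return [x * 2 for x in batch]

def post_process(output):
    time.sleep(0.2)                  # stand-in for post-processing latency
    return sum(output)

pending = queue.Queue()              # the ML controller's queue of pre-processed input
pending.put([1, 2, 3])               # first set
pending.put([4, 5, 6])               # second set

with ThreadPoolExecutor(max_workers=2) as pool:
    first_output = pool.submit(run_model, pending.get()).result()
    second = pool.submit(run_model, pending.get())     # start processing the second set...
    summary = pool.submit(post_process, first_output)  # ...then overlap post-processing of the first
    print(summary.result(), second.result())           # 12 [8, 10, 12]
```

With both stages taking 0.2 s, the overlapped schedule finishes in roughly 0.4 s instead of the 0.6 s a strictly sequential process/post-process/process order would need, which is the time reduction the claims recite.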
Regarding Independent Claim 1, Brueckner teaches A method for processing of data with machine learning (ML) models, the method comprising (see, e.g., Brueckner paragraphs [0056]: “The MLS programmatic interfaces may enable users to submit respective requests for several related tasks of a given machine learning workflow, such as tasks for extracting records from data sources, generating statistics on the records, feature processing, model training, prediction, and so on” and [0059]: “In some embodiments, some machine learning models may be created and trained, e.g., by a group of model developers or data scientists using the MLS APIs”): receiving, at an ML controller from at least one ML application, at least one request to run at least one ML model (see, e.g., Brueckner paragraphs [0056-0057]: “the MLS may take care of ensuring that a given task is scheduled for execution only when its dependencies (if any dependencies exist) have been met…The MLS may be responsible for ensuring that the dependencies of a given job have been met before the corresponding operations are initiated. The MLS may also be responsible in such embodiments for generating a processing plan for each job, identifying the appropriate set of resources (e.g., CPUs/cores, storage or memory) for the plan, scheduling the execution of the plan, gathering results…”, [0174]: “FIG. 32 is a flow diagram illustrating aspects of operations that may be performed at a machine learning service in response to a request for training and evaluation iterations of a machine learning model, according to at least some embodiments. 
As shown in element 3201, a request to perform one or more TEIs (training-and-evaluation iterations, such as cross-validation iterations) may be received via a programmatic interface such as an MLS I/O library API” [i.e., the MLS (ML controller) receives requests to run training on ML models]); queueing, at the ML controller, pre-processed input data for the at least one ML model (see, e.g., Brueckner paragraph [0110]: “A run-time recipe manager 1110 of the MLS may be responsible for the scheduling of recipe executions in some embodiments… Depending on the details of the recipe R1, the outputs 1185A may represent either data that is to be used as input for a model, or a result of a model (such as a prediction or evaluation). In at least some embodiments, a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later”); starting processing of a second set of queued input data with the at least one ML model (see, e.g., Brueckner paragraph [0090]: “the MLS may support recurring scheduling of related jobs. 
For example, a client may create an artifact such as a model, and may want that same model to be re-trained and/or re-executed for different input data sets (e.g., using the same configuration of resources for each of the training or prediction iterations) at specified points in time… A respective job may be placed in the MLS job queue for each recurring training or execution iteration” [i.e., the different input data sets imply a second input data set, starting based on the job queue]), and subsequent to the starting processing of the second set but before conclusion of the processing the second set (see, e.g., Brueckner paragraph [0133]: “In some embodiments the machine learning model may support parallelized training of models, in which for example respective (and potentially partially overlapping) subsets of an input data set may be used to train a given model in parallel. The duration of one training operation may overlap at least partly with the duration of another in such a scenario, and the input data set may be partitioned for the parallel training sessions using a chunk-level filtering operation” and paragraph [0146]: “An MLS request handler 180 may receive a record extraction request 2310 indicating a sequence of filtering operations that are to be performed on a specified data set located at one or more data sources, such as some combination of shuffling, splitting, sampling, partitioning (e.g., for parallel computations such as map-reduce computations, or for model training operations/sessions that overlap with each other in time and may overlap with each other in the training sets used), and the like” [i.e., the overlapping operations in time of multiple subsets of input data functions as processing a second set subsequent to the starting processing of the set and before the conclusion of said processing starting another processing step]), starting post-processing output data resulting from processing a first set of the queued input data (see, e.g., 
Brueckner paragraph [0110]: “In the depicted embodiment, two execution requests 1171A and 1171B for the same recipe R1 are shown, with respective input data sets IDS1 and IDS2…a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later. The execution of a recipe may be dependent on other jobs in some cases—e.g., upon the completion of jobs associated with input record handling (decryption, decompression, splitting of the data set into training and test sets, etc.)” [i.e., execution of a recipe (starting post-processing) is dependent on another job (processing a first set of a queued input) functioning as a post-processing resulting from a prior queued processing of input] and paragraph [0112]: “FIG. 12 illustrates example sections of a recipe, according to at least some embodiments. In the depicted embodiment, the text of a recipe 1200 may comprise four separate sections—a group definitions section 1201, an assignments section 1204, a dependencies section 1207, and an output/destination section 1210. In some implementations, only the output/destination section may be mandatory” [i.e., the processed recipe executions result in output data]) so that an overall time required to process and post-process the first and second sets is reduced (see, e.g., Brueckner paragraph [0079]: “In the other type of dependency, the execution of one job Jp may be started as soon as some specified phase of another job Jq is completed. This latter type of dependency may be termed a “partial dependency”, and is indicated in FIG. 
5 by the “dependsOnPartial” parameter” [i.e., the dependsOnPartial parameter enables starting a job as soon as a phase of another job is complete, reducing the overall time to execute queued recipe executions (that process and post-process first and second data sets)] and paragraph [0082]: “As indicated by J3's ‘dependsOnPartial’ parameter value, J3 can be started when a specified phase or subset of J2's work is complete in the depicted example. The portion of J2 upon which J3 depends completes at time t4 in the illustrated example, and the execution of J3 therefore begins (in parallel with the execution of the remaining portion of J2) at t4” [i.e., the MLS job scheduler optimizes the number of recipes executed and executes jobs in an overlapping manner]), wherein the processing the second set is independent from the post-processing the output data (see, e.g., Brueckner paragraph [0095]: “The model execution request may specify the execution mode (batch, online or local), the input data to be used for the model run (which may be produced using a specified data source or recipe in some cases), the type of output (e.g., a prediction or an evaluation) that is desired, and/or optional parameters (such as desired model quality targets, minimum input record group sizes to be used for online predictions, and so on)” and paragraph [0110]: “A run-time recipe manager 1110 of the MLS may be responsible for the scheduling of recipe executions in some embodiments, e.g., in response to the equivalent of an “executeRecipe” API specifying an input data set. In the depicted embodiment, two execution requests 1171A and 1171B for the same recipe R1 are shown, with respective input data sets IDS1 and IDS2… Respective outputs 1185A and 1185B may be produced by the application of the recipe R1 on IDS1 and IDS2 in the depicted embodiment.
Depending on the details of the recipe R1, the outputs 1185A may represent either data that is to be used as input for a model, or a result of a model (such as a prediction or evaluation)… a recipe may be applied asynchronously with respect to the execution request” [i.e., input data sets IDS1 and IDS2 are from separate execution requests (jobs) that trigger the execution for producing input for a model or the evaluation/prediction of a model (processing one data set while post-processing output data) independently (see, e.g., Brueckner, Fig. 11)]), and wherein the first set and the second set are processed at least partially in parallel (see, e.g., Brueckner paragraph [0079]: “In the other type of dependency, the execution of one job Jp may be started as soon as some specified phase of another job Jq is completed. This latter type of dependency may be termed a “partial dependency”, and is indicated in FIG. 5 by the “dependsOnPartial” parameter. For example, J3 depends on the partial completion of J2, and J4 depends on the partial completion of J3” and paragraph [0082]: “The portion of J2 upon which J3 depends completes at time t4 in the illustrated example, and the execution of J3 therefore begins (in parallel with the execution of the remaining portion of J2) at t4” [i.e., the jobs that correspond to recipe executions and their respective data sets are processed in parallel during overlapping phases based on partial dependencies]). Although Brueckner substantially teaches the claimed invention, Brueckner fails to explicitly teach the limitations “and running, by the ML controller using an ML runtime, the at least one ML model, wherein the running includes: executing the at least one ML model by the ML runtime”. In the same field, analogous art Kirsche teaches and running, by the ML controller using an ML runtime, the at least one ML model (see, e.g., Kirsche Col.
25 lines 59-65: "the machine learning model lifecycle management system provides a core library that encapsulates many runtime functionalities of the platform, which may be embedded in other architectures. In some embodiments, clients may choose to use the machine learning model lifecycle management system as a service, or integrate the runtime library into their own systems" and Col. 29 lines 53-61: "the machine learning model lifecycle management system may launch a PySpark job on the Spark cluster. For each partition, the Python process may pass the input data to the R runtime via a library that enables Python/R interoperation. The R runtime then may parse the data into a dataframe, invoke a score function from an R library provided by the machine learning model lifecycle management system, and pass the score back to the Python process"), wherein the running includes: executing the at least one ML model by the ML runtime (see, e.g., Kirsche Col. 29 lines 57-61: "The R runtime then may parse the data into a dataframe, invoke a score function from an R library provided by the machine learning model lifecycle management system, and pass the score back to the Python process" [i.e., the score function invoked from an R library is used to produce a prediction by a trained ML model (executing an ML model by the R runtime)]), Brueckner and Kirsche are analogous art because they are both directed to computing arrangements using knowledge-based models (see, e.g., Brueckner, paragraph [0053], Kirsche, Col. 5 lines 28-43). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Brueckner to incorporate the teachings of Kirsche to run and execute ML models using an ML runtime. 
Doing so would have allowed Brueckner to use Kirsche's method in order to “leverage a scalable architecture to execute machine learning tasks on very large datasets [which] may reduce the computing resources and time necessary to release new machine learning models and/or improvements to existing machine learning models into production”, as suggested by Kirsche (see, e.g., Kirsche, Col. 6 lines 14-16). Regarding Claim 2, as discussed above, Brueckner in view of Kirsche teaches the method of claim 1. Brueckner further teaches wherein the at least one request includes an ML model reference ID to distinguish between ML models (see, e.g., Brueckner paragraph [0088]: “An alias may comprise an alias name or identifier, and a pointer to a model (e.g., alias 640A points to model 630B, and alias 640B points to model 630D in the depicted embodiment). As used herein, the phrase “publishing a model” refers to making a particular version of a model executable by a set of users by reference to an alias name or identifier…Thus, non-expert users may not have to change anything in the way that they have been using the aliases, while benefiting from the improvements. In some embodiments, alias users may be able to submit a query to learn when the underlying model was last changed, or may be notified when they request an execution of an alias that the underlying model has been changes since the last execution” [i.e., the request of an execution, includes an alias that has an ML Model reference identifier]). Regarding Claim 3, as discussed above, Brueckner in view of Kirsche teaches the method of claim 2. 
Brueckner further teaches wherein: at least two requests are received, the at least two requests are received from a same ML application (see, e.g., Brueckner paragraphs [0056]: “For example, a client may submit respective requests for tasks T2 and T3 before an earlier-submitted task T1 completes, even though the execution of T2 depends at least partly on the results of T1, and the execution of T3 depends at least partly on the results of T2”, and [0061]: “MLS may implement a set of programmatic interfaces 161 (e.g., APIs, command-line tools, web pages, or standalone GUIs) that can be used by clients 164 (e.g., hardware or software entities owned by or assigned to customers of the MLS) to submit requests 111 for a variety of machine learning tasks or operations”), and the at least two requests relate to different ML models (see, e.g., Brueckner paragraphs [0110]: “In at least some embodiments, a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later. The execution of a recipe may be dependent on other jobs in some cases—e.g., upon the completion of jobs associated with input record handling” and [0168]: “At time t1, a training job J1 of a training-and-evaluation iteration TEI1 for a model M1 is begun…At time t2, a training job J2 may be scheduled at a server set SS2, for a training-and-evaluation iteration TEI2 for a different model M2”). Regarding Claim 4, as discussed above, Brueckner in view of Kirsche teaches the method of claim 3. Brueckner further teaches generating an ML model runner instance ID to distinguish between different ML runner instances (see, e.g., Brueckner paragraphs [0084]: “As shown, in the depicted embodiment, MLS artifacts 601 may include, among others…modifiable or in-development models 630, and published models or aliases 640. 
In some implementations the MLS may generate a respective unique identifier for each instance of at least some of the types of artifacts shown and provide the identifiers to the clients. The identifiers may subsequently be used by clients to refer to the artifact (e.g., in subsequent API calls, in status queries, and so on)” and [0088]: “Accordingly, the artifacts representing models may belong to one of two categories in some embodiments: modifiable models 630, and published models or aliases 640. An alias may comprise an alias name or identifier, and a pointer to a model… the phrase “publishing a model” refers to making a particular version of a model executable by a set of users by reference to an alias name or identifier” [i.e., published models executable ML models, corresponding with ML runner instances, that each have a unique identifier (ML model runner instance ID) generated by the MLS]). Regarding Claim 5, as discussed above, Brueckner in view of Kirsche teaches the method of claim 4. Brueckner further teaches wherein the at least one request includes an ML model execution ID to distinguish between executions (see, e.g., Brueckner paragraph [0084]: “As shown, in the depicted embodiment, MLS artifacts 601 may include, among others, … model predictions 608, evaluations 610…. In some implementations the MLS may generate a respective unique identifier for each instance of at least some of the types of artifacts shown and provide the identifiers to the clients. The identifiers may subsequently be used by clients to refer to the artifact (e.g., in subsequent API calls, in status queries, and so on)”). Regarding Claim 6, as discussed above, Brueckner in view of Kirsche teaches the method of claim 5. 
Brueckner further teaches wherein at least two requests are received and the at least two requests relate to the same ML model (see, e.g., Brueckner paragraphs [0095]: “For example, a client may indicate via a parameter of the model execution/creation request that up to 100 prediction requests per day are expected on data sets of 1 million records each, and the servers selected for the model may be chosen to handle the specified request rate operations corresponding to the client request, such as reading/ingesting a data set, generating a set of statistics, performing feature processing, executing a model, etc.”). Regarding Claim 7, as discussed above, Brueckner in view of Kirsche teaches the method of claim 6. Brueckner further teaches wherein the at least one of the ML model reference ID, the ML runner instance ID, or the ML model execution ID is used to associate sets of queued input data with the execution of the respective ML model (see, e.g., Brueckner paragraphs [0102]: “In FIG. 10a, a creation interface (e.g., an API similar to “createDataSource” or “createModel”) is used as an example… a request to create a new instance of an entity type ET1 may be received... The request may indicate an identifier ID1, selected by the client, which is to be used for the new instance” and [0103]: “a job object may be added to a job queue to perform additional operations corresponding to the client request, such as reading/ingesting a data set, generating a set of statistics, performing feature processing, executing a model, etc.”). Regarding Claim 8, as discussed above, Brueckner in view of Kirsche teaches the method of claim 1. 
Brueckner further teaches outputting an indication to the at least one ML application once execution of the ML model has resulted in output data (see, e.g., Brueckner paragraph [0057]: “The MLS may also be responsible in such embodiments… gathering results, providing/saving the results in an appropriate destination… and… providing status updates or responses to the requesting clients” and paragraph [0083]: “At t5, the portion of J3 on which J4 depends may be complete, and the client may be notified accordingly… The client is notified regarding the completion of each of the jobs corresponding to the respective API invocations API1-API4 in the depicted example scenario”). Regarding Claim 9, as discussed above, Brueckner in view of Kirsche teaches the method of claim 1. Brueckner further teaches wherein receiving the at least one request includes: receiving at least one load request that initializes the at least one ML model (see, e.g., Brueckner paragraph [0095]: “a client 164 of the MLS may submit a model execution request 812 to the MLS control plane 180 via a programmatic interface 861… For online mode 867, the model may be mounted (e.g., configured with a network address)… clients may optionally specify expected workload levels for a model that is to be instantiated in online mode” [i.e., clients can specify instantiation configurations for a model (load requests) in a model execution request]), and receiving at least one execution request that causes the at least one ML model to be executed (see, e.g., Brueckner paragraph [0095]: “a client 164 of the MLS may submit a model execution request 812 to the MLS control plane 180 via a programmatic interface 861… The model execution request may specify the execution mode (batch, online or local)… In response the MLS may generate a plan for model execution and select the appropriate resources to implement the plan” [i.e., the execution request causes a model execution]).
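Claim 9 splits the controller's interface into two request types: a load request that initializes a model, and a separate execution request that runs it. A toy sketch of that two-request pattern; the class and method names are hypothetical illustrations, not from the application or from Brueckner:

```python
class MLController:
    """Toy controller separating a load request from an execution request."""

    def __init__(self):
        self._models = {}

    def load(self, model_ref_id, model_fn):
        # Load request: initialize the model and make it addressable by ID.
        self._models[model_ref_id] = model_fn

    def execute(self, model_ref_id, input_data):
        # Execution request: run a previously loaded model on input data.
        if model_ref_id not in self._models:
            raise KeyError(f"model {model_ref_id!r} has not been loaded")
        return self._models[model_ref_id](input_data)

ctl = MLController()
ctl.load("model-a", lambda xs: [x + 1 for x in xs])   # load request initializes the model
print(ctl.execute("model-a", [1, 2, 3]))              # execution request -> [2, 3, 4]
```

Separating the two requests lets initialization cost (mounting, resource configuration) be paid once, after which repeated execution requests can reference the model by its ID.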
Regarding Claim 10, as discussed above, Brueckner in view of Kirsche teaches the method of claim 1. Brueckner further teaches receiving ML model information from the at least one ML application (see, e.g., Brueckner paragraphs [0095]: “The model execution request may specify the execution mode… the input data to be used for the model run… the type of output (e.g., a prediction or an evaluation) that is desired, and/or optional parameters (such as desired model quality targets, minimum input record group sizes… and so on)” and [0102]: “a request to create a new instance of an entity type ET1 may be received from a client C1 at the MLS… The MLS may generate a representation IPR1 of the input parameters included in the client's invocation of the programmatic interface” [i.e., ML model information is encapsulated in the representation IPR1]), wherein the ML model information includes at least an ML model definition and ML model metadata having information specifying the ML runtime to execute the ML model (see, e.g., Brueckner paragraph [0095]: “The model execution request may specify…, and/or optional parameters (such as desired model quality targets, minimum input record group sizes to be used for online predictions, and so on)…" [i.e., the optional parameters are descriptive data about the operational characteristics and performance details (metadata) of the ML model], and "For local mode, the MLS may package up an executable local version 843 of the model (where the details of the type of executable that is to be provided, such as the type of byte code or the hardware architecture on which the model is to be run, may have been specified in the execution request 812”); and generating an ML runner instance configured to interact with the ML runtime to cause the ML runtime to execute the ML model (see, e.g., Brueckner paragraph [0095]: "In response the MLS may generate a plan for model execution and select the appropriate resources to implement the plan… For online 
mode 867, the model may be mounted (e.g., configured with a network address) to which data records may be streamed…" [i.e., the mounting and configuration of the model (ML runner instance/executable) prepare it for interaction with the ML online execution environment (ML runtime) that interacts with the mounted ML model], "In at least one embodiment, clients may optionally specify expected workload levels for a model that is to be instantiated in online mode… For local mode, the MLS may package up an executable local version 843 of the model (where the details of the type of executable that is to be provided, such as the type of byte code or the hardware architecture on which the model is to be run, may have been specified in the execution request 812) and transmit the local model to the client" [i.e., an executable model is packaged and transmitted to the client for further interaction and execution by the client’s runtime]). Regarding Independent Claim 12, Brueckner teaches A non-transitory computer-readable medium comprising instructions including (see, e.g., Brueckner paragraphs [0233]: “Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device 9000 via I/O interface 9030. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g.
SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device 9000 as system memory 9020 or another type of memory”, [0056] and [0059]): receiving, at an ML controller from at least one ML application, at least one request to run at least one ML model (see, e.g., Brueckner paragraphs [0056-0057]: “the MLS may take care of ensuring that a given task is scheduled for execution only when its dependencies (if any dependencies exist) have been met…The MLS may be responsible for ensuring that the dependencies of a given job have been met before the corresponding operations are initiated. The MLS may also be responsible in such embodiments for generating a processing plan for each job, identifying the appropriate set of resources (e.g., CPUs/cores, storage or memory) for the plan, scheduling the execution of the plan, gathering results” and paragraph [0174]: “FIG. 32 is a flow diagram illustrating aspects of operations that may be performed at a machine learning service in response to a request for training and evaluation iterations of a machine learning model, according to at least some embodiments. As shown in element 3201, a request to perform one or more TEIs (training-and-evaluation iterations, such as cross-validation iterations) may be received via a programmatic interface such as an MLS I/O library API” [i.e., the MLS (ML controller) receives requests to run training on ML models]); queueing, at the ML controller, pre-processed input data for the at least one ML model (see, e.g., Brueckner paragraph [0110]: “A run-time recipe manager 1110 of the MLS may be responsible for the scheduling of recipe executions in some embodiments… Depending on the details of the recipe R1, the outputs 1185A may represent either data that is to be used as input for a model, or a result of a model (such as a prediction or evaluation). 
In at least some embodiments, a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later”); starting processing of a second set of queued input data with the at least one ML model (see, e.g., Brueckner paragraph [0090]: “the MLS may support recurring scheduling of related jobs. For example, a client may create an artifact such as a model, and may want that same model to be re-trained and/or re-executed for different input data sets (e.g., using the same configuration of resources for each of the training or prediction iterations) at specified points in time… A respective job may be placed in the MLS job queue for each recurring training or execution iteration” [i.e., the different input data sets imply a second input data set, starting based on the job queue]), and subsequent to the starting processing of the second set but before conclusion of the processing the second set (see, e.g., Brueckner paragraph [0133]: “In some embodiments the machine learning model may support parallelized training of models, in which for example respective (and potentially partially overlapping) subsets of an input data set may be used to train a given model in parallel. 
The duration of one training operation may overlap at least partly with the duration of another in such a scenario, and the input data set may be partitioned for the parallel training sessions using a chunk-level filtering operation” and paragraph [0146]: “An MLS request handler 180 may receive a record extraction request 2310 indicating a sequence of filtering operations that are to be performed on a specified data set located at one or more data sources, such as some combination of shuffling, splitting, sampling, partitioning (e.g., for parallel computations such as map-reduce computations, or for model training operations/sessions that overlap with each other in time and may overlap with each other in the training sets used), and the like” [i.e., the operations on multiple subsets of input data that overlap in time function as starting processing of a second set and, after that processing starts but before it concludes, starting another processing step]), starting post-processing output data resulting from processing a first set of the queued input data (see, e.g., Brueckner paragraph [0110]: “In the depicted embodiment, two execution requests 1171A and 1171B for the same recipe R1 are shown, with respective input data sets IDS1 and IDS2…a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later. The execution of a recipe may be dependent on other jobs in some cases—e.g., upon the completion of jobs associated with input record handling (decryption, decompression, splitting of the data set into training and test sets, etc.)” [i.e., execution of a recipe (starting post-processing) is dependent on another job (processing a first set of a queued input) functioning as a post-processing resulting from a prior queued processing of input] and paragraph [0112]: “FIG.
12 illustrates example sections of a recipe, according to at least some embodiments. In the depicted embodiment, the text of a recipe 1200 may comprise four separate sections—a group definitions section 1201, an assignments section 1204, a dependencies section 1207, and an output/destination section 1210. In some implementations, only the output/destination section may be mandatory” [i.e., the processed recipe executions result in output data]) so that an overall time required to process and post-process the first and second sets is reduced (see, e.g., Brueckner paragraph [0079]: “In the other type of dependency, the execution of one job Jp may be started as soon as some specified phase of another job Jq is completed. This latter type of dependency may be termed a “partial dependency”, and is indicated in FIG. 5 by the “dependsOnPartial” parameter” [i.e., the dependsOnPartial parameter enables starting a job as soon as a phase of another job is complete, reducing the overall time to execute queued recipe executions (that process and post-process first and second data sets)] and paragraph [0082]: “As indicated by J3's ‘dependsOnPartial’ parameter value, J3 can be started when a specified phase or subset of J2's work is complete in the depicted example.
The portion of J2 upon which J3 depends completes at time t4 in the illustrated example, and the execution of J3 therefore begins (in parallel with the execution of the remaining portion of J2) at t4” [i.e., the MLS job scheduler optimizes the number of recipes that are executed and runs jobs in an overlapping manner]), wherein the processing the second set is independent from the post-processing the output data (see, e.g., Brueckner paragraph [0095]: “The model execution request may specify the execution mode (batch, online or local), the input data to be used for the model run (which may be produced using a specified data source or recipe in some cases), the type of output (e.g., a prediction or an evaluation) that is desired, and/or optional parameters (such as desired model quality targets, minimum input record group sizes to be used for online predictions, and so on)” and paragraph [0110]: “A run-time recipe manager 1110 of the MLS may be responsible for the scheduling of recipe executions in some embodiments, e.g., in response to the equivalent of an “executeRecipe” API specifying an input data set. In the depicted embodiment, two execution requests 1171A and 1171B for the same recipe R1 are shown, with respective input data sets IDS1 and IDS2… Respective outputs 1185A and 1185B may be produced by the application of the recipe R1 on IDS1 and IDS2 in the depicted embodiment. Depending on the details of the recipe R1, the outputs 1185A may represent either data that is to be used as input for a model, or a result of a model (such as a prediction or evaluation)… a recipe may be applied asynchronously with respect to the execution request” [i.e., input data sets IDS1 and IDS2 come from separate execution requests (jobs) that trigger the execution for producing input for a model or the evaluation/prediction of a model (processing one data set while post-processing output data) independently (see, e.g., Brueckner, Fig.
11)]), and wherein the first set and the second set are processed at least partially in parallel (see, e.g., Brueckner paragraph [0079]: “In the other type of dependency, the execution of one job Jp may be started as soon as some specified phase of another job Jq is completed. This latter type of dependency may be termed a “partial dependency”, and is indicated in FIG. 5 by the “dependsOnPartial” parameter. For example, J3 depends on the partial completion of J2, and J4 depends on the partial completion of J3” and paragraph [0082]: “The portion of J2 upon which J3 depends completes at time t4 in the illustrated example, and the execution of J3 therefore begins (in parallel with the execution of the remaining portion of J2) at t4” [i.e., the jobs that correspond to recipe executions and their respective data sets are processed in parallel during overlapping phases based on partial dependencies]). Although Brueckner substantially teaches the claimed invention, Brueckner fails to explicitly teach the limitations: and running, by the ML controller using an ML runtime, the at least one ML model, wherein the running includes: executing the at least one ML model by the ML runtime. In the same field, analogous art Kirsche teaches and running, by the ML controller using an ML runtime, the at least one ML model (see, e.g., Kirsche Col. 25 lines 59-65: "the machine learning model lifecycle management system provides a core library that encapsulates many runtime functionalities of the platform, which may be embedded in other architectures. In some embodiments, clients may choose to use the machine learning model lifecycle management system as a service, or integrate the runtime library into their own systems" and Col. 29 lines 53-61: "the machine learning model lifecycle management system may launch a PySpark job on the Spark cluster. For each partition, the Python process may pass the input data to the R runtime via a library that enables Python/R interoperation.
The R runtime then may parse the data into a dataframe, invoke a score function from an R library provided by the machine learning model lifecycle management system, and pass the score back to the Python process"), wherein the running includes: executing the at least one ML model by the ML runtime (see, e.g., Kirsche Col. 29 lines 57-61: "The R runtime then may parse the data into a dataframe, invoke a score function from an R library provided by the machine learning model lifecycle management system, and pass the score back to the Python process" [i.e., the score function invoked from an R library is used to produce a prediction by a trained ML model (executing an ML model by the R runtime)]). Brueckner and Kirsche are analogous art because they are both directed to computing arrangements using knowledge-based models (see, e.g., Brueckner, paragraph [0053], Kirsche, Col. 5 lines 28-43). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Brueckner to incorporate the teachings of Kirsche to run and execute ML models using an ML runtime. Doing so would have allowed Brueckner to use Kirsche's method in order to “leverage a scalable architecture to execute machine learning tasks on very large datasets [which] may reduce the computing resources and time necessary to release new machine learning models and/or improvements to existing machine learning models into production”, as suggested by Kirsche (see, e.g., Kirsche, Col. 6 lines 14-16).

Regarding Independent Claim 13, Brueckner teaches A computing device for processing of data with machine learning (ML) models, the computing device comprising: a memory storing computer-readable instructions; and an ML controller configured to (see, e.g., Brueckner paragraphs [0227-0228]: “The MLS may support isolated execution of certain types of operations for which enhanced security is required.
The MLS may be used for, and may incorporate techniques optimized for, a variety of problem domains covering both supervised and unsupervised learning, such as, fraud detection, financial asset price predictions, insurance analysis, weather prediction, geophysical analysis, image/video processing, audio processing, natural language processing, medicine and bioinformatics and so on… FIG. 46 illustrates such a general-purpose computing device 9000. In the illustrated embodiment, computing device 9000 includes one or more processors 9010 coupled to a system memory 9020 (which may comprise both non-volatile and volatile memory modules) via an input/output (I/O) interface 9030. Computing device 9000 further includes a network interface 9040 coupled to I/O interface 9030”, [0056] and [0059]): receiving, at an ML controller from at least one ML application, at least one request to run at least one ML model (see, e.g., Brueckner paragraphs [0056-0057]: “the MLS may take care of ensuring that a given task is scheduled for execution only when its dependencies (if any dependencies exist) have been met…The MLS may be responsible for ensuring that the dependencies of a given job have been met before the corresponding operations are initiated. The MLS may also be responsible in such embodiments for generating a processing plan for each job, identifying the appropriate set of resources (e.g., CPUs/cores, storage or memory) for the plan, scheduling the execution of the plan, gathering results…”, [0174]: “FIG. 32 is a flow diagram illustrating aspects of operations that may be performed at a machine learning service in response to a request for training and evaluation iterations of a machine learning model, according to at least some embodiments. 
As shown in element 3201, a request to perform one or more TEIs (training-and-evaluation iterations, such as cross-validation iterations) may be received via a programmatic interface such as an MLS I/O library API” [i.e., the MLS (ML controller) receives requests to run training on ML models]); queueing, at the ML controller, pre-processed input data for the at least one ML model (see, e.g., Brueckner paragraph [0110]: “A run-time recipe manager 1110 of the MLS may be responsible for the scheduling of recipe executions in some embodiments… Depending on the details of the recipe R1, the outputs 1185A may represent either data that is to be used as input for a model, or a result of a model (such as a prediction or evaluation). In at least some embodiments, a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later”); starting processing of a second set of queued input data with the at least one ML model (see, e.g., Brueckner paragraph [0090]: “the MLS may support recurring scheduling of related jobs. 
For example, a client may create an artifact such as a model, and may want that same model to be re-trained and/or re-executed for different input data sets (e.g., using the same configuration of resources for each of the training or prediction iterations) at specified points in time… A respective job may be placed in the MLS job queue for each recurring training or execution iteration” [i.e., the different input data sets imply a second input data set, starting based on the job queue]), and subsequent to the starting processing of the second set but before conclusion of the processing the second set (see, e.g., Brueckner paragraph [0133]: “In some embodiments the machine learning model may support parallelized training of models, in which for example respective (and potentially partially overlapping) subsets of an input data set may be used to train a given model in parallel. The duration of one training operation may overlap at least partly with the duration of another in such a scenario, and the input data set may be partitioned for the parallel training sessions using a chunk-level filtering operation” and paragraph [0146]: “An MLS request handler 180 may receive a record extraction request 2310 indicating a sequence of filtering operations that are to be performed on a specified data set located at one or more data sources, such as some combination of shuffling, splitting, sampling, partitioning (e.g., for parallel computations such as map-reduce computations, or for model training operations/sessions that overlap with each other in time and may overlap with each other in the training sets used), and the like” [i.e., the operations on multiple subsets of input data that overlap in time function as starting processing of a second set and, after that processing starts but before it concludes, starting another processing step]), starting post-processing output data resulting from processing a first set of the queued input data (see, e.g.,
Brueckner paragraph [0110]: “In the depicted embodiment, two execution requests 1171A and 1171B for the same recipe R1 are shown, with respective input data sets IDS1 and IDS2…a recipe may be applied asynchronously with respect to the execution request—e.g., as described earlier, a job object may be inserted into a job queue in response to the execution request, and the execution may be scheduled later. The execution of a recipe may be dependent on other jobs in some cases—e.g., upon the completion of jobs associated with input record handling (decryption, decompression, splitting of the data set into training and test sets, etc.)” [i.e., execution of a recipe (starting post-processing) is dependent on another job (processing a first set of a queued input) functioning as a post-processing resulting from a prior queued processing of
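The arrangement mapped above amounts to pipelined execution: the model processes a second input set while output from a first set is post-processed on an independent path, so the two stages overlap in time. The following is a minimal illustrative sketch of that overlap; the names `run_pipeline`, `model`, and `post_process` are hypothetical and appear in neither the application nor the cited references.

```python
import queue
import threading

def run_pipeline(input_sets, model, post_process):
    """Process queued input sets with a model while post-processing
    earlier outputs on an independent thread, so model processing of
    set N overlaps with post-processing of set N-1."""
    results = {}
    output_q = queue.Queue()

    def post_processor():
        # Consumes model outputs as they arrive; runs independently
        # of the model's processing of later sets.
        while True:
            item = output_q.get()
            if item is None:          # sentinel: no more outputs
                break
            idx, output = item
            results[idx] = post_process(output)

    worker = threading.Thread(target=post_processor)
    worker.start()
    for idx, batch in enumerate(input_sets):
        # Inference on set idx; its output is queued for post-processing
        # while the next set starts immediately.
        output_q.put((idx, model(batch)))
    output_q.put(None)                # signal completion
    worker.join()
    return [results[i] for i in range(len(input_sets))]
```

For example, with a model that doubles each value and `sum` as the post-processing step, `run_pipeline([[1, 2], [3, 4]], lambda b: [x * 2 for x in b], sum)` returns `[6, 14]`, with the sum of the first set computed while the second set is still being doubled.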
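Brueckner's "dependsOnPartial" mechanism, quoted at paragraphs [0079] and [0082], starts job J3 once a specified phase of J2 completes rather than waiting for all of J2. A hedged sketch of that partial-dependency behavior follows; the helper names (`run_with_partial_dependency`, `job_a_phases`, `job_b`) are hypothetical, not drawn from Brueckner.

```python
import threading

def run_with_partial_dependency(job_a_phases, job_b, log):
    """Start job B as soon as the first phase of job A completes
    (a 'partial dependency'), instead of waiting for all of A."""
    phase_done = threading.Event()

    def job_a():
        for i, phase in enumerate(job_a_phases):
            phase()
            log.append(f"A:phase{i}")
            if i == 0:
                phase_done.set()   # release B while A keeps running

    def job_b_waiter():
        phase_done.wait()          # blocks only until A's first phase
        job_b()
        log.append("B")

    t_a = threading.Thread(target=job_a)
    t_b = threading.Thread(target=job_b_waiter)
    t_a.start()
    t_b.start()
    t_a.join()
    t_b.join()
```

In any run, "B" is guaranteed to be logged only after "A:phase0", but may land before or after "A:phase1", mirroring J3 beginning in parallel with the remaining portion of J2.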

Prosecution Timeline

Mar 28, 2024: Application Filed
Jun 15, 2025: Non-Final Rejection — §103
Aug 14, 2025: Interview Requested
Sep 10, 2025: Examiner Interview Summary
Sep 10, 2025: Applicant Interview (Telephonic)
Sep 18, 2025: Response Filed
Nov 18, 2025: Final Rejection — §103
Mar 26, 2026: Request for Continued Examination
Apr 01, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574782
NETWORK CONTROLLED SMALL GAP (NCSG) CONFIGURATIONS TO REDUCE INTERRUPTIONS DUE TO INTRA-RAT BANDWIDTH PART (BWP) TRANSITIONS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554981
CLASSIFIER PROCESSING USING MULTIPLE BINARY CLASSIFIER STAGES
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12470907
INITIAL ATTACH PRIORIZATION METHOD AND SYSTEM
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12426128
CROSS-CARRIER SCHEDULING TECHNIQUES FOR MULTIPLE DISCONTINUOUS RECEPTION GROUPS
Granted Sep 23, 2025 (2y 5m to grant)
Patent 11972343
ENCODING AND DECODING INFORMATION
Granted Apr 30, 2024 (2y 5m to grant)
Based on the 5 most recent grants by this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67% (77% with interview, +10.6%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 274 resolved cases by this examiner. Grant probability derived from career allow rate.
