Prosecution Insights
Last updated: April 19, 2026
Application No. 17/950,871

SYSTEMS AND METHODS FOR INTEGRATED ORCHESTRATION OF MACHINE LEARNING OPERATIONS

Non-Final OA — §101, §103, §112
Filed: Sep 22, 2022
Examiner: MRABI, HASSAN
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rps Canada Inc.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% — above average (285 granted / 363 resolved; +23.5% vs TC avg)
Interview Lift: +32.4% — allowance rate among resolved cases with an interview vs. without (a strong lift)
Avg Prosecution: 2y 6m typical timeline; 19 applications currently pending
Total Applications: 382 across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 363 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's communication received on 09/22/2022 for application number 17/950,871. The Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims. Claims 1-10 and 11-20 are presented for examination.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/22/2022 and 04/05/2024 were filed prior to the current Office Action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 2 recites the limitation "transmitting, by the computer, the input data to a host server hosting the machine learning model to be executed for the iteration" in line 3. The limitation includes the term "the iteration"; there is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 and 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claims 1-10 and 11-20 are drawn to a method (claims 1-10) and a system (claims 11-20), each of which is within the four statutory categories (a process, a machine).

Step 2A - Prong One: In Prong One of Step 2A, the claims are analyzed to evaluate whether they recite a judicial exception. Claim 1 recites: receiving, by a computer, from a client device a request for an operation using client data; generating, by the computer, an execution pipeline having a plurality of machine learning models hosted on a plurality of host servers, each machine learning model of the execution pipeline selected based upon the request and the client data; formatting, by the computer, the client data as input data for a first machine learning model of the plurality of machine learning models; and iteratively executing, by the computer, the plurality of machine learning models of the execution pipeline, comprising formatting output data from a preceding machine learning model for a subsequent machine learning model of the plurality of machine learning models.

The limitations recite "generating, by the computer, an execution pipeline having a plurality of ..." which can be defined as a concept that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions.
For example, the claimed "generating," under its broadest reasonable interpretation when read in light of the specification, encompasses orchestrating a process or the server to format or normalize the data according to a data specification and generating input data for subsequent models in the pipeline. Thus, the limitation is a mental process.

The limitations recite "formatting, by the computer, the client data as input data for a first machine learning model of the plurality of machine learning models ..." which can be defined as a concept that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions. For example, the claimed "formatting," under its broadest reasonable interpretation when read in light of the specification, encompasses modifying and updating the data in order to meet the machine learning model specification. Thus, the limitation is a mental process.

The limitations recite "iteratively executing, by the computer, the plurality of machine learning models ..." which can be defined as a concept that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions. For example, the claimed "iteratively executing" and "formatting," under their broadest reasonable interpretation when read in light of the specification, encompass defining the execution steps and formatting data. Thus, the limitations are mental processes.
Step 2A - Prong Two: Claim 1 recites additional elements such as "machine learning models" and "receiving, by a computer, from a client device a request for an operation using client data," which are recited at a high level. These elements merely recite a generic computer (or an equivalent) in conjunction with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The "machine learning models" and "computer" are additional elements that amount to merely the words "apply it" (or an equivalent), or are mere instructions to implement an abstract idea or other exception on a computer. The limitations do not integrate the judicial exception into a practical application.

Dependent claims 2-10 and 12-20 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim). The Examiner has therefore determined that the elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.

Step 2B: The claim does not provide an inventive concept (significantly more than the abstract idea). The "machine learning models" and "receiving, by a computer, from a client device a request for an operation using client data" steps are considered insignificant extra-solution activity. The limitations are mere data gathering and output using a computer and machine learning models, recited at a high level of generality, and amount to processing input data using artificial intelligence on a generic computer. The claim is therefore ineligible.
Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. Dependent claims 2-10 and 12-20 fail to include any additional elements; as set forth above, each of their limitations is further part of the abstract idea. Accordingly, the claims are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Tong, US Patent US 12387132 B1 (hereinafter Tong), in view of Gold et al., US Patent Application Publication US 20190121673 A1 (hereinafter Gold).
Regarding claim 1, Tong teaches: A computer-implemented method comprising: receiving, by a computer, from a client device a request for an operation using client data ([0039], [0046], wherein Tong describes receiving a request for formatting data according to the model training system); generating, by the computer, an execution pipeline having a plurality of machine learning models hosted on a plurality of host servers, each machine learning model of the execution pipeline selected based upon the request and the client data (Abstract, [0015-0016], wherein Tong describes transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit ('GPU') servers and provides techniques for orchestrating building and executing machine learning pipelines for data sets); and formatting, by the computer, the client data as input data for a first machine learning model of the plurality of machine learning models ([0039], [0046], [0085], wherein Tong incorporates formatting messages and requests according to various types of interfaces implemented by the model training system).

Tong does not teach: and iteratively executing, by the computer, the plurality of machine learning models of the execution pipeline, comprising formatting output data from a preceding machine learning model for a subsequent machine learning model of the plurality of machine learning models.
However, in the analogous art of integrated orchestration of machine learning operations, Gold teaches iteratively executing, by the computer, the plurality of machine learning models of the execution pipeline, comprising formatting output data from a preceding machine learning model for a subsequent machine learning model of the plurality of machine learning models ([0132], [0263], [0266], wherein Gold describes using algorithms that iteratively process data, utilizing the input data as input into the machine learning algorithms that are being executed on the GPU servers, wherein different machine learning models may require input data that is in different formats, contains different types of data, and so on. For example, a first machine learning model may utilize a vector as input while a second machine learning model may utilize a matrix as input. Gold describes transforming unstructured data into structured data by extracting information from the unstructured format and populating the data in a structured format, and transforming structured data in a first format to a second format that is expected by the one or more machine learning models).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Tong with Gold by incorporating Gold's method of iteratively executing, by the computer, the plurality of machine learning models of the execution pipeline, comprising formatting output data from a preceding machine learning model for a subsequent machine learning model, into Tong's method of generating, by the computer, an execution pipeline having a plurality of machine learning models hosted on a plurality of host servers, each machine learning model of the execution pipeline selected based upon the request and the client data, for the purpose of incorporating an ideal data hub for the AI training pipeline that delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently (Gold: [0137]).

Regarding claim 2, Tong as modified by Gold teaches: for each iteration of at least one iteration: transmitting, by the computer, the input data to a host server hosting the machine learning model to be executed for the iteration (FIG. 13, Abstract, [0132], [0136], [0195], wherein Gold processes the dataset iteratively by transmitting the dataset to a server and executing models selectively).

Regarding claim 3, Tong as modified by Gold teaches: identifying, by the computer, a data-transfer requirement for the host server of the machine learning model, wherein the computer transmits the input data to the host server according to the data-transfer requirement ([0039], [0046], wherein Tong formats requests according to various types of interfaces implemented by the models, wherein the interfaces may support various requests to manually request and/or configure deployment.
For example, requests to search for models or other artifacts may be received via the interface and handled via model search and model indexing. Requests to deploy pipelines or other artifacts may be received. For example, requests (or a deployment configuration specified in other requests, like a build/execute request) may identify an endpoint deployment, where trained pipelines can be served as real-time endpoints or as batch processes for real-time and batch inference. A cataloged asset may be specified (instead of a direct deployment), in some embodiments, to be quickly located and, when desirable, deployed or used for further pipeline building/training. For example, embeddings can be used to transform a graph data set into a low-dimension vector that can be joined with tabular data sets, allowing the graph data to be easily reused by other machine learning systems that can handle tabular data. In such embodiments, further information such as access information and/or data manipulation descriptions (e.g., how to join) may be provided as part of serving a machine learning model). ([0195], wherein Gold describes a lifecycle in which a full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high-quality data points directly translates to more accurate models and better insights.
Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form; 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label; 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster; 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters; and 5) evaluating, including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data).

Regarding claim 4, Tong as modified by Gold teaches: receiving, by the computer, the output data resulting from the machine learning model of the iteration from a host server hosting the machine learning model (FIG. 13, [0266-0267], wherein Gold describes steps of processing a dataset: executing the data after identifying the learning models to be executed, generating a transformed dataset, generating an output, and sending the output to a server and then to a machine learning model (1316), wherein the steps repeat as illustrated in FIG. 13).

Regarding claim 5, Tong as modified by Gold teaches: for each iteration of at least one iteration: generating, by the computer, the input data for the machine learning model of the iteration based upon formatting the output data of the preceding machine learning model of a preceding iteration (FIG. 13, [0266-0267], wherein Gold describes steps of processing a dataset: executing the data after identifying the learning models to be executed, generating a transformed dataset as input for the next steps, generating an output, and sending the output to a server and then to a machine learning model (1316), wherein the steps repeat as illustrated in FIG. 13).

Regarding claim 6, Tong as modified by Gold teaches: identifying, by the computer, a data specification for the machine learning model of a host server, wherein the input data is formatted by the computer for the machine learning model according to the data specification ([0039], [0046], wherein Tong formats requests according to various types of interfaces implemented by the models, wherein the interfaces may support various requests to manually request and/or configure deployment. For example, requests to search for models or other artifacts may be received via the interface and handled via model search and model indexing. Requests to deploy pipelines or other artifacts may be received. For example, requests (or a deployment configuration specified in other requests, like a build/execute request) may identify an endpoint deployment, where trained pipelines can be served as real-time endpoints or as batch processes for real-time and batch inference. A cataloged asset may be specified (instead of a direct deployment), in some embodiments, to be quickly located and, when desirable, deployed or used for further pipeline building/training. For example, embeddings can be used to transform a graph data set into a low-dimension vector that can be joined with tabular data sets, allowing the graph data to be easily reused by other machine learning systems that can handle tabular data. In such embodiments, further information such as access information and/or data manipulation descriptions (e.g., how to join) may be provided as part of serving a machine learning model).
([0195], wherein Gold describes a lifecycle in which a full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high-quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form; 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label; 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster; 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters; and 5) evaluating, including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data).

Regarding claim 7, Tong as modified by Gold teaches: identifying, by the computer, a data specification for the machine learning model of a host server; and selecting, by the computer, the machine learning model of the execution pipeline based upon a type of file of the client data received from the client device (FIG. 13, [0136-0137], [0244], [0247-0248], wherein Gold initiates steps of collecting the labeled data that is crucial for training an accurate AI model, wherein a full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data before executing the pipeline, and wherein the steps include transforming the dataset into a format convenient for the models), ([0039], [0046], wherein Tong describes receiving a request for formatting data according to the model training system).
Regarding claim 8, Tong as modified by Gold teaches: identifying, by the computer, a data specification for the machine learning model of a host of the machine learning model; and selecting, by the computer, the machine learning model of the execution pipeline based upon a type of the output data of a next machine learning model selected for the execution pipeline (FIG. 13, [0136-0137], [0244], [0247-0248], wherein Gold initiates steps of collecting the labeled data that is crucial for training an accurate AI model, wherein a full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data before executing the pipeline, and wherein the steps include transforming the dataset into a format convenient for the models), ([0039], [0046], wherein Tong describes receiving a request for formatting data according to the model training system).

Regarding claim 9, Tong as modified by Gold teaches: determining, by the computer, an order of execution for the plurality of machine learning models of the execution pipeline based upon the request from the client device (FIG. 13, [0136-0137], [0244], [0247-0248], wherein Gold initiates steps of collecting the labeled data that is crucial for training an accurate AI model, wherein a full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data before executing the pipeline, wherein the steps include transforming the dataset into a format convenient for the models, and wherein FIG. 13 illustrates the requests being processed and sent to the server), ([0039], [0046], wherein Tong describes receiving a request for formatting data according to the model training system).
Regarding claim 10, Tong as modified by Gold teaches: wherein at least two machine learning models are executed in parallel for a particular iteration using the input data ([0138], [0196-0197], wherein Gold describes concurrent workloads and phases that include executing models concurrently).

Regarding claim 11, Tong teaches: A system comprising: a computer comprising a processor configured to (Abstract). The claim is similar in scope to claim 1; therefore, the claim is rejected under a similar rationale.

Regarding claims 12-20, each claim is similar in scope to claims 2-10, respectively; therefore, each claim is rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI, whose telephone number is (571) 272-8875. The examiner can normally be reached Monday-Friday, 7:30am-5pm EST (alternate Fridays).
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HASSAN MRABI/
Examiner, Art Unit 2144
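For readers less familiar with the claimed subject matter, the steps recited in claim 1 can be sketched in code. This is a hypothetical illustration only, not the applicant's actual implementation; the `Model` class, `execute_pipeline` function, and the toy "vectorizer"/"scorer" models are invented names standing in for the claimed "machine learning models hosted on a plurality of host servers."

```python
from typing import Any, Callable

class Model:
    """Hypothetical stand-in: pairs a callable 'host server' endpoint with
    the per-model data specification (input_format) that endpoint expects."""
    def __init__(self, name: str, run: Callable[[Any], Any],
                 input_format: Callable[[Any], Any]):
        self.name = name
        self.run = run                    # call to the hosting server
        self.input_format = input_format  # format data to this model's spec

def execute_pipeline(client_data: Any, pipeline: list) -> Any:
    """Iteratively execute the models, formatting each preceding model's
    output as input for the subsequent model (claim 1's final limitation)."""
    data = client_data
    for model in pipeline:
        formatted = model.input_format(data)  # format per the model's spec
        data = model.run(formatted)           # execute on the host server
    return data

# Toy two-model pipeline: the "vectorizer" emits a matrix that must be
# reformatted to a vector before the "scorer" consumes it.
vectorizer = Model("vectorizer",
                   run=lambda s: [[ord(c) % 7 for c in s]],
                   input_format=lambda s: s.lower())
scorer = Model("scorer",
               run=lambda v: sum(v),
               input_format=lambda m: m[0])  # matrix -> vector reformat

result = execute_pipeline("ML Ops", [vectorizer, scorer])
```

The per-model `input_format` hook is the piece the examiner maps to Gold's format transformations (e.g., a first model taking a vector while a second takes a matrix).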

Prosecution Timeline

Sep 22, 2022 — Application Filed
Jan 10, 2026 — Non-Final Rejection: §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579411 — RESONATOR NETWORK BASED NEURAL NETWORK — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579710 — Transforming Content Across Visual Mediums Using Artificial Intelligence and User Generated Media — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12554924 — Computer-Implemented Methods and Systems for Generative Text Painting — Granted Feb 17, 2026 (2y 5m to grant)
Patent 12547905 — PROBABILISTIC ENTITY-CENTRIC KNOWLEDGE GRAPH COMPLETION — Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536782 — METHOD AND APPARATUS FOR TRAINING CLASSIFICATION TASK MODEL, DEVICE, AND STORAGE MEDIUM — Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 99% (+32.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
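The figures above are simple ratios over the examiner's career data. Assuming the dashboard derives them as stated (grant probability = granted / resolved, and the interview lift as the difference between with- and without-interview allowance rates — an assumption, since the without-interview rate is not shown on the page), the arithmetic is:

```python
# Career allow rate from the counts shown above (285 granted / 363 resolved).
granted, resolved = 285, 363
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.1%}")  # the page truncates to 78%

# Interview lift: assumed to be the gap between with- and without-interview
# allowance rates. The page shows 99% with interview and a +32.4% lift;
# the without-interview rate below is recovered only under that assumption.
with_interview = 0.99
lift = 0.324
without_interview = with_interview - lift
print(f"Implied without-interview rate: {without_interview:.1%}")
```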
