Prosecution Insights
Last updated: April 19, 2026
Application No. 18/251,542

A SYSTEM AND METHOD FOR THE UNIFICATION AND OPTIMIZATION OF MACHINE LEARNING INFERENCE PIPELINES

Non-Final OA: §101, §102, §103
Filed: May 03, 2023
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Saferide Technologies Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% (strong), comparing resolved cases with vs. without interview
Typical Timeline: 3y 4m average prosecution; 54 applications currently pending
Career History: 457 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 403 resolved cases

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8, 12-21 and 26-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of a mathematical relationship without significantly more. The claims recite obtaining an ML pipeline, generating respective pipeline representations, merging to create a common representation, optimizing the common representation, and generating a target model based on the common representation, wherein optimizing includes knowledge distillation and generating the target model uses an autoencoder residual based on training data. This judicial exception is not integrated into a practical application because it is merely linked to the field of vehicles or computing in general. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because additional elements such as computer readable media and processing circuitry are generic computer parts.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8, 13, 14, 21, 26 and 27 are rejected under 35 U.S.C. 102(a)(1) as being described by US20190108417A1 by Talagala et al. (Tala).

Tala teaches claims 1, 14 and 27: A system for unification of machine learning inference pipelines, the system comprising a processing circuitry configured to: (Tala fig. 5)

obtain one or more machine learning inference pipelines, (Tala fig. 2a, pipelines 206a-c) each comprised of a sequence of one or more data processing elements, and each having (a) at least one input provided to the respective data processing element, and (b) at least one output provided by the respective data processing element, wherein the output of a given data processing element of the data processing elements is the input of a subsequent data processing element of the sequence, if any, and wherein at least one of the data processing elements is a trained machine learning model; (Tala para 48 "machine learning pipelines 202, 204, 206 a-c comprise various machine learning features, components, objects, modules, and/or the like to perform various machine learning operations such as algorithm training/inference, feature engineering, validations, scoring, and/or the like." Tala para 72 "one or more inference pipelines 206 a-c for processing new input data 210 using the trained machine learning models, and one or more policy pipelines 202 for managing")

generate, for each of the machine learning inference pipelines, a respective pipeline representation comprising representations of the sequence, based on the data processing elements, the inputs of the data processing elements, and the outputs of the data processing elements; (Tala para 50 "each pipeline 202, 204, 206 a-c is associated with an analytic engine and executes on a specific analytic engine type for which the pipeline is 202, 204, 206 a-c configured. As used herein, an analytic engine comprises the instructions, code, functions, libraries, and/or the like for performing machine learning numeric computation and analysis. Examples of analytic engines may include Spark, Flink, TensorFlow, Caffe, Theano, and PyTorch. Pipelines 202, 204, 206 a-c developed for these engines may contain components provided in modules/libraries for the particular analytic engine (e.g., Spark-ML/MLlib for Spark, Flink-ML for Flink, and/or the like)." The representation is the instructions, code, functions, and libraries.)

merge the plurality of machine learning inference pipeline representations into a common representation, representing the plurality of machine learning inference pipeline representations; (Tala para 51 "the ML management apparatus 104 logically groups the machine learning pipelines 202, 204, 206 a-c based on a desired objective, result, problem, and/or the like.")

optimize the common representation using one or more optimization schemes; and (Tala para 54 "the ML management apparatus 104 dynamically selects machine learning pipelines 202, 204, 206 a-c for an objecting [sic] when the objective is determined, received, and/or the like based on the characteristics, settings, and/or the like of the machine learning pipelines 202, 204, 206 a-c…. Furthermore, the different logical groupings of pipelines 202, 204, 206 a-c may be merged, combined, and/or the like based on the objective being analyzed.")

generate, based on the common representation, a target model, wherein the target model consumes less resources than the machine learning inference pipelines. (Tala para 94 "a training pipeline 204 trains 606 one or more machine learning models for analyzing the objective at one or more inference pipelines 206 a-c." Tala para 85 "The various characteristics may include determining which logical learning layer 200, 225, 250 consumes the least amount of resources, produces the most accurate results, executes in the least amount of time, performs the best in terms of latency, and/or the like." The layers make up pipelines, and the pipelines are grouped based on historical characteristics; see above.)

Tala teaches claims 8 and 21: The system of claim 1, wherein at least one of the data processing elements is a pre-processing element. (Tala para 48 "machine learning pipelines 202, 204, 206 a-c comprise various machine learning features, components, objects, modules, and/or the like to perform various machine learning operations such as algorithm training/inference, feature engineering, validations, scoring, and/or the like." Feature engineering is pre-processing.)

Tala teaches claims 13 and 26: The system of claim 1, wherein at least part of a first machine learning inference pipeline of the machine learning inference pipelines is designed to operate on a first framework and at least part of a second machine learning inference pipeline of the machine learning inference pipelines is designed to operate on a second framework, different than the first framework. (Tala para 49 "In various embodiments, each pipeline 202, 204, 206 a-c executes on a distinct or separate device.")

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 3, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US20190108417A1 by Talagala et al. (Tala), DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING by Han et al., and Distilling the Knowledge in a Neural Network by Hinton et al.

Claims 4, 5, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US20190108417A1 by Talagala et al. (Tala), DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING by Han et al., Distilling the Knowledge in a Neural Network by Hinton et al., and FITNETS: HINTS FOR THIN DEEP NETS by Romero et al.

Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US20190108417A1 by Talagala et al. (Tala), DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING by Han et al., Distilling the Knowledge in a Neural Network by Hinton et al., FITNETS: HINTS FOR THIN DEEP NETS by Romero et al., and US20200301799A1 to Manivasagam et al. (Mani).

Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US20190108417A1 by Talagala et al. (Tala), DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING by Han et al., Distilling the Knowledge in a Neural Network by Hinton et al., FITNETS: HINTS FOR THIN DEEP NETS by Romero et al., and Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge by Kang et al.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US20190108417A1 by Talagala et al. (Tala) and US20200301799A1 to Manivasagam et al. (Mani).

Tala teaches claims 2 and 15: The system of claim 1, wherein the optimization schemes include one or more of: (Tala para 54 "the ML management apparatus 104 dynamically selects machine learning pipelines 202, 204, 206 a-c for an objecting [sic] when the objective is determined, received, and/or the like based on the characteristics, settings, and/or the like of the machine learning pipelines 202, 204, 206 a-c…. Furthermore, the different logical groupings of pipelines 202, 204, 206 a-c may be merged, combined, and/or the like based on the objective being analyzed.") Tala doesn't teach optimizing by pruning or quantizing. However, Han teaches (a) quantization; (b) pruning; or (Han title "COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION") Han, Tala and the claims are all directed to optimizing neural networks. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to prune or quantize because "[p]runing, reduces the number of connections by 9× to 13×; Quantization then reduces the number of bits that represent each connection from 32 to 5." Han abs.

Tala doesn't teach knowledge distillation. However, Hinton teaches knowledge distillation. (Hinton abs "improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model.") Tala, Hinton and the claims all optimize a neural network. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to distill knowledge because distilled NNs "can be trained rapidly and in parallel." Hinton abs.

Hinton teaches claims 3 and 16: The system of claim 2, wherein the knowledge distillation utilizes teacher-student models. (Hinton abs "We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse." The full model is the teacher; the specialist models are students. Hinton also calls the full model the cumbersome model, and the specialist model the distilled model.)

Tala teaches claims 4 and 17: The system of claim 3, wherein the processing circuitry is further configured to:… common representation… (Tala para 51 "the ML management apparatus 104 logically groups the machine learning pipelines 202, 204, 206 a-c based on a desired objective, result, problem, and/or the like.") Tala doesn't teach a student-teacher distillation. However, Hinton teaches how to execute a teacher model based on the (Hinton sec. 2 p. 3 "knowledge is transferred to the distilled model by training it on a transfer set and using a soft target distribution for each case in the transfer set that is produced by using the cumbersome model with a high temperature in its softmax. The same high temperature is used when training the distilled model, but after it has been trained it uses a temperature of 1. When the correct labels are known for all or some of the transfer set, this method can be significantly improved by also training the distilled model to produce the correct labels." The transfer set is the training set, and each case in the transfer set is the training results set (labels).) wherein generating the target model is performed by training the target model as a student model based on the training set, the training results set, (Hinton sec. 2 p. 3, quoted above; again, the transfer set is the training set, and each case in the transfer set is the training results set (labels).) Hinton doesn't teach intermediate results. However, Romero teaches training on an intermediate results set. (Romero abs "training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher…") Romero, Hinton, Tala and the claims are all about optimizing neural networks. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to train on intermediate results "to improve the training process…" Romero abs.

Romero teaches claims 5 and 18: The system of claim 4, wherein the intermediate results set include at least one of: (a) an autoencoder residual; (b) a score; or (Romero p. 4 "The curriculum can be seen as composed of two stages: first learn intermediate concepts via the hint/guided layer transfer, then train the whole student network jointly" Romero p. 4 algorithm 1 "Algorithm 1 FitNet Stage-Wise Training. The algorithm receives as input the trained parameters WT of a teacher, the randomly initialized parameters WS of a FitNet, and two indices h and g corresponding to hint/guided layers, respectively. Let WHint be the teacher's parameters up to the hint layer h." The guided weights are the score.) (c) a signal importance weight. (The hint weights, Romero p. 4, are the signal importance weights.)

Tala teaches claims 6 and 19: The system of claim 4, wherein the training set is a (Tala para 38 "the ML management apparatus 104 may adjust the machine learning system by retraining machine learning models, by gathering more data, by using different machine learning algorithms, and/or the like.") Tala doesn't teach synthetic or simulated data. However, Mani teaches a synthetic training data set, generated using a machine learning generative model or a physical simulation. (Mani abs "learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor.") Tala, Mani and the claims all anticipate different types of training data. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use synthetic training data because it "can help improve the ability of an autonomous vehicle to effectively provide vehicle services to others and support the various members of the community in which the autonomous vehicle is operating…" Mani para 13.

Tala teaches claims 7 and 20: The system of claim 4, wherein the generating of the target model includes (Tala para 49 "In various embodiments, each pipeline 202, 204, 206 a-c executes on a distinct or separate device.") Tala doesn't teach partitioning. However, Kang teaches partitioning of the target model into components according to resources of a target computing device that the target model is designed to be installed thereon. (Kang abs "automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for best latency or best mobile energy." Kang p. 619-620 "Partitioning at the back-end provides better performance since the application can minimize the data transfer overhead, while taking advantage of the powerful server to execute the more compute heavy layers at the back-end.") Tala, Kang and the claims all run their models on target devices. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to partition and run the partitions based on resources of the target device for "best latency or best mobile energy." Kang abs.

Tala teaches claim 12: The system of claim 1, wherein the target model is designed to be installed on a target computing device and wherein the target computing device is Tala doesn't teach cars. However, Mani teaches the target computing device is an in-vehicle computing device. (Mani para 13 "can help improve the ability of an autonomous vehicle to effectively provide vehicle services to others and support the various members of the community in which the autonomous vehicle is operating…") Tala, Mani and the claims all anticipate different types of training data. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use synthetic training data because it "can help improve the ability of an autonomous vehicle to effectively provide vehicle services to others and support the various members of the community in which the autonomous vehicle is operating…" Mani para 13.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday - Thursday, 8-4 PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142
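
Cited Prior-Art Technique Sketches

The §103 combinations above rely on a handful of well-known model compression and deployment techniques. The short Python sketches below illustrate, in simplified form, what the cited references teach. They are illustrative only: they do not reproduce the applicant's claimed system or the references' exact procedures, and every parameter, shape, and threshold is an assumed value chosen for the example.

Han is cited for pruning and quantization. A minimal sketch of magnitude pruning followed by shared-value (codebook) quantization, assuming an evenly spaced codebook in place of Han's trained k-means centroids and an arbitrary sparsity target:

# Magnitude pruning then codebook quantization, per the Han citation above.
# The sparsity target, cluster count, and toy weight matrix are assumptions.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_to_codebook(weights: np.ndarray, n_clusters: int = 32) -> np.ndarray:
    """Replace each surviving weight with the nearest of n_clusters shared values
    (32 shared values is roughly 5 bits per weight, as the Han abstract notes)."""
    nonzero = weights[weights != 0.0]
    if nonzero.size == 0:
        return weights
    # Evenly spaced codebook stands in for trained k-means centroids.
    codebook = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    quantized = np.copy(weights)
    idx = np.abs(nonzero[:, None] - codebook[None, :]).argmin(axis=1)
    quantized[weights != 0.0] = codebook[idx]
    return quantized

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)         # most connections removed
w_compressed = quantize_to_codebook(w_pruned, 32)   # remaining weights share 32 values
print(f"nonzero after pruning: {np.count_nonzero(w_pruned) / w.size:.1%}")
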
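Hinton is cited for temperature-scaled soft-target distillation into a student model (the claims 2-4 and 15-17 mapping). A minimal sketch assuming a KL-divergence soft loss blended with hard-label cross-entropy; the temperature, loss weighting, and toy models are assumptions, not values from the record:

# Teacher-student distillation with a high-temperature softmax, per the Hinton
# passage quoted in the Office Action. All hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    # Soft targets: teacher and student compared at the same high temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Hard targets: ordinary cross-entropy on the correct labels (temperature 1).
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

teacher = torch.nn.Sequential(torch.nn.Linear(16, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10))
student = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)          # the "cumbersome" teacher is only run forward
loss = distillation_loss(student(x), t_logits, y)
loss.backward()                    # gradients update only the student
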
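Romero (FitNets) is cited for training the student on the teacher's intermediate representations (the claims 5 and 18 mapping). A minimal sketch of the stage-one hint loss, assuming a linear regressor from the thinner student layer to the teacher's hint layer:

# FitNets-style hint training: regress the student's guided-layer features onto
# the teacher's intermediate (hint) features. Layer sizes are assumptions.
import torch
import torch.nn.functional as F

teacher_hidden, student_hidden = 256, 64
# Regressor maps the thinner student features into the teacher's feature space.
regressor = torch.nn.Linear(student_hidden, teacher_hidden)

def hint_loss(student_guided: torch.Tensor, teacher_hint: torch.Tensor) -> torch.Tensor:
    """Stage-one FitNets objective: L2 between regressed student features and the teacher hint."""
    return F.mse_loss(regressor(student_guided), teacher_hint)

# Toy intermediate activations standing in for real forward-pass features.
student_feats = torch.randn(8, student_hidden, requires_grad=True)
teacher_feats = torch.randn(8, teacher_hidden)   # treated as fixed targets
loss = hint_loss(student_feats, teacher_feats)
loss.backward()
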
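Kang (Neurosurgeon) is cited for partitioning a model between a resource-constrained target device and a server at layer granularity (the claims 7 and 20 mapping). A minimal sketch that picks the split point minimizing estimated end-to-end latency; the per-layer timings, activation sizes, and bandwidth are invented illustrative numbers, not profiled values:

# Layer-granularity partitioning between a device and a server, in the spirit of
# the Kang abstract quoted above. All numbers below are illustrative assumptions.
device_ms = [4.0, 9.0, 15.0, 6.0, 2.0]   # per-layer latency on the target device
server_ms = [0.5, 1.0, 2.0, 0.8, 0.3]    # per-layer latency on the server
output_kb = [512, 256, 64, 16, 4]        # activation size leaving each layer
input_kb = 1024                          # raw input size if everything runs server-side
bandwidth_kb_per_ms = 12.5               # roughly a 100 Mbps uplink

def best_split(device_ms, server_ms, output_kb, input_kb, bw):
    """Return (split, latency): layers [0, split) run on-device, the rest on the server."""
    best = (0, float("inf"))
    for split in range(len(device_ms) + 1):
        transfer_kb = input_kb if split == 0 else output_kb[split - 1]
        transfer_ms = 0.0 if split == len(device_ms) else transfer_kb / bw
        latency = sum(device_ms[:split]) + transfer_ms + sum(server_ms[split:])
        if latency < best[1]:
            best = (split, latency)
    return best

split, latency = best_split(device_ms, server_ms, output_kb, input_kb, bandwidth_kb_per_ms)
print(f"run layers 0..{split - 1} on the device, the rest on the server (~{latency:.1f} ms)")
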

Prosecution Timeline

May 03, 2023: Application Filed
Mar 06, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767: NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12554795: REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530630: Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524694: OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524646: VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
