Prosecution Insights
Last updated: April 19, 2026
Application No. 16/177,282

NEURAL NETWORK ORCHESTRATION

Non-Final OA — §103, §112
Filed
Oct 31, 2018
Examiner
WERNER, MARSHALL L
Art Unit
2125
Tech Center
2100 — Computer Architecture & Software
Assignee
Veritone Inc.
OA Round
9 (Non-Final)
66%
Grant Probability
Favorable
9-10
OA Rounds
3y 11m
To Grant
99%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
133 granted / 200 resolved
+11.5% vs TC avg
Strong +44% interview lift
+44.3%
Interview Lift
allow rate in resolved cases with an interview vs. without
Typical timeline
3y 11m
Avg Prosecution
60 currently pending
Career history
260
Total Applications
across all art units
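
The card metrics above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that derivation, assuming a hypothetical per-case record layout (only the 133-granted / 200-resolved figures come from this page):

```python
# Minimal sketch of how the examiner cards above can be derived from raw
# disposition records. The record layout is hypothetical; only the
# 133-granted / 200-resolved figures come from this page.
from dataclasses import dataclass

@dataclass
class Disposition:
    granted: bool        # application issued as a patent
    had_interview: bool  # an examiner interview was held during prosecution

def allow_rate(cases: list[Disposition]) -> float:
    """Career allow rate: granted / resolved."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Disposition]) -> float:
    """Percentage-point gap in allow rate, with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data reproducing the headline ratio (133 / 200 = 66.5%, shown as 66%):
cases = [Disposition(granted=i < 133, had_interview=i % 3 == 0) for i in range(200)]
print(f"allow rate: {allow_rate(cases):.1%}")
```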

Statute-Specific Performance

§101   29.0%   (-11.0% vs TC avg)
§103   37.4%    (-2.6% vs TC avg)
§102    8.5%   (-31.5% vs TC avg)
§112   21.0%   (-19.0% vs TC avg)
Compared against estimated Tech Center averages • Based on career data from 200 resolved cases
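
Each delta in this panel is a plain subtraction from an estimated Tech Center baseline, and the four figures shown all imply the same 40% baseline (29.0 + 11.0 = 37.4 + 2.6 = 8.5 + 31.5 = 21.0 + 19.0 = 40.0). A minimal sketch of the computation, treating that baseline as the panel's estimate:

```python
# Reproduces the per-statute deltas above. The page's four deltas all
# imply the same estimated Tech Center baseline of 40%; the examiner
# rates are taken from the panel, and the baseline is an estimate.
examiner_rate = {"§101": 0.290, "§103": 0.374, "§102": 0.085, "§112": 0.210}
TC_BASELINE = 0.40  # estimated Tech Center average implied by the panel

for statute, rate in examiner_rate.items():
    print(f"{statute}: {rate:.1%} ({rate - TC_BASELINE:+.1%} vs TC avg)")
```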

Office Action

§103, §112
DETAILED ACTION

This action is in response to the Applicant Response filed 05 January 2026 for application 16/177,282, filed 31 October 2018. Claims 1 and 11 are currently amended. Claims 1-20 are pending. Claims 1-20 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments regarding the 35 U.S.C. 112(a) rejections of claims 1-20 have been fully considered but are moot, as they do not address the rejections. The 35 U.S.C. 112(a) rejections of claims 1-20 are maintained.

Claim Objections

Claims 11-20 are objected to because of the following informalities: in claim 11, line 25, "the executable neural network instance" should read "fully-layered executable neural network instance." Claims 12-20 are objected to due to their dependence, directly or indirectly, on claim 11. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites "wherein the performance score is calculated using an algorithm that assesses accuracy and efficiency enhancements provided by the selected layers, wherein the algorithm jointly evaluates classification accuracy together with at least one measure of computational resource usage, including processor cycle count, memory access overhead, or latency reduction achieved during classification." This limitation is not supported by the original description. While the specification discloses determining a performance score based on accuracy ([0035]-[0036], [0040], [0053]), it does not disclose any mention of computational resource usage, cycle count, memory overhead, or latency. There is therefore no support in the original description for the amendment to claim 1, and claim 1 fails to comply with the written description requirement.
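
To make the disputed limitation concrete, the sketch below shows what an accuracy-plus-efficiency score of the kind recited could look like. It is purely hypothetical (the scoring function, latency measure, and weight are all invented); as the rejection stresses, the application as filed discloses only an accuracy-based score, with no computational-resource term.

```python
import time

# Hypothetical illustration of the limitation at issue: a score that
# jointly weighs classification accuracy against a measure of
# computational resource usage (wall-clock latency here). The scoring
# function and weight are invented; per the rejection, the application
# as filed discloses an accuracy-based score only, with no such
# resource-usage term.
def performance_score(classify, inputs, labels, cost_weight=0.1):
    start = time.perf_counter()
    predictions = [classify(x) for x in inputs]
    latency = time.perf_counter() - start            # resource-usage term
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return accuracy - cost_weight * latency          # joint accuracy/efficiency score
```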
Claim 11 recites "wherein the performance score is calculated using an algorithm that assesses accuracy and efficiency enhancements provided by the selected layers, the algorithm evaluating classification accuracy together with at least one measure of computational resource usage, including processor cycle count, memory access overhead, or latency reduction achieved during classification." This limitation is not supported by the original description. While the specification discloses determining a performance score based on accuracy ([0035]-[0036], [0040], [0053]), it does not disclose any mention of computational resource usage, cycle count, memory overhead, or latency. There is therefore no support in the original description for the amendment to claim 11, and claim 11 fails to comply with the written description requirement.

Claims 2-10 and 12-20 are rejected under 35 U.S.C. 112(a) due to their dependence, directly or indirectly, on claims 1 and 11.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 11 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (SkipNet: Learning Dynamic Routing in Convolutional Networks, hereinafter "Wang") in view of Kandasamy et al. (Neural Architecture Search with Bayesian Optimization and Optimal Transport, hereinafter "Kandasamy").
Regarding claim 11 (Currently Amended), Wang teaches a neural network system (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded for a given input) comprising:

a trained layer-selection processor-based neural network model distinct from a classification network (Wang, section 4 – teaches performing experiments using pre-existing known computer models and datasets and evaluating computational costs and accuracy [it would be obvious to a person skilled in the art that a processor is needed to perform these experiments and evaluations]) neural network model (Wang, section 3 – teaches layer selection is accomplished using gating networks interposed between layers; Wang, section 3.1 – teaches neural network gating networks) configured to activate candidate layers selected from one or more neural networks in the ecosystem of pre-trained processor-based neural networks (Wang, section 3 – teaches SkipNets are CNNs [pre-trained] in which individual layers are selectively included or excluded for a given input using gating networks) based on one or more attributes of an input file and a computational analysis of these attributes (Wang, section 3 – teaches the gating networks [computational analysis] mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer) to enable construction of a fully-layered executable neural network instance (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded using gating networks; Wang, section 4 – teaches using SkipNets for classification [fully-layered networks are interpreted as models which take input and produce classification output using layer selection]), …; and

a processor coupled to a memory (Wang, section 4 – teaches performing experiments using pre-existing known computer models and datasets and evaluating computational costs and accuracy [it would be obvious to a person skilled in the art that a processor is needed to perform these experiments and evaluations]), the processor configured to classify the input file using the fully-layered executable neural network instance (Wang, section 4 – teaches using the method for classification using the SkipNets), wherein the classification employs a dynamic configuration of the selected layers that are activated based on the input file's attributes (Wang, section 4 – teaches using the method for classification using the SkipNets), and determine a performance score through computational analysis of classification results, wherein the performance score is calculated using an algorithm that assesses accuracy and efficiency enhancements provided by the selected layers, the algorithm evaluating classification accuracy together with at least one measure of computational resource usage, including processor cycle count, memory access overhead, or latency reduction achieved during classification (Wang, sections 3.2-3.3 – teaches supervised learning which uses objective optimization with reinforcement learning, which determines a performance score based on the classification used to update the skip gates; Wang, section 5 – teaches maintaining accuracy with reduced computational costs [reduced processor cycle count, reduced classification latency]), and continuously monitor the performance of each activated layer during classification, dynamically replacing layers that fall below a performance threshold with other layers from the ecosystem (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded for a given input using gating networks mapping the output of the previous layer or group of layers to a binary decision to execute or bypass the next layer [selecting for each input based on the previous layer means real-time dynamic construction based on a new input]);

wherein the selection and real-time construction of a new neural network reduces computational complexity and resource usage (Wang, section 5 – teaches the selection/construction process is computationally efficient and improves accuracy by specializing and reusing individual components) in neural network-based classification (Wang, section 4 – teaches using the method for classification using the SkipNets) by optimizing the selection of layers to match specific characteristics of the input file (Wang, section 1 – teaches optimizing the skipping [selection] policy; Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded for a given input using gating networks based on input characteristics), thereby enhancing classification accuracy and efficiency (Wang, section 5 – teaches the selection/construction process is computationally efficient and improves accuracy by specializing and reusing individual components), wherein, during execution of the executable neural network instance, the processor dynamically reconfigures selected layers in response to per-layer performance metrics (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded for a given input using gating networks mapping the output of the previous layer or group of layers to a binary decision to execute or bypass the next layer [selecting for each input based on the previous layer means real-time dynamic construction based on a new input]; Wang, section 4 – teaches using SkipNets for classification [fully-layered networks are interpreted as models which take input and produce classification output using layer selection]).

While Wang teaches applying SkipNets to different models, Wang does not explicitly teach an ecosystem of pre-trained neural networks having a plurality of network architectures. Further, while Wang teaches selecting a plurality of layers, Wang does not explicitly teach the selection performed using Bayesian optimization.
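
For context, the gating mechanism cited throughout the rejection (a small gate mapping the previous layer's output to a binary execute-or-bypass decision) can be pictured with a short sketch. This is not part of the Office Action record; the shapes, the linear gate, and the ReLU layer are invented for illustration and appear in neither the rejection nor Wang:

```python
import numpy as np

# Illustrative-only sketch of the gating scheme the rejection cites:
# a small gate maps the previous layer's output to a binary decision
# to execute or bypass the next layer. All details are invented.
rng = np.random.default_rng(0)

def gate(features: np.ndarray, gate_w: np.ndarray) -> bool:
    """Binary execute/bypass decision from the previous layer's features."""
    return float(features @ gate_w) > 0.0

def skip_block(x: np.ndarray, layer_w: np.ndarray, gate_w: np.ndarray) -> np.ndarray:
    if gate(x, gate_w):
        return np.maximum(x @ layer_w, 0.0)  # execute the layer (ReLU)
    return x                                 # bypass: features pass through unchanged

x = rng.standard_normal(8)
for _ in range(4):  # a chain of four gated layers
    x = skip_block(x, rng.standard_normal((8, 8)), rng.standard_normal(8))
```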
Kandasamy teaches an ecosystem of pre-trained processor-based neural networks having a plurality of network architectures (Kandasamy, section 4 – teaches starting with a pool of networks); … the layers selected using a Bayesian optimization for selection of layer combinations using a directed search to systematically exclude suboptimal layer combinations based on their performance scores (Kandasamy, section 4 – teaches using a Bayesian optimization algorithm in the selection of layers for a network, which excludes suboptimal layers; see also Kandasamy, sections 2.2, 3) …; and … wherein the classification employs a dynamic configuration of the selected layers that are activated based on the input file's attributes (Kandasamy, section 4 – teaches using a Bayesian optimization algorithm in the selection of layers for a network; see also Kandasamy, sections 2.2, 3), … continuously monitor the performance of each activated layer during classification, dynamically replacing layers that fall below a performance threshold with other layers from the ecosystem (Kandasamy, section 4 – teaches using a Bayesian optimization algorithm in the selection of layers for a network, which excludes suboptimal layers; see also Kandasamy, sections 2.2, 3); … wherein, during execution of the executable neural network instance, the processor dynamically reconfigures selected layers in response to per-layer performance metrics (Kandasamy, section 4 – teaches using a Bayesian optimization algorithm in the selection of layers for a network; see also Kandasamy, sections 2.2, 3).

It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to modify Wang with the teachings of Kandasamy in order to improve neural architecture searching in the field of model generation using layers of pre-existing models (Kandasamy, Abstract – "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.").
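
NASBOT itself is a Gaussian-process Bayesian optimization framework with an optimal-transport distance over architectures, which is well beyond a snippet. The toy loop below (not from Kandasamy or the Office Action) shows only the shape of the claimed behavior: scoring candidate layer combinations under a budget and discarding suboptimal ones; evaluate() stands in for the expensive train-and-validate step.

```python
import random
from itertools import combinations

# Heavily simplified stand-in for a Kandasamy-style search: score a
# budgeted set of candidate layer combinations and keep only the best,
# discarding suboptimal combinations ("directed search" in the claim).
# The GP surrogate and optimal-transport distance of NASBOT are not
# reproduced; evaluate() is a placeholder performance score.
def evaluate(combo: tuple) -> float:
    return random.random()  # placeholder for train-and-validate scoring

def directed_search(layer_pool: list, size: int, budget: int = 10):
    best_combo, best_score = None, float("-inf")
    candidates = list(combinations(layer_pool, size))
    random.shuffle(candidates)
    for combo in candidates[:budget]:
        score = evaluate(combo)
        if score > best_score:  # exclude suboptimal layer combinations
            best_combo, best_score = combo, score
    return best_combo, best_score

print(directed_search(layer_pool=list(range(12)), size=4))
```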
Regarding claim 16 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the processor is further configured to analyze the input file to identify one or more attributes of the input file prior to the layer activation processor-based neural network model activating the plurality of layers from the ecosystem (Wang, section 3 – teaches the gating networks mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang and Kandasamy for the same reasons as disclosed in claim 11 above.

Regarding claim 17 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the layer activation processor-based neural network model is trained to match one or more attributes of the input file to features of a pre-trained layer of a processor-based neural network in the ecosystem (Wang, section 3 – teaches the gating networks mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer for a pre-trained network; [choosing to execute the next layer based on attributes from the previous layer is interpreted as matching features to the features of the next layer]). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang and Kandasamy for the same reasons as disclosed in claim 11 above.

Regarding claim 18 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the layer activation processor-based neural network model is trained to match one or more attributes of the input file to a portion of a layer of a pre-trained processor-based neural network in the ecosystem (Wang, section 3 – teaches the gating networks mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer for a pre-trained network; [choosing to execute the next layer based on attributes from the previous layer is interpreted as matching features to the features of the next layer, including portions thereof]). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang and Kandasamy for the same reasons as disclosed in claim 11 above.

Regarding claim 19 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the layer activation processor-based neural network model is trained to match one or more attributes of the input file to one or more neurons of a layer of a pre-trained processor-based neural network in the ecosystem (Wang, section 3 – teaches the gating networks mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer for a pre-trained network; [choosing to execute the next layer based on attributes from the previous layer is interpreted as matching features to the features of the next layer, including neurons thereof]). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang and Kandasamy for the same reasons as disclosed in claim 11 above.

Regarding claim 20 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein forming the fully-layered ad hoc neural network using the plurality of layers activated from one or more pre-trained processor-based neural networks in the ecosystem comprises activating a selected plurality of layers while disabling non-selected layers from the ecosystem of pre-trained processor-based neural networks having two or more layers (Wang, section 3 – teaches SkipNets are CNNs [pre-trained] in which individual layers are selectively included or excluded for a given input; see also Wang, Figs. 1-2 – multiple layers), wherein only activated layers can receive or output encoded data in classifying the input file (Wang, section 3 – teaches the gating networks map the output of the previous layer or group of layers to a binary decision to execute or bypass the subsequent layer or group of layers for a pre-trained network; see also Wang, Figs. 1, 2). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang and Kandasamy for the same reasons as disclosed in claim 11 above.

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Kandasamy, and further in view of Girshick et al. (Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, hereinafter "Girshick").

Regarding claim 12 (Original), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the fully-layered ad hoc neural network is formed based on one or more attributes of the first object (Wang, section 3 – teaches the gating networks mapping the output of the previous layer or group of layers [feature maps/attributes for a CNN] to a binary decision to execute or bypass the next layer for a given input [first object]). While Wang in view of Kandasamy teaches performing SkipNets for a given input, Wang in view of Kandasamy does not explicitly teach wherein the input file comprises an image having a first and second object.

Girshick teaches wherein the input file comprises an image having a first and second object (Girshick, Figure 1 – teaches an image having multiple regions, e.g., a person and a horse), wherein the fully-layered ad hoc neural network is formed based on one or more attributes of the first object (Girshick, section 2 – teaches developing regions for each object in the region and propagating each region/object individually through the CNN to classify the object). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to modify Wang in view of Kandasamy with the teachings of Girshick in order to develop a simple and scalable image detection and classification model for multiple objects in an image which outperforms existing methods in the field of neural network generation and object classification
(Girshick, Abstract – "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012—achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features...").

Regarding claim 13 (Previously Presented), Wang in view of Kandasamy and further in view of Girshick teaches all of the limitations of the neural network system of claim 12 as noted above. Wang further teaches wherein the layer activation processor-based neural network model (Wang, section 3 – teaches layer selection is accomplished using gating networks interposed between layers; Wang, section 3.1 – teaches neural network gating networks) is further configured to form, in real-time, a second fully-layered ad hoc neural network (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded for a given input using gating networks mapping the output of the previous layer or group of layers to a binary decision to execute or bypass the next layer [selecting for each input based on the previous layer means real-time construction; because the layers are selected in real-time based on the input, a second plurality of layers would be selected for a second object]) by activating a set of layers selected from the ecosystem (Wang, section 3 – teaches SkipNets are CNNs [pre-trained] in which individual layers are selectively included or excluded for a given input using gating networks), wherein the second fully-layered ad hoc neural network is fully-layered (Wang, section 3 – teaches SkipNets are CNNs in which individual layers are selectively included or excluded using gating networks; Wang, section 4 – teaches using SkipNets for classification [fully-layered networks are interpreted as models which take input and produce classification output using layer selection]); and wherein the processor is configured to classify the second object using the second fully-layered ad hoc neural network (Wang, section 4 – teaches using the method for classification using the SkipNets; [because the layers are selected in real-time based on the input, a second plurality of layers would be selected to classify a second object]). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the teachings of Wang, Kandasamy, and Girshick for the same reasons as disclosed in claim 12 above.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Kandasamy, and further in view of Chou et al. (Unifying and Merging Well-trained Deep Neural Networks for Inference Stage, hereinafter "Chou").

Regarding claim 14 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above.
However, Wang in view of Kandasamy does not explicitly teach wherein the ecosystem of pre-trained processor-based neural networks comprises multiple neural networks of different network architectures.

Chou teaches wherein the ecosystem of pre-trained processor-based neural networks comprises multiple neural networks of different network architectures (Chou, section 1 – teaches merging multiple well-trained feed-forward networks with differing architectures). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to modify Wang in view of Kandasamy with the teachings of Chou in order to produce a more compact model to handle original tasks simultaneously while consuming less computational time and storage than the compound model of the original networks in the field of model generation using layers of pre-existing models (Chou, section 1 – "… (2) The proposed method produces a more compact model to handle the original tasks simultaneously. The compact model consumes less computational time and storage than the compound model of the original networks. It has a great potential to be fitted in low-end systems.").

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Kandasamy, and further in view of Zahavy et al. (Is a Picture Worth a Thousand Words? A Deep Multimodal Architecture for Product Classification in E-Commerce, hereinafter "Zahavy").

Regarding claim 15 (Previously Presented), Wang in view of Kandasamy teaches all of the limitations of the neural network system of claim 11 as noted above. Wang further teaches wherein the input file is a multimedia file (Wang, section 4 – teaches datasets of images). While Wang in view of Kandasamy teaches that SkipNets can be applied to different models, Wang in view of Kandasamy does not explicitly teach wherein the new neural network comprises layers of different classes of data comprising audio, object, and text.

Zahavy teaches wherein the input file is a multimedia file (Zahavy, Experiments section – teaches a dataset of products comprising title, image, and shelf [text/image]), and wherein the fully-layered ad hoc neural network comprises layers of different classes of data comprising audio, object, and text (Zahavy, Methods and Architectures section – teaches a multimodal text CNN architecture [text] and a VGG network for images [object]; see also Zahavy, Abstract, Multi-Modality section – teaches expanding algorithms to other modalities, including audio). It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to modify Wang in view of Kandasamy with the teachings of Zahavy in order to improve classification accuracy in the field of neural network generation (Zahavy, Abstract – "Classifying products precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification based on text and image neural network classifiers. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves classification accuracy over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce businesses, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARSHALL WERNER, whose telephone number is (469) 295-9143. The examiner can normally be reached Monday – Thursday, 7:30 AM – 4:30 PM ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARSHALL L WERNER/
Primary Examiner, Art Unit 2125

Prosecution Timeline

Oct 31, 2018
Application Filed
Mar 10, 2022
Non-Final Rejection — §103, §112
Aug 15, 2022
Response Filed
Sep 01, 2022
Final Rejection — §103, §112
Nov 14, 2022
Response after Non-Final Action
Jan 12, 2023
Request for Continued Examination
Jan 17, 2023
Response after Non-Final Action
Feb 24, 2023
Non-Final Rejection — §103, §112
Jul 31, 2023
Response Filed
Sep 09, 2023
Final Rejection — §103, §112
Nov 14, 2023
Response after Non-Final Action
Dec 14, 2023
Request for Continued Examination
Dec 21, 2023
Response after Non-Final Action
Jan 12, 2024
Non-Final Rejection — §103, §112
Apr 18, 2024
Response Filed
Jul 26, 2024
Final Rejection — §103, §112
Sep 30, 2024
Response after Non-Final Action
Oct 30, 2024
Request for Continued Examination
Nov 12, 2024
Response after Non-Final Action
Jun 23, 2025
Non-Final Rejection — §103, §112
Sep 24, 2025
Response Filed
Oct 15, 2025
Final Rejection — §103, §112
Jan 05, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585968
SYSTEM AND METHOD FOR TESTING MACHINE LEARNING
2y 5m to grant • Granted Mar 24, 2026
Patent 12579111
CROSS-DOMAIN STRUCTURAL MAPPING IN MACHINE LEARNING PROCESSING
2y 5m to grant • Granted Mar 17, 2026
Patent 12568890
Apparatus and Method for Controlling a Growth Environment of a Plant
2y 5m to grant • Granted Mar 10, 2026
Patent 12554967
USING NEGATIVE EVIDENCE TO PREDICT EVENT DATASETS
2y 5m to grant • Granted Feb 17, 2026
Patent 12547918
Stochastic Control with a Quantum Computer
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
66%
Grant Probability
99%
With Interview (+44.3%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 200 resolved cases by this examiner. Grant probability derived from career allow rate.
