Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,035

NEURAL NETWORK VERIFICATION FOR NEURAL NETWORK CONTROLLERS

Final Rejection — §101, §103
Filed: Feb 17, 2023
Examiner: BOSTWICK, SIDNEY VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 4y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 52% of resolved cases (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: strong, +38.2% on resolved cases with interview
Typical Timeline: 4y 7m average prosecution; 68 applications currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 136 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicants' Amendment filed on January 12, 2026, in which claims 1, 7, 8, 14, 15, and 20 are currently amended. Claims 2, 9, and 16 are canceled. Claims 1, 3-8, 10-15, and 17-20 are currently pending.

Response to Arguments

Applicant's arguments with respect to the rejection of claims 1, 3-8, 10-15, and 17-20 under 35 U.S.C. 101 based on the amendment have been considered but are not persuasive. Applicant's own characterization of the invention on p. 6 of the Remarks submitted January 12, 2026, that the claims are directed towards "a neural network verification system designed to ensure that neural network controllers operate according to predefined temporal specifications," reinforces the interpretation that the claim as a whole is directed towards a mental process of verification and assurance. Even if, for the sake of argument, a neural network were narrowly interpreted as solely a computer component (which Examiner does not concede), it is seen as a generic computer component used to apply the judicial exceptions of verification and assurance, or to "identify", "reformulate", and "generate" as claimed. In other words, the claims are not seen as improving the recited generic neural networks but rather as using them to apply a judicial exception. The claim is therefore seen as relying on the judicial exception itself to provide the asserted technical improvement (see MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement." MPEP 2106.05(a) also recites: "An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome." Finally, MPEP 2106.07(a)(II) recites: "employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application"). For at least these reasons and those further detailed below, Examiner asserts that it is reasonable and appropriate to maintain the rejection under 35 U.S.C. 101.

Applicant's arguments with respect to the rejection of claims 1, 3-8, 10-15, and 17-20 under 35 U.S.C. 103 based on the amendment have been considered but are not persuasive. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments also do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter.

Regarding Claim 1: Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to a system, which is a product, one of the statutory categories.

Step 2A Prong One Analysis: Claim 1 under its broadest reasonable interpretation recites a series of mental processes. But for the generic computer component language, the limitations in the context of this claim encompass the following mental processes:
- identify temporal logic that is associated with a controller of a physical system simulation (observation, evaluation, and judgement);
- reformulate the temporal logic as a tree representation, wherein the tree representation corresponds to a maximum and minimum number of computations associated with the temporal logic (observation, evaluation, and judgement);
- generate a second neural network based on the tree representation, the second neural network being different from the first neural network (observation, evaluation, and judgement; a single-layer perceptron with linear activation, which is a neural network, can be fully represented by the formula y = mx + b, which can readily be generated entirely in the mind, with or without the assistance of tools such as pen and paper);
- generate, with the second neural network, a robustness metric of the first neural network (observation, evaluation, and judgement).
Therefore, claim 1 recites an abstract idea, which is a judicial exception.

Step 2A Prong Two Analysis: Claim 1 recites the additional elements "A testing system, comprising: at least one processor; and at least one memory having a set of instructions, which when executed by the at least one processor, causes the testing system to", "wherein the controller is a first neural network", and "train the first neural network until a threshold is met" (which merely describes the generic training required by generic neural networks).
However, these additional features are computer components recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words "apply it" (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea does not integrate the judicial exception into a practical application (see MPEP 2106.05(f)). Examiner also notes that although generating and using a second neural network is interpreted as a mental process, even if for the sake of argument it were interpreted as more than a mental process (which Examiner does not concede), the claim limitation at best amounts to mere instructions to apply the judicial exception using generic computer components (the generic second neural network). Therefore, claim 1 is directed to a judicial exception.

Step 2B Analysis: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 1 amount to no more than mere instructions to apply the judicial exception using a generic computer component. For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claims 8 and 15, which recite a system and a method, respectively, as well as to dependent claims 3-7, 10-14, and 17-20.
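The examiner's single-layer perceptron remark in the Step 2A Prong One analysis can be made concrete. The sketch below is an editorial illustration (not part of the Office Action itself): a one-neuron "network" with a linear activation is literally the line y = mx + b.

```python
# Editorial illustration (not part of the Office Action): a single-layer
# perceptron with one input, one weight m, one bias b, and a linear
# (identity) activation computes exactly the line y = m*x + b.

def linear_perceptron(x: float, m: float, b: float) -> float:
    """Forward pass of a one-neuron, linear-activation 'network'."""
    return m * x + b

# Evaluating the "network" is ordinary arithmetic that could be done mentally:
print(linear_perceptron(x=2.0, m=3.0, b=1.0))  # prints 7.0
```

This is the basis for the examiner's position that such a network can be "generated entirely in the mind"; whether that characterization extends to the claimed deep networks is the disputed point.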
The additional limitations of the dependent claims are addressed briefly below.

Dependent claims 3, 10, and 17 recite additional insignificant extra-solution activity of gathering data (see MPEP 2106.05(g)), "receive a first output from the first neural network," which is well-understood, routine, and conventional in the art (see MPEP 2106.05(d)(II)(i)), as well as additional observation, evaluation, and judgement: "generate the robustness metric based on the first output from the first neural network."

Dependent claims 4, 11, and 18 recite additional insignificant extra-solution activity, "learner is a convolutional neural network," which amounts to selection of a data type.

Dependent claims 5, 12, and 19 recite additional observation, evaluation, and judgement: "the controller is a continuous time feedback control system."

Dependent claims 6, 13, and 20 recite additional insignificant extra-solution activity: "verify if the controller satisfies the temporal logic and if the controller does not satisfy the temporal logic, execute a Lipschitz constant analysis to verify whether the controller implements the temporal logic."

Dependent claims 7, 14, and 20 recite additional observation, evaluation, and judgement, "the temporal logic defines one or more of task objectives or safety constraints," as well as additional instructions to apply the judicial exception using generic computer components: "the first neural network is a deep neural network and the second neural network is a rectified linear activation function Feed Forward Neural Network."

Therefore, when considering the elements separately and in combination, they do not add significantly more to the inventive concept. Accordingly, claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-5, 8, 10-12, 15, and 17-19 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Gonzalez (US20200379893A1).

Regarding claim 1, Gonzalez teaches: A testing system, comprising: at least one processor; and at least one memory having a set of instructions, which when executed by the at least one processor, causes the testing system to: ([¶0006] "the present disclosure is directed to an apparatus for synthesizing parameters for control of a closed loop system based on a differentiable simulation model of the closed loop system. The apparatus having a memory and one or more processors coupled to the memory."
[¶0007] "The program code is executed by a processor and includes program code to determine requirements/specifications for the closed loop system in signal temporal logic (STL)");

identify temporal logic that is associated with a controller of a physical system simulation ([¶0006] "The processor(s) is configured to determine requirements/specifications for the closed loop system in signal temporal logic (STL)"; [¶0024] "a simulation-driven framework to automatically synthesize parameters of control software, including but not limited to cars, airplanes, or robots, such that the overall system satisfies specifications given in signal temporal logic (STL)");

wherein the controller is a first neural network ([¶0026] "The parametric control law may be based on one of a variety of parametric templates including, […] neural networks");

reformulate the temporal logic as a tree representation, wherein the tree representation corresponds to a maximum and minimum number of computations associated with the temporal logic ([¶0084] "STL can be represented using a parse tree where each node represents an operation"; [¶0108] "This topological ordering of G given ϕ is precisely governed by Oϕ, the post-order traversal of the parse tree generated by ϕ."; [¶0126] "the robustness trace can be computed for each term in the robustness formula and the appropriate max and min functions can be taken to obtain the robustness trace for the Until and Then operations. The outputs of the temporal graph are the elements of the robustness trace but in reverse". Gonzalez explicitly reformulates STL into a parse tree representation that governs evaluation ordering for the constructed robustness computation graph. Gonzalez also ties parse tree traversal to the min/max robustness calculation formulas, where the tree's operator structure (nodes as operators) corresponds to the number of computations);

generate a second neural network based on the tree representation, the second neural network being different from the first neural network ([¶0112] "Computing the robustness of these operators rely only on elementary operations, so constructing Gφ (i) is straight-forward. To compute the robustness trace (e.g., construct the graph Gφ), Gφ (i) is repeated over the timed trace. Inspired by recurrent neural networks and their ability to effectively process sequential data, a recurrent computation graph model is used to compute the robustness, and robustness trace of the ⋄ (eventually) and □ (always) operators. This structure can be leveraged and extended to compute the U (Until) and T (Then) operator."; [¶0137] "using G and the differentiable approximations. A built-in auto-differentiation functionality in many machine learning (ML) toolboxes can be used to backpropagate on the computation graph". Gonzalez explicitly states that the computation graph G is a trained recurrent computation graph inspired by recurrent neural networks, such that G is interpreted as the second neural network);

generate, with the second neural network, a robustness metric of the first neural network based on at least one trajectory received from the first neural network ([¶0112], quoted above; [¶0108] "This topological ordering of G given ϕ is precisely governed by Oϕ, the post-order traversal of the parse tree generated by ϕ." Gonzalez explicitly uses the second neural network G to generate a robustness metric based on the trajectory (the topological ordering governed by ϕ). See also FIG. 5C, which is visually indistinguishable from a neural network.);

and train the first neural network until a threshold is met ([¶0137] "using G and the differentiable approximations. A built-in auto-differentiation functionality in many machine learning (ML) toolboxes can be used to backpropagate on the computation graph"; [¶0061] "the robustness of the STL requirements are backpropagated to update the controller parameters"; [¶0062] "At block 412, a dynamic constraint solver (e.g., dReach) is used to provide a formal proof that the resulting controller satisfies its specification. At block 414, it is determined whether the proof succeeds. If the proof succeeds, the process is terminated. Otherwise, the process continues to block 406 to continue training the controller. For example, dReach provides an example of a violation, which is used as an example simulation to continue training the controller.").

Regarding claim 3, Gonzalez teaches: The testing system of claim 1, wherein to generate, with the second neural network, the robustness metric, the instructions of the at least one memory, when executed, cause the testing system to: receive a first output from the first neural network; and generate the robustness metric based on the first output from the first neural network. (See FIG. 4A. In Gonzalez, the neural network output is control signals which drive the physical simulation. The plant trajectory is evaluated against the STL specification to compute robustness, which is then backpropagated through the differentiable computation graph into the neural network parameters.)

Regarding claim 4, Gonzalez teaches: The testing system of claim 3, wherein the instructions of the at least one memory, when executed, cause the testing system to: control the physical system simulation with the first neural network. ([¶0026] "The parametric control law may be based on one of a variety of parametric templates including, […] neural networks"; [¶0044] "The inputs 206 to the closed loop system model 200 are exogenous inputs to the plant model 202 (such as ambient temperature, atmospheric pressure, driver input, pilot commands, etc.), and outputs of the plant model 202 generally include controlled signals of the plant model 202. In general, the closed loop model 200 also has a number of parameters including initial conditions of various state-carrying elements in the model. This includes initial values for memory elements in the controller model 204 and the initial configuration for the physical elements in the plant model 202").

Regarding claim 5, Gonzalez teaches: The testing system of claim 1, wherein the controller is a continuous time feedback control system. ([¶0027] "the differential simulation model is a continuous-time model"; [¶0059] "it is assumed that the model of the physical component (e.g., the plant 416 and the controller illustrated in FIG. 4B) is available as a differentiable model. This is possible if either the plant model is a continuous-time model, x′=f(x, u) (as shown in FIG. 4B), or if it is given as a discrete update equation that is nonetheless differentiable.").

Regarding claims 8 and 10-12: claims 8 and 10-12 are substantially similar to claims 1 and 3-5. Therefore, the rejections of claims 1 and 3-5 also apply to claims 8 and 10-12.
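The parse-tree robustness computation described in the Gonzalez citations above follows standard STL robustness semantics, where Boolean and temporal operators reduce to max/min over a signal trace. The sketch below is an editorial illustration of that idea, not code from the Gonzalez reference; the node structure and names are assumptions.

```python
# Editorial sketch of STL robustness via post-order parse-tree evaluation
# (standard STL robustness semantics; not code from the Gonzalez reference).
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    op: str                                   # 'pred', 'and', 'or', 'always', 'eventually'
    children: List["Node"] = field(default_factory=list)
    pred: Optional[Callable[[float], float]] = None  # margin function for 'pred' nodes

def robustness(node: Node, trace: List[float], t: int = 0) -> float:
    """Post-order traversal: evaluate children first, then combine with min/max."""
    if node.op == "pred":
        return node.pred(trace[t])
    if node.op == "and":
        return min(robustness(c, trace, t) for c in node.children)
    if node.op == "or":
        return max(robustness(c, trace, t) for c in node.children)
    if node.op == "always":       # must hold at every remaining step -> min
        return min(robustness(node.children[0], trace, k) for k in range(t, len(trace)))
    if node.op == "eventually":   # must hold at some remaining step -> max
        return max(robustness(node.children[0], trace, k) for k in range(t, len(trace)))
    raise ValueError(f"unknown operator: {node.op}")

# "always (x > 0.5)": robustness is the worst-case margin over the trace.
phi = Node("always", [Node("pred", pred=lambda x: x - 0.5)])
print(robustness(phi, [1.0, 0.8, 0.6]))  # smallest margin, about 0.1
```

A positive value indicates the trace satisfies the specification, with the magnitude measuring the margin; this is the quantity that the cited computation graph produces and backpropagates into the controller parameters.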
Regarding claims 15 and 17-19: claims 15 and 17-19 are directed towards the method performed by claims 1 and 3-5. Therefore, the rejections applied to claims 1 and 3-5 also apply to claims 15 and 17-19.

Claims 6 and 13 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Gonzalez and Akella ("Disturbance Bounds for Signal Temporal Logic Task Satisfaction: A Dynamics Perspective", 2018).

Regarding claim 6, Gonzalez teaches: The testing system of claim 1, wherein the instructions of the at least one memory, when executed, cause the testing system to: verify if the controller satisfies the temporal logic ([¶0023] "By using computation graphs, state-of-the-art machine learning tools are leveraged to create an efficient framework for evaluating the robustness of STL formulas." See also FIG. 4A. The robustness is a verification metric to verify the controller (the neural network) against the STL specification (the temporal logic).).

However, Gonzalez does not explicitly teach: if the controller does not satisfy the temporal logic, execute a Lipschitz constant analysis to verify whether the controller implements the temporal logic.

Akella, in the same field of endeavor, teaches this limitation ([Abstract] "When these disturbances enter the dynamics linearly, however, our work determines a two-norm disturbance-bound rejectable by a system's controller without requiring specific knowledge of these disturbances beforehand"; [p. 1 §1] "Our contribution is twofold. First, we construct two optimization problems that each generate two-norm disturbance-bounds rejectable by a system's controller while it steers its system to satisfy its specification. Each optimization problem focuses on a specific subset of Signal Temporal Logic, and we use their solutions to construct our system-level bound. Secondly, we show that our generated bound is accurate albeit conservative, as it depends on Lipschitz constants for the system dynamics and specification").

Gonzalez and Akella are both directed towards improving the robustness of STL controllers and are therefore analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Gonzalez with the teachings of Akella by using a Lipschitz constant analysis to quantify and certify the maximum allowable disturbance under which the STL controller satisfies the specification. Akella provides additional motivation for the combination ([Abstract] "determination of such a disturbance bound offers a better understanding of the robustness with which a given controller achieves a specified task"). This motivation for combination also applies to the remaining claims which depend on this combination.

Regarding claim 13: claim 13 is substantially similar to claim 6. Therefore, the rejection applied to claim 6 also applies to claim 13.

Claims 7 and 14 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Gonzalez and Ghosh (US11651227B2).

Regarding claim 7, Gonzalez teaches: The testing system of claim 1, wherein: the temporal logic defines one or more of task objectives or safety constraints ([¶0044] "The inputs 206 to the closed loop system model 200 are exogenous inputs to the plant model 202 (such as ambient temperature, atmospheric pressure, driver input, pilot commands, etc.), and outputs of the plant model 202 generally include controlled signals of the plant model 202. In general, the closed loop model 200 also has a number of parameters including initial conditions of various state-carrying elements in the model. This includes initial values for memory elements in the controller model 204 and the initial configuration for the physical elements in the plant model 202").

However, Gonzalez does not explicitly teach: the first neural network is a deep neural network and the second neural network is a rectified linear activation function Feed Forward Neural Network.

Ghosh, in the same field of endeavor, teaches both limitations ([Col. 10 l. 16-3] "CNN+DNN 242 may be made up of one or more ReLu-activated convolutional layers and one or more ReLu-activated dense layers"; the CNN and DNN with dense layers are interpreted as feed forward neural networks).

Gonzalez and Ghosh are both directed towards using neural networks as a controller for STL systems and are therefore analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Gonzalez with the teachings of Ghosh by using a deep feedforward neural network (DNN) with ReLU activation. While one of ordinary skill in the art would recognize that neural networks are predominantly feedforward and ReLU activations are very common, Ghosh provides additional motivation for the combination ([Col. 4 l. 9-24] "the deep neural network model can satisfy safety constraints or other constraints inherent to the system domain for a system that uses the deep neural network model. Improving the likelihood that constraints will be satisfied by the model, in operation, may provide certain guarantees of operations by the system resulting in an improved likelihood of reliable and trusted operation."). This motivation for combination also applies to the remaining claims which depend on this combination.
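The "rectified linear activation function Feed Forward Neural Network" recited in claim 7 is a standard construction. The sketch below is an editorial illustration with assumed toy weights, not code from Ghosh or from the application.

```python
# Editorial sketch of a ReLU feed-forward network forward pass (toy weights;
# not from Ghosh or the application). Each hidden layer computes relu(W @ x + b).

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, bias):
    """One fully connected layer: weights is a list of rows, one row per output."""
    return [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, bias)]

def feedforward(x, layers):
    """Dense + ReLU for hidden layers, dense only for the output layer."""
    for weights, bias in layers[:-1]:
        x = relu(dense(x, weights, bias))
    weights, bias = layers[-1]
    return dense(x, weights, bias)

# Two inputs -> two hidden ReLU units -> one output.
layers = [
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),  # hidden layer
    ([[1.0, 1.0]], [0.0]),                     # output layer
]
print(feedforward([2.0, 0.5], layers))  # prints [1.5]
```

With these particular weights the network computes |x1 - x2|, illustrating that ReLU units implement piecewise-linear functions; the rejection turns on whether claiming such a conventional architecture patentably distinguishes over the cited combination.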
Regarding claim 14: claim 14 is substantially similar to claim 7. Therefore, the rejection applied to claim 7 also applies to claim 14.

Claim 20 is rejected under 35 U.S.C. §103 as being unpatentable over the combination of Gonzalez, Akella, and Ghosh.

Regarding claim 20, Gonzalez teaches: The method of claim 15, further comprising: verifying if the controller satisfies the temporal logic ([¶0023] "By using computation graphs, state-of-the-art machine learning tools are leveraged to create an efficient framework for evaluating the robustness of STL formulas." See also FIG. 4A. The robustness is a verification metric to verify the controller (the neural network) against the STL specification (the temporal logic).); and wherein the temporal logic defines one or more of task objectives or safety constraints ([¶0044], quoted above for claim 7).

However, Gonzalez does not explicitly teach: if the controller does not satisfy the temporal logic, executing a Lipschitz constant analysis to verify whether the controller implements the temporal logic; or wherein the second neural network is a rectified linear activation function Feed Forward Neural Network.

Akella, in the same field of endeavor, teaches: if the controller does not satisfy the temporal logic, executing a Lipschitz constant analysis to verify whether the controller implements the temporal logic (see the [Abstract] and [p. 1 §1] passages quoted above for claim 6). Gonzalez and Akella are both directed towards improving the robustness of STL controllers and are therefore analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Gonzalez with the teachings of Akella by using a Lipschitz constant analysis to quantify and certify the maximum allowable disturbance under which the STL controller satisfies the specification, with the additional motivation quoted above for claim 6. This motivation for combination also applies to the remaining claims which depend on this combination.

However, the combination of Gonzalez and Akella does not explicitly teach: wherein the first neural network is a Deep Neural Network and wherein the second neural network is a rectified linear activation function Feed Forward Neural Network. Ghosh, in the same field of endeavor, teaches these limitations ([Col. 10 l. 16-3] "CNN+DNN 242 may be made up of one or more ReLu-activated convolutional layers and one or more ReLu-activated dense layers"; the CNN and DNN with dense layers are interpreted as feed forward neural networks). The combination of Gonzalez and Akella, as well as Ghosh, are directed towards using neural networks as a controller for STL systems and are therefore analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Gonzalez and Akella with the teachings of Ghosh by using a feedforward neural network with ReLU activation. While one of ordinary skill in the art would recognize that neural networks are predominantly feedforward and ReLU activations are very common, Ghosh provides additional motivation for the combination ([Col. 4 l. 9-24], quoted above for claim 7). This motivation for combination also applies to the remaining claims which depend on this combination.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK, whose telephone number is (571) 272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIDNEY VINCENT BOSTWICK/
Examiner, Art Unit 2124

/MIRANDA M HUANG/
Supervisory Patent Examiner, Art Unit 2124

Prosecution Timeline

Feb 17, 2023
Application Filed
Oct 03, 2025
Non-Final Rejection — §101, §103
Jan 07, 2026
Examiner Interview Summary
Jan 07, 2026
Applicant Interview (Telephonic)
Jan 12, 2026
Response Filed
Feb 25, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604 — SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING — Granted Feb 24, 2026 (2y 5m to grant)
Patent 12547878 — Highly Efficient Convolutional Neural Networks — Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536426 — Smooth Continuous Piecewise Constructed Activation Functions — Granted Jan 27, 2026 (2y 5m to grant)
Patent 12518143 — FEEDFORWARD GENERATIVE NEURAL NETWORKS — Granted Jan 06, 2026 (2y 5m to grant)
Patent 12505340 — STASH BALANCING IN MODEL PARALLELISM — Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 90% (+38.2%)
Median Time to Grant: 4y 7m
PTA Risk: Moderate
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
