Prosecution Insights
Last updated: April 19, 2026
Application No. 16/288,975

EFFICIENT AND SECURE GRADIENT-FREE BLACK BOX OPTIMIZATION

Final Rejection — §101, §103
Filed: Feb 28, 2019
Examiner: GERMICK, JOHNATHAN R
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 5-6
Median Time to Grant: 4y 2m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 47% (43 granted / 91 resolved; -7.7% vs TC avg)
Interview Lift: +32.1% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 2m typical timeline; 28 applications currently pending
Total Applications: 119 across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 91 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

This action is in response to the claims filed 01/26/2026. Claims 1, 6-8, 13-14 and 19 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/26/2026 have been fully considered but they are not persuasive.

With respect to the §101 rejections: Applicant argues the claims do not recite an abstract idea, appearing to argue that the limitations only involve a mathematical concept, and further argues that the claim limitations are not directed to mental processes. Examiner disagrees. In the updated rejection, many of the limitations of the claim recite mathematical concepts. As noted in the rejection, while no equations are recited, when read in light of the specification, in particular paragraphs 0022-0025, these limitations describe particular mathematical operations. For example, the gradient estimator is implemented according to the equation of paragraph 0022. Paragraphs 0023 and 0024 describe the equations requisite for gradient blending and quantization. Paragraph 0025 describes the equation used to optimize an output of a black box.

Applicant argues the claims integrate the judicial exception because the claims describe an efficient gradient-free optimization approach. Examiner disagrees. While the claim may generally be related to such optimization, the claim does not include additional elements to integrate the improvement. As noted in the rejection, the claim merely includes generic computer technology such as “processors” to implement the recited abstract ideas. The improvements cannot be reflected in the abstract ideas alone.

Applicant argues the claims add a limitation other than what is well-understood, routine, and conventional. Examiner disagrees. The well-understood, routine, and conventional (WURC) consideration overlaps with the insignificant extra-solution activity analysis in Step 2A Prong 2. Examiner has not suggested that any of the additional elements are insignificant extra-solution activity or well-understood, routine, and conventional. Principally, the claims only include additional elements which provide the generic computer technology for performing the recited abstract ideas (MPEP 2106.05(f)). Given that the claim only recites additional elements amounting to “apply it” under 2106.05(f), it is concluded that the limitations are not indicative of an inventive concept nor significantly more. It is important to note the WURC analysis is not pertinent to the recited abstract ideas; rather, it is a consideration applied to elements, or combinations of elements, which were considered insignificant extra-solution activities in Step 2A Prong 2. Therefore, the rejection is maintained.

With respect to the art rejections: Applicant points out that Duchi/Seide/Duchi2/Wang does not teach the cited limitation “using a central difference of the function values and using signs of estimates with a majority vote”. Examiner notes this deficiency is addressed in the updated rejection: Shamir and Bernstein are relied upon for teaching this limitation in the rejection made in view of the combination of Duchi, Seide, Duchi2, Wang, Prashanth, Shamir and Bernstein. Applicant further argues that none of the references, in particular Gao, cures the deficiencies. Examiner disagrees. As noted previously, Shamir and Bernstein are relied upon for teaching this limitation: Shamir describes using a central difference of the function values, while Bernstein describes using a majority vote.
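For orientation before the formal analysis: the claimed method chains four steps, i.e. estimate an average gradient from function values only, blend it with a control variate for variance reduction, binarize the blend, and take a descent step on the black-box output. Below is a minimal sketch of that flow; the function names, constants, and the previous-estimate stand-in for the control variate are illustrative assumptions, not the application's actual equations (spec. ¶¶0022-0025).

```python
# Hypothetical sketch of the claimed flow; every name and constant here is an
# illustrative assumption, not the application's actual equations.
import numpy as np

def optimize_black_box(f, theta, estimate_gradient, steps=200, lr=0.05, beta=0.9):
    g_prev = np.zeros_like(theta)              # stand-in control variate: previous estimate
    for _ in range(steps):
        g = estimate_gradient(f, theta)        # gradient-free average estimate (sketched later)
        g_blend = beta * g + (1 - beta) * g_prev   # variance reduction via gradient blending
        g_prev = g
        theta = theta - lr * np.sign(g_blend)  # binary quantization of the blend, then a descent step
    return theta
```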
Claim Objections

Claims 1, 8 and 14 are objected to because of the following informalities: the claims recite “an output of a block box”; this appears to be a typo and should read “an output of a black box”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6-8, 13, 14 and 19 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more.

Regarding Claims 1, 8 and 14: Under Step 1, claim 1 is directed to a computer-implemented method, which is a process, one of the statutory categories. Claim 8 is directed to a computer program product, which is a machine, one of the statutory categories. Claim 14 is directed to a computer system, which is a machine, one of the statutory categories.

Under Step 2A Prong 1, the claim recites the following limitations, which are considered mathematical calculations: “implementing… an average gradient estimator, using mini-batch samples both with and without replacement in addition to random direction sampling, using a forward difference of function values at multiple random directions, the average gradient estimator implemented under a condition of a unimodal symmetric gradient noise… the average gradient estimator includes using different types of gradient estimators… using a central difference of the function values and using signs of estimates with a majority vote… measuring, by one or more processors, a gap between output of the average gradient estimator and a true gradient via a smoothing function… performing variance reduction via gradient blending with an output of the average gradient estimator using a control variate; and performing binary quantization of a result of the variance reduction… optimizing… an output of a block box by performing gradient descent optimization on a result of the binary quantization”. Specification paragraphs 0022-0025 describe each of the “implementing…” and “performing…” steps as achieved via various mathematical equations; these sections of the specification describe the mathematical applications which implement each of the claim limitations. Therefore, the claim recites an abstract idea.

Under Step 2A Prong 2, the claim recites the following additional element(s). From claim 1: “by one or more processors”, which amounts to a description that merely makes use of or applies the abstract idea under MPEP 2106.05(f)(1). From claim 8: “computer program product comprising one or more computer readable storage media and program instructions stored on the one or more computer readable storage media to perform operations comprising”, which likewise merely applies the abstract idea under MPEP 2106.05(f)(1). From claim 14: “the computer system comprising a processor set, one or more computer readable storage media and program instructions stored on the one or more computer readable storage media to cause the processor set to perform operations”, which likewise merely applies the abstract idea under MPEP 2106.05(f)(1). Therefore, the claim is directed to a judicial exception.
Under Step 2B, the recited additional elements, considered alone or in combination, neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Regarding Claims 6/13/19: The claims depend from claims 1/8/14 and introduce an additional abstract idea: “evaluating an error propagation from a sign of the output of the average gradient estimator to the true gradient.” Further, this limitation can be considered to recite a mental process because it can be performed in the mind; an evaluation of error is a mental evaluation performed in the mind. Under Step 2A Prong 1, these limitations correspond to mental evaluations and/or mathematical calculations. The claims do not recite any additional elements beyond those identified in the parent claims, and those additional elements do not integrate the abstract idea into a practical application nor provide significantly more.

Regarding Claim 7: The claim depends from claim 1 and recites the following additional element, beyond those already identified in the parent claim: “embodied in a cloud-computing environment”. This is a mere instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Such additional elements do not integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-8, 13-14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Duchi et al., “Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations”, hereinafter Duchi; further in view of Seide et al., US 2017/0308789 A1, hereinafter Seide; further still in view of Duchi et al., “Randomized Smoothing for Stochastic Optimization”, hereinafter Duchi2; further in view of Wang, “Computationally Feasible Near-Optimal Subset Selection for Linear Regression under Measurement Constraints”; further in view of Ohad Shamir, “An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback”, hereinafter Shamir; further in view of Prashanth, “Adaptive System Optimization Using Random Directions Stochastic Approximation”; and further in view of Bernstein et al., “SIGNSGD: Compressed Optimisation for Non-Convex Problems”, hereinafter Bernstein.
Regarding Independent Claim 1:

Duchi teaches implementing… an average gradient estimator… using a forward difference of function values at multiple random directions (Section II.A ¶01: “Given a smoothing constant u, vector z, and observation x, we define the directional gradient estimate at the point θ as: [equation image]”; pg 4 ¶005: “We may also use multiple independent random samples Z_{t,i}, i = 1, 2, ..., m, in the construction of the gradient estimator (6) to obtain more accurate estimates of the gradient via [equation image]”. Examiner notes that the gradient estimator can be formulated such that it is the average of m random samples. The equation G_sm corresponds to a forward difference of function values because it is the difference between the forward evaluation of the function and the current evaluation. Pg 2 ¶01: “for small non-zero scalar u and a vector Z ∈ R^d, the quantity (F(θ + uZ; x) − F(θ; x))/u approximates a directional derivative of F(θ; x)”. The random sampling of the Z vector corresponds to random directions in the gradient space, formalized as points on a scalable sphere.); wherein the average gradient estimator includes using different types of gradient estimators (Abstract: “We consider derivative-free algorithms for stochastic and nonstochastic convex optimization problems that use only function values rather than gradients”. Examiner notes that Duchi presents a plurality of gradient estimators.)
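The quoted passages pin down the estimator's shape: average m forward differences (F(θ + uZ_i; x) − F(θ; x))/u, each scaled along its random direction Z_i. A hedged sketch of that form follows; Duchi's dimension-dependent scaling constants are omitted here.

```python
# Sketch of an averaged forward-difference estimator in the shape the quoted
# passages describe; Duchi's dimension-dependent scaling constants are omitted.
import numpy as np

def forward_difference_estimator(F, theta, x, u=1e-3, m=20, rng=None):
    rng = rng or np.random.default_rng()
    base = F(theta, x)                          # one function value at the current point
    est = np.zeros_like(theta)
    for _ in range(m):
        z = rng.standard_normal(theta.shape)    # random direction Z_i
        est += (F(theta + u * z, x) - base) / u * z  # forward-difference directional estimate
    return est / m                              # average over the m samples
```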
Duchi does not explicitly teach: by one or more processors… using (i) mini-batch samples both with and without replacement, in addition to random direction sampling; … the average gradient estimator is implemented under a condition of a unimodal symmetric gradient noise; … using a central difference of the function values and using signs of estimates with a majority vote; … measuring, by one or more processors, a gap between output of the average gradient estimator and a true gradient via a smoothing function; … performing, by one or more processors, variance reduction via gradient blending with the output of the average gradient estimator using a control variate; … performing, by one or more processors, binary quantization of a result of the variance reduction; and … optimizing, by one or more processors, an output of a block box by performing gradient descent optimization on a result of the binary quantization.

Seide, however, when addressing issues related to gradient updates in optimization problems, teaches a computer-implemented method… by one or more processors (¶0004: “This disclosure describes systems, methods, and computer-readable media for mathematically optimizing solutions to computational models”; ¶0052: “In various example, the processing unit(s) 306 can access the module(s) on the computer-readable media 312”); performing, by one or more processors, variance reduction via gradient blending with the output of the average gradient estimator using a control variate (¶0040: “Quantization 226 includes tracking ‘error values,’ values representing the difference between gradient values and their quantized representations, and determining quantization values based partly on the error values [control variate]. This advantageously permits maintaining the accuracy of the training process by spreading [blending] quantization error over successive gradient values”. Examiner notes that, by definition, control variates are variance-reduction techniques that use information about error estimates to reduce the error of another estimate. Spreading the gradient quantization error over successive gradients corresponds to “gradient blending” the output of the gradient estimator using a control variate.); performing, by one or more processors, binary quantization of a result of the variance reduction (¶0040: “Quantization 226 can include transmitting representations (e.g., approximations) of the gradient values from one node, the representations using fewer bits than the gradient values, e.g., fewer than 32 bits, e.g., 1 bit [binary]”. Examiner notes that Equations (10) and (11) [equation image] show that the control variate, Δ, is combined with the gradient, wherein the result of the variance reduction is quantized by Q() to produce the updated representations.); and optimizing, by one or more processors, an output of a block box by performing gradient descent optimization on a result of the binary quantization (¶0057: “The update-determining module is configured to determine the modification values [to optimize] using a stochastic gradient descent algorithm. Updating module 328 modifies the stored computational model, e.g., of DNN [black box] 304 based on the gradients… The quantization module 324 and the transferring module 330 cooperate to provide the determined gradients [results of binary quantization] as needed to the nodes”. Examiner notes that the binary quantization values correspond to the quantized gradients discussed previously.)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a method for calculating gradients for optimization problems using derivative-free gradients with binary quantization and variance reduction, via blending prior-iteration quantization errors with the present quantization calculation, as taught by Seide, into the disclosed invention of Duchi. One of ordinary skill in the art would have been motivated to make this modification because “This permits model updates to be effectively computed in a data-parallel manner across a large number of nodes. This in turn reduces the elapsed time required to train the DNN” (Seide ¶0042).
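Seide's quoted mechanism is commonly described as error feedback: keep the residual between each gradient and its 1-bit representation, and fold it into the next step so quantization error spreads over successive gradients. A hedged sketch under that reading follows; the single shared scale per vector is an assumption, not Seide's exact equations (10) and (11).

```python
# Error-feedback 1-bit quantization in the spirit of the quoted Seide passage;
# the single shared scale per vector is an illustrative assumption.
import numpy as np

class OneBitQuantizer:
    def __init__(self, dim):
        self.residual = np.zeros(dim)          # quantization error carried across steps

    def quantize(self, grad):
        corrected = grad + self.residual       # blend in the previous step's error (control-variate role)
        scale = np.mean(np.abs(corrected))     # one magnitude shared by all coordinates
        q = scale * np.sign(corrected)         # 1-bit-per-coordinate representation
        self.residual = corrected - q          # remember what the 1-bit form lost
        return q
```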
Duchi/Seide does not explicitly teach: using (i) mini-batch samples both with and without replacement, in addition to random direction sampling; … the average gradient estimator is implemented under a condition of a unimodal symmetric gradient noise; … using a central difference of the function values and using signs of estimates with a majority vote; … measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Duchi2, however, when addressing issues related to stochastic optimization for non-smooth gradient estimations, teaches the average gradient estimator under a condition of a unimodal symmetric gradient noise (Section 2.2: “Our algorithm is based on observations of stochastically perturbed gradient information at each iteration… it performs the following three steps… (1) Draws random variables Z_{i,t}, i = 1, ..., m, in an i.i.d. manner according to the distribution µ… (2) Queries the oracle at the m points y_t + u_t Z_{i,t}… (3) Computes the average g_t”. Examiner notes that this average gradient estimator, g_t, is computed based on random sampling from a distribution µ, wherein the samples drawn from the distribution add noise to the calculation of the gradient estimator. Section 2.4: “we show concrete convergence bounds for algorithms using different choices of the smoothing distribution µ… Corollary 2. Let µ be the d-dimensional normal distribution with zero-mean and identity covariance I [or variance of 1 for all dimensions in d]”. Examiner notes that one such distribution defined in d-dimensional space is a standard normal distribution. By definition, a normal distribution with zero mean and a variance of 1 is a unimodal symmetric distribution. Therefore, the smoothing distribution µ sampled by the gradient calculation corresponds to a distribution characterized by unimodal symmetric gradient noise.)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a unimodal symmetric distribution to define the random perturbations of an average gradient estimator for non-smooth optimization problems, as taught by Duchi2, into the disclosed invention of Duchi/Seide. One of ordinary skill in the art would have been motivated to make this modification because it is advantageous to sample using normal random variables rather than a uniform distribution: “no normalization of Z is necessary… The lack of normalization is a useful property in very high dimensional scenarios, such as statistical natural language processing… it is much easier to sample from B∞(0, u) in high dimensional settings, especially sparse data scenarios such as NLP where only a few coordinates of the random variable Z are needed.” (Duchi2, Section 2.4 ¶04)

Duchi/Seide/Duchi2 does not explicitly teach: using (i) mini-batch samples both with and without replacement, in addition to random direction sampling; … using a central difference of the function values and using signs of estimates with a majority vote; … measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Wang, however, when addressing mini-batch sampling, teaches using mini-batch samples both with and without replacement (pp. 6-7, Section 3.2: “We introduce our first post-processing procedure as a subsampling algorithm, with pseudocode displayed in Algorithm 1. The algorithm is as simple as one can hope for: after obtaining the optimal solution π* of the continuous optimization problems, which by formulation represents a probabilistic distribution over the n design points (rows in X), the algorithm samples with or without replacement (depending on the model settings)”).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate sampling with and without replacement into the disclosed invention of Duchi/Duchi2/Seide. One of ordinary skill in the art would have been motivated to make this modification because it is advantageous to sample using either replacement style for data retrieval, as noted in Wang: “Compared to Theorem 3.1, it can be seen that Algorithm 1 achieves near-optimal statistical performance in terms of estimation” (Wang, pg 7).
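In the quoted passage, Wang's Algorithm 1 draws mini-batch indices from the learned distribution π* either with or without replacement. In numpy terms that choice is a single flag; a small sketch, with a uniform placeholder standing in for π*:

```python
# Mini-batch index sampling with and without replacement; pi_star is a
# uniform placeholder for Wang's learned sampling distribution.
import numpy as np

rng = np.random.default_rng(0)
n, batch = 1000, 32
pi_star = np.full(n, 1.0 / n)                                      # placeholder distribution over rows

idx_with = rng.choice(n, size=batch, replace=True, p=pi_star)      # duplicates allowed
idx_without = rng.choice(n, size=batch, replace=False, p=pi_star)  # each row at most once
```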
Duchi/Seide/Duchi2/Wang does not explicitly teach: in addition to random direction sampling; … using a central difference of the function values and using signs of estimates with a majority vote; … measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Prashanth, however, when addressing random direction sampling for gradient estimation, teaches “in addition to random direction sampling” (pg 1, Abstract: “We present new algorithms for simulation optimization using random directions stochastic approximation (RDSA). These include first-order (gradient) as well as second-order (Newton) schemes.”).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate random directions stochastic approximation into the disclosed invention of Duchi/Duchi2/Seide/Wang. One of ordinary skill in the art would have been motivated to make this modification of the gradient estimation because, as noted by Prashanth, “the scheme, also called finite difference stochastic approximation (FDSA), [is] disadvantageous for large parameter dimensions. The random directions stochastic approximation (RDSA) approach… alleviates this problem by requiring two system simulations regardless of the parameter dimension” (Prashanth, pg 1).

Duchi/Seide/Duchi2/Wang/Prashanth does not explicitly teach: using a central difference of the function values and using signs of estimates with a majority vote; … measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Shamir, however, when addressing issues related to zero-order convex optimization, teaches using a central difference of the function values (pg 5: [equation image]. Examiner notes that the gradient update here is based on a difference between two “bandit” random steps on the Euclidean sphere of the objective function; in numerical differentiation this is known as a central difference approximation. Further, Examiner notes that “using” is interpreted to mean including a central difference of the function values.)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a gradient estimator embodied by a finite central-difference function evaluation, as taught by Shamir, into the disclosed invention of Duchi/Seide/Duchi2/Wang/Prashanth. One of ordinary skill in the art would have been motivated to make this modification because “the algorithm and analysis are simpler, and readily extend to non-Euclidean problems. The algorithm is based on a small but surprisingly powerful modification of the gradient estimator” (Shamir, Abstract).
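A central difference queries the objective at two mirrored points, θ + uZ and θ − uZ, so the symmetric evaluations cancel the leading error term of the one-sided estimate. A sketch of a two-point estimator in that shape; the unit-sphere directions and plain averaging are illustrative, not Shamir's exact scaling:

```python
# Two-point (central) difference estimate along random unit directions; the
# constants and plain averaging are illustrative, not Shamir's exact form.
import numpy as np

def central_difference_estimator(f, theta, u=1e-3, m=20, rng=None):
    rng = rng or np.random.default_rng()
    est = np.zeros_like(theta)
    for _ in range(m):
        z = rng.standard_normal(theta.shape)
        z /= np.linalg.norm(z)                  # direction on the Euclidean unit sphere
        est += (f(theta + u * z) - f(theta - u * z)) / (2 * u) * z
    return est / m
```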
Duchi/Seide/Duchi2/Wang/Prashanth/Shamir does not explicitly teach: using signs of estimates with a majority vote; … measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Bernstein, however, when addressing issues related to binary compression of gradient updates, teaches using signs of estimates with a majority vote (pg 6 ¶04: “This is called majority vote, since each worker is essentially voting with its belief about the sign of the true gradient. The parameter server counts the votes, and sends its 1-bit decision back to every worker.” Examiner notes that the majority gradient sign sent back to the workers is then used for the future updates.)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the majority voting scheme for binary gradient updates, as taught by Bernstein, into the disclosed invention of Duchi/Seide/Duchi2/Wang/Prashanth/Shamir. One of ordinary skill in the art would have been motivated to make this modification because “majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence” (Bernstein, Abstract).

Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein does not explicitly teach: measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function.

Gao, however, when addressing issues related to stochastic approximation of objective functions for nonconvex optimization, teaches measuring… a gap between output of the average gradient estimator and a true gradient via a smoothing function (pg 17: “Now based on (52) we define the zeroth-order stochastic gradient of f at point x^k: [equation images]”. Examiner notes that Gao presents an average gradient estimator via a smoothing function, which is utilized for the GADM algorithm. Pg 23: “Since the size of the problem is small, this allows us to compare our solution with the solution obtained from CVX. The test results can be found in Table 2, where ‘GADM’ represents the objective value we discussed above with ‘time’ being the CPU time (in seconds) of GADM, and ‘CVX’ represents the objective value returned by CVX”. Examiner notes that the outputs of both algorithms, the one that uses an estimator and the “true” non-averaged CVX algorithm, are compared in Table 2. Further, Examiner notes that the smoothing function itself, as described in the instant specification (¶¶0035-0036), is an embodiment of the gradient estimator, in which, by virtue of its formulation, there is a “gap” between the estimator and a “true” non-estimated gradient.)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a stochastic zeroth-order smoothing scheme that convolves a random density function with the function in question, as taught by Gao, into the disclosed invention of Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein. One of ordinary skill in the art would have been motivated to make this modification to compute gradients “where only noisy estimations of the gradient or the function values are accessible, yet the flexibility is achieved without sacrificing the computational complexity bounds.” (Gao, Abstract)
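The quoted Bernstein scheme is per-coordinate voting: each worker transmits only the signs of its gradient estimate, and the server returns the per-coordinate majority as a 1-bit decision. A minimal sketch of the aggregation step only, not the paper's full signSGD protocol:

```python
# Per-coordinate majority vote over workers' gradient signs, in the spirit of
# the quoted Bernstein passage; this sketches only the aggregation step.
import numpy as np

def majority_vote_sign(worker_grads):
    votes = np.sign(np.stack(worker_grads))  # shape (workers, dim), entries in {-1, 0, +1}
    return np.sign(votes.sum(axis=0))        # 1-bit decision per coordinate, sent back to workers
```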
Regarding Claim 6: Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein teach the method of claim 1. Gao teaches evaluating an error propagation from a sign of the output of the average gradient estimator to the true gradient (pg 17: “Now based on (52) we define the zeroth-order stochastic gradient of f at point x^k: [equation images]”; pg 23: “Since the size of the problem is small, this allows us to compare our solution with the solution obtained from CVX. The test results can be found in Table 2, where ‘GADM’ represents the objective value we discussed above with ‘time’ being the CPU time (in seconds) of GADM, and ‘CVX’ represents the objective value returned by CVX”. Examiner notes that the average gradient estimator as claimed propagates error due to sign error in quantization.)
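Read at its simplest, claim 6's evaluation asks how often the sign of the estimator's output disagrees with the sign of the true gradient. One illustrative metric, assuming access to a reference gradient; the metric choice is an assumption, not the application's equation:

```python
# One illustrative way to evaluate sign-error propagation: the fraction of
# coordinates where the averaged estimate's sign disagrees with the true gradient.
import numpy as np

def sign_disagreement_rate(estimates, true_grad):
    avg_sign = np.sign(np.mean(estimates, axis=0))   # sign of the average gradient estimator
    return np.mean(avg_sign != np.sign(true_grad))   # per-coordinate disagreement rate
```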
Regarding Claim 7: Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein teach the method of claim 1. Further, Seide teaches embodied in a cloud-computing environment (¶0021: “FIG. 1 shows an example environment 100… In some examples, the various devices or components of environment 100 include computing device(s)… In some examples, computing devices 102 and 104 can communicate with external devices via network”. Examiner notes that devices in communication with external computing devices are analogous to a cloud computing architecture.)

Regarding Claims 8 and 14: Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein teach the method of claim 1, and claims 8 and 14 are rejected for the reasons set forth in the rejection of claim 1. Further, Seide teaches [from claim 8] a computer program product comprising one or more computer-readable storage media and program instructions stored on the one or more computer-readable storage media to perform operations, and [from claim 14] a computer system comprising a processor set, one or more computer-readable storage media, and program instructions stored on the one or more computer-readable storage media to cause the processor set to perform operations (¶0052: “In various example, the processing unit(s) 306 can access the module(s) on the computer-readable media 312”; see also claim 11: “A computer-readable medium having thereon computer-executable instructions”).

Regarding Claims 13 and 19: These claims are rejected for the reasons set forth in the rejection of claim 6, in view of Duchi/Seide/Duchi2/Wang/Prashanth/Shamir/Bernstein.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNATHAN R GERMICK, whose telephone number is (571) 272-8363. The examiner can normally be reached Monday-Friday, 7:30 am - 4:00 pm (EST). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/J.R.G./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Feb 28, 2019
Application Filed
Apr 05, 2021
Non-Final Rejection — §101, §103
Apr 27, 2021
Response Filed
May 10, 2021
Final Rejection — §101, §103
May 18, 2021
Response after Non-Final Action
Jun 02, 2021
Response after Non-Final Action
Jun 08, 2021
Request for Continued Examination
Jun 13, 2021
Response after Non-Final Action
Oct 22, 2025
Non-Final Rejection — §101, §103
Jan 14, 2026
Examiner Interview Summary
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 26, 2026
Response Filed
Mar 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566962
DITHERED QUANTIZATION OF PARAMETERS DURING TRAINING WITH A MACHINE LEARNING TOOL
2y 5m to grant · Granted Mar 03, 2026
Patent 12566983
MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12554977
DEEP NEURAL NETWORK FOR MATCHING ENTITIES IN SEMI-STRUCTURED DATA
2y 5m to grant · Granted Feb 17, 2026
Patent 12443829
NEURAL NETWORK PROCESSING METHOD AND APPARATUS BASED ON NESTED BIT REPRESENTATION
2y 5m to grant · Granted Oct 14, 2025
Patent 12443868
QUANTUM ERROR MITIGATION USING HARDWARE-FRIENDLY PROBABILISTIC ERROR CORRECTION
2y 5m to grant · Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 47%
With Interview: 79% (+32.1%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 91 resolved cases by this examiner. Grant probability is derived from the career allow rate.
