Prosecution Insights
Last updated: April 19, 2026
Application No. 18/184,791

METHOD FOR DETERMINING OPTIMIZED BASIS FUNCTIONS FOR DESCRIBING TRAJECTORIES

Non-Final OA • §101, §103, §112, §DP
Filed: Mar 16, 2023
Examiner: HOOVER, BRENT JOHNSTON
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (297 granted / 359 resolved), +27.7% vs Tech Center average — grants above average
Interview Lift: +22.7% higher allowance in resolved cases with an interview (strong)
Typical Timeline: 3y 5m average prosecution
Career History: 383 total applications across all art units (359 resolved, 24 currently pending)
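
For reference, the headline figures above fit together with simple arithmetic. This is inferred from the displayed values; the dashboard's exact formulas are not shown on this page.

```python
# Relationships implied by the displayed examiner stats (inferred, not documented).
granted, resolved, pending = 297, 359, 24
allow_rate = granted / resolved          # 0.827... -> rendered as "83%"
total_applications = resolved + pending  # 359 + 24 = 383 across all art units
implied_tc_avg = 82.7 - 27.7             # "+27.7% vs TC avg" implies a ~55% TC average
```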

Statute-Specific Performance

§101: 31.4% (-8.6% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 359 resolved cases
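
The "vs TC avg" deltas read as percentage-point differences, and each statute's rate minus its delta recovers the same 40.0% baseline, consistent with the caption's note that the Tech Center average is an estimate. A quick check of the displayed numbers (an observation, not a documented formula):

```python
# Each rate minus its delta lands on the same implied 40.0% Tech Center baseline.
rate = {"§101": 31.4, "§103": 33.3, "§102": 9.8, "§112": 16.8}
delta = {"§101": -8.6, "§103": -6.7, "§102": -30.2, "§112": -23.2}
for statute in rate:
    print(statute, round(rate[statute] - delta[statute], 1))  # 40.0 every time
```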

Office Action

§101, §103, §112, §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the original application filed on 3/16/2023 and the Preliminary Amendment filed on 11/14/2023. Acknowledgment is made with respect to a claim of priority to German Application DE10202202718.3 filed on 3/21/2023.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 15-28 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

None of the subject matter that is claimed in any of independent claims 15, 25, 26, 27, or 28 and their dependent claims is described in the originally filed specification of the present application or in German Application DE10202202718.3, the application to which the present application claims priority. For example, the originally filed specification does not disclose a latent representation or an invertible factorization model. Consequently, claims 15-28 are rejected under 35 USC § 112(a) for a lack of written description.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 22 recites "(F1, F2, F3, F4)." While these terms are understood to be at least broadly related to the claimed "function," the terms are not defined and it is not clear what they are intended to mean.
The use of subscripts suggests that these functions are related to each other, but no relationship is specified. The use of letters is known in the art to indicate a variable. However, a letter may also serve to indicate a constant value (e.g., Euler's number "e" is a numerical constant). While the specification includes a description of potentially similar functions at p. 15, lines 23-24 (i.e., "a plurality of functions (F1, F2, F3, F4)"), limitations from the specification are not read into the claims. Regardless, the specification fails to provide a clear definition for the terms as used in the claims.

Additionally, while the claim limitation is directed to a single function (i.e., "a function (F1, F2, F3, F4)"), p. 15, lines 23-24 of Applicant's specification includes a description of "a plurality of functions." In this respect, when relying upon the limited disclosure of p. 15, lines 23-24, it is not clear if the claim limitation is referring to a function F1, a function F2, a function F3, or a function F4, or alternatively, if the group of functions is provided as a single function. Clarification is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 15-28 are each provisionally rejected on the ground of nonstatutory double patenting as being anticipated by a claim of US Patent Application 17/448,144, as follows:

Claim 15: anticipated by claim 15 of 17/448,144, which discloses the determining a latent representation, the invertible factorization model, and the determining the output signal as claimed.
Claim 16: anticipated by claim 15, which discloses the plurality of functions as claimed.
Claim 17: anticipated by claim 16, which discloses the classifier training as claimed.
Claim 18: anticipated by claim 17, which discloses the functions as claimed.
Claim 19: anticipated by claim 18, which discloses the encoder training as claimed.
Claim 20: anticipated by claim 19, which discloses the decoder training as claimed.
Claim 21: anticipated by claim 20, which discloses the functions as claimed.
Claim 22: anticipated by claim 21, which discloses the functions as claimed.
Claim 23: anticipated by claim 22, which discloses the linear classifier as claimed.
Claim 24: anticipated by claim 23, which discloses the training as claimed.
Claim 25: anticipated by claim 24, which discloses the invertible factorization model as claimed.
Claim 26: anticipated by claim 25, which discloses the determining a latent representation, the invertible factorization model, and the determining the output signal as claimed.
Claim 27: anticipated by claim 26, which discloses the training system, the determining a latent representation, the invertible factorization model, and the determining the output signal as claimed.
Claim 28: anticipated by claim 27, which discloses the training system, the determining a latent representation, the invertible factorization model, and the determining the output signal as claimed.
This is a provisional double patenting rejection because the claims of the copending application (US Patent Application 17/448,144) have not been patented.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 25-27 are rejected under 35 U.S.C. § 101 because the claimed invention is directed towards non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claimed invention is directed towards software per se. With respect to independent claims 25-27, the claims fail to recite any structure to implement the model (claim 25), the classifier (claim 26), or the training system (claim 27). Under a broadest reasonable interpretation of the claim language, the model (claim 25), the classifier (claim 26), and the training system (claim 27) are all therefore directed towards software per se. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15, 16, 18-19, 23, 25, 26, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Esser et al. ("A Disentangling Invertible Interpretation Network for Explaining Latent Representations", Apr. 27, 2020, arXiv:2004.13166v1, pp. 1-19, hereinafter "Esser") in view of Newby et al. (US 20190311482 A1, hereinafter "Newby") and Kingma et al. ("Glow: Generative Flow with Invertible 1x1 Convolutions", Jul. 10, 2018, arXiv:1807.03039v2, pp. 1-15, hereinafter "Kingma").

In regard to claim 15, Esser discloses:

15. A computer-implemented method for determining an output signal for an input signal using a classifier, wherein the output signal characterizes a classification of the input signal, the method comprising the following steps:

See Esser, last paragraph of section 1 on p. 2, "we present a new approach to the interpretability of neural networks, which can be applied to arbitrary existing models without affecting their performance."

determining a latent representation based on the input signal using an invertible factorization model comprised in the classifier, wherein the latent representation includes a plurality of factors, and

See Esser, Fig. 1 on p. 1, depicting "Invertible Interpretation Network T" which provides a latent representation based on an input signal. Also see p. 3, "To turn z into an interpretable representation, we aim to translate the distributed representation z to a factorized representation where each of the K + 1 factors z̃_k, with k = 0, …, K, represents an interpretable concept."

wherein the invertible factorization model includes: a plurality of functions,

See Esser, section 3.1. Also see Esser, Fig. 11 on p. 11, section A.1, depicting Network T as a sequence of coupled blocks providing functions for splitting an input into a factorized representation.

Esser does not expressly disclose: wherein each function from the plurality of functions is continuous and continuously differentiable. However, this is taught by Newby. See Newby ¶ 0030, "Instead, we use a function with a similar shape that is also continuously differentiable, which helps minimize training iterations where the model is stuck in local minima." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Newby's function with Esser's map in order to help minimize training iterations as suggested by Newby.

Esser also discloses: wherein the function is further configured to accept either the input signal or at least one factor of the plurality of factors provided by another function of the plurality of functions as an input.

See Esser, Fig. 11 on p. 11, depicting a network T which utilizes functions for processing factors from an input. Also see p. 4, right column, "To be able to compute the Jacobian determinant efficiently, we follow previous works [22] and build T based on ActNorm, AffineCoupling and Shuffling layers as described in more detail in Sec. A.1 of the supplementary." Note that Esser cites reference "22", Kingma's "Glow: Generative Flow with Invertible 1x1 Convolutions", which provides a description of block flow steps (i.e., functions) which process either an input or the output of a prior function (see Kingma, Fig. 2(b)). All of the elements of the claims are known in Esser and Kingma. The only difference is the express teaching of block flow steps in Kingma, which is utilized in Esser. It would have been obvious to one of ordinary skill in the art to use Kingma's block flow steps, since Esser already teaches their use to achieve the predictable results of factor processing.

Esser also discloses: wherein the function is further configured to provide at least one factor of the plurality of factors, wherein the at least one factor is either provided as at least part of the latent representation or is provided as at least part of an input of another function of the plurality of functions.

See Esser as informed by Kingma as cited above. Also see Esser, Fig. 1, depicting latent representations associated with the term "z." Also see bottom left of p. 2, e.g., "Given an autoencoder representation, its representation is mapped onto interpretable factors." Also see p. 3, section 3.1, "With this inverse map, T⁻¹, an internal representation z can be mapped to z̃, modified in semantically meaningful ways to obtain z̃* (e.g. changing a single interpretable concept), and mapped back to an internal representation of f."

wherein the plurality of functions comprises at least one function that provides a first factor to the latent representation … provides a second factor to another function from the plurality of functions,

See Esser as informed by Kingma as cited above. Also see Esser, Fig. 1, depicting functions (i.e., "Network T") for latent representation, and Fig. 11 depicting multiple function blocks for transforming input z utilizing functions for providing factors for a latent representation z̃: "A sequence of these three layers builds one invertible block, cf. Fig. 12. After passing the input z through multiple blocks, cf. Fig. 11, we split the output z̃ into factors (z̃_k), k = 0, …, K."

Esser does not expressly disclose providing factors to the latent representation and another function. However, this is taught by Kingma. See Kingma, Fig. 2(b), depicting a multi-scale architecture providing an output to representation z_i as well as an output to another function block. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Kingma's multi-scale architecture in Esser's transformation in order to utilize efficient computation as suggested by Esser (see Esser, p. 4, top right column).

wherein the plurality of factors of the latent representation are provided at least by two different functions of the plurality of functions,

See Esser, Fig. 11 on p. 11, section A.1, depicting Network T as a sequence of coupled blocks providing functions for splitting an input into a factorized representation. Also see Kingma, p. 6, "Each step of flow above should be preceded by some kind of permutation of the variables." Kingma's variable permutation represents a different functionality between each step of the flow.

wherein a respective function of the plurality of functions of the invertible factorization model is invertible,

See Esser, section 4.2, describing invertible functions: "First, we train the translator T to disentangle K (plus a residual) distinct factors z̃_k. For evaluation we modify a single factor z̃_k while keeping all others fixed. … This yields a sequence of modified factors (z̃_k^(1), z̃_k^(2), …, z̃_k^(n)) when performing n modification steps. We invert every element in this sequence back to its hidden representation and apply the classifier."

wherein for each respective function of the plurality of functions there exists a corresponding inverse function, … and is configured to determine the input of the respective function of the plurality of functions based on the at least one factor provided from the plurality of functions; and

See Esser, p. 3, right column, e.g., "On the other hand, it should enable semantic modifications on internal representations of f; this requires the inverse of T. With this inverse map, T⁻¹, an internal representation z can be mapped to z̃, modified in semantically meaningful ways to obtain z̃* (e.g. changing a single interpretable concept), and mapped back to an internal representation of f." Also see at least section 4.2, describing invertible functions: "For evaluation we modify a single factor z̃_k while keeping all others fixed. … We invert every element in this sequence back to its hidden representation and apply the classifier."

Esser does not expressly disclose wherein each inverse function is continuous, is continuously differentiable …. However, this is taught by Newby as cited above. See Newby ¶ 0030. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Newby's function with Esser's inverse functions in order to help minimize training iterations as suggested by Newby.

Esser also discloses: determining the output signal based on the latent representation using an internal classifier comprised in the classifier.

See Esser, Fig. 1 on p. 1, depicting the output associated with the latent representation. Also see section 4.2, "We invert every element in this sequence back to its hidden representation and apply the classifier."
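
For readers tracing this §103 mapping, a minimal sketch of the kind of invertible block the rejection reads onto Esser and Kingma may help: an affine coupling step that is continuous and continuously differentiable, emits one factor toward the latent representation, and passes the rest on to the next function, with an exact corresponding inverse. Everything here (the names, the toy scale and shift networks) is a hypothetical illustration, not code from Esser, Kingma, or the application.

```python
# Illustrative only: an affine-coupling step in the style the rejection maps
# onto Esser's network T and Kingma's Glow blocks. Hypothetical names throughout.
import numpy as np

def s(h):
    # Toy scale network; smooth (tanh), so the block is continuously differentiable.
    return np.tanh(h)

def t(h):
    # Toy shift network.
    return 0.5 * h

def coupling_forward(x):
    # Split the input; transform one half conditioned on the other.
    x1, x2 = np.split(x, 2)
    z_factor = x2 * np.exp(s(x1)) + t(x1)  # factor emitted (e.g. to the latent rep)
    return x1, z_factor                    # x1 flows on to the next function

def coupling_inverse(x1, z_factor):
    # Exact inverse: recover the block's input from its two outputs.
    x2 = (z_factor - t(x1)) * np.exp(-s(x1))
    return np.concatenate([x1, x2])

x = np.random.randn(8)
x1, z_factor = coupling_forward(x)
assert np.allclose(coupling_inverse(x1, z_factor), x)  # invertibility holds exactly
```

Because exp(s(·)) is strictly positive, the forward map stays bijective no matter what the scale and shift networks compute, which is why coupling-based flows fit the "for each respective function there exists a corresponding inverse function" style of limitation so directly.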
In regard to claim 16, Esser discloses:

The method according to claim 15, wherein the plurality of functions includes at least one function that provides a first factor to the latent representation and provides a second factor to another function from the plurality of functions:

See Esser as informed by Kingma as cited above. Also see Esser, Fig. 1, depicting functions (i.e., "Network T") for latent representation, and Fig. 11 depicting multiple function blocks for transforming input z utilizing functions for providing factors for a latent representation z̃: "A sequence of these three layers builds one invertible block, cf. Fig. 12. After passing the input z through multiple blocks, cf. Fig. 11, we split the output z̃ into factors (z̃_k), k = 0, …, K."

In regard to claim 18, Esser discloses:

18. The method according to claim 15, wherein the plurality of functions includes at least one function that is configured to provide two factors according to the formula … wherein x(i) is an input of the function, z1 is a first factor, z2 is a second factor, ε is an encoder of an autoencoder and D is a decoder of the autoencoder,

See Esser, Fig. 1 and bottom of p. 3, "This way, semantic modifications, which were previously only defined on z̃, can be applied to internal representations via T⁻¹. See Fig. 2 for an example, where z is modified by replacing one of its semantic factors z̃_k with that of another image." Note that Esser describes removing a decoded factor from an input using an encoded input, which applies to a broad but reasonable interpretation of the claim terms.

further wherein an inverse function corresponding with the function is given by …

See Esser, p. 3, "With this inverse map, T⁻¹, an internal representation z can be mapped to z̃, modified in semantically meaningful ways to obtain z̃* (e.g. changing a single interpretable concept), and mapped back to an internal representation of f. … z is modified by replacing one of its semantic factors z̃_k with that of another image."

In regard to claim 19, Esser does not expressly disclose:

19. The method according to claim 18, wherein the encoder is trained based on a gradient of a plurality of parameters of the encoder with respect to [the a] first difference.

However, this is taught by Newby. See ¶ 0034, e.g., "At each iteration of the training procedure, a randomly generated training image is processed by the network, the error H[p,q] is computed, and all of the trainable parameters are altered by a small amount (using the gradient decent [sic] method explained below) to reduce the observed error. This training procedure is repeated thousands of times until the error is minimized." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Newby's gradient descent method in Esser's training in order to reduce error as suggested by Newby.

In regard to claim 23, Esser discloses:

23. The method according to claim 15, wherein the internal classifier is a linear classifier.

See Esser, Table 5 in section A.3 on p. 12, which lists a linear classifier.

In regard to claim 25, Esser discloses:

25. An invertible factorization model, comprising:

See Esser, Fig. 1, depicting an invertible factorization model. All further limitations of claim 25 have been addressed in the above rejection of claim 15.

In regard to claim 26, Esser discloses:

26. A classifier, configured to: determine a latent representation based on the input signal using an invertible factorization model comprised in the classifier,

See Esser, Fig. 1, depicting a classifier which uses an invertible factorization model to determine a latent representation. All further limitations of claim 26 have been addressed in the above rejection of claim 15.

In regard to claim 28, Esser discloses:

28. A non-transitory machine-readable storage medium on which is stored a computer program for determining an output signal for an input signal using a classifier, wherein the output signal characterizes a classification of the input signal, the computer program, when executed by a processor, causing the processor to perform the following steps:

See Esser, Fig. 1, along with the discussion of experiments in section 4, which rely upon various datasets requiring storage media along with computer programming. All further limitations of claim 28 have been addressed in the above rejection of claim 15.

Claims 17, 24, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Esser in view of Newby and Kingma as applied above, and further in view of Zhang et al. (US 20170124379 A1, "Zhang").

In regard to claim 17, Esser discloses:

17. The method according to claim 15, wherein the classifier is trained based on a first difference between … [signals]

See Esser, p. 4, right column, e.g., "For training we use the negative log-likelihood as our loss function."

Esser does not expressly disclose: a determined output signal and a desired output, wherein the determined output signal is determined for a training input signal and the desired output signal characterizes a desired classification of the training input signal.

However, this is taught by Zhang. See Zhang, ¶ 0058, "The classifier 31 may trained using the first output result of the labeled fingerprint sample through a standard supervised training method, for example, gradient descent method, of a standard multi-layer neural network." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Zhang's training with Esser's network in order to train a network as suggested by Zhang and well-known to one of ordinary skill in the art.
In regard to claim 24, Esser discloses:

24. The method according to claim 17, wherein training the classifier further comprises training the invertible factorization model using a neural architecture search algorithm, wherein an objective function of the neural architecture search algorithm is based on a disentanglement of a plurality of factors of the latent representation.

See Esser, at least section 4.2, "First, we train the translator T to disentangle K (plus a residual) distinct factors z̃_k."

In regard to claim 27, Esser discloses:

27. A training system configured to train a classifier, the classifier configured to:

See Esser, Fig. 1, broadly depicting a training system. All further limitations of claim 27 have been addressed in the above rejections of claims 15 and 16.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Esser in view of Newby and Kingma as applied above, and further in view of Hadap et al. (US 20180365874 A1, "Hadap").

In regard to claim 20, Esser discloses:

20. The method according to claim 19, wherein the decoder is trained …

See Esser, p. 4, right column, e.g., "For training we use the negative log-likelihood as our loss function."

Esser does not expressly disclose: … based on a gradient of a plurality of parameters of the decoder with respect to a second difference … wherein x(i) is an input of the function and the sum is over all squared elements of the subtraction.

However, this is taught by Hadap. See Hadap, ¶ 0024, "… employing an optimization method such as gradient descent. The learning phase will typically utilize a set of training data and validation data. Full batch learning, mini-batch learning, stochastic gradient descent or any other training methods may be employed." Also see ¶ 0042, "The L2 loss functions shown in FIG. 2c, also known as least squares error ('LSE'), minimize a sum of squares of the difference between a target value y and estimate value h(x) as follows: L2 = Σ_{i=0}^{n} (y_i − h(x_i))²." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Hadap's LSE with Esser's training in order to codify the intercoupling between artificial neural units as suggested by Hadap (see ¶ 0024).
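
The claim 19 and claim 20 rejections combine two standard ingredients: Newby's gradient descent and Hadap's least-squares (L2) loss. The sketch below is a hedged illustration of that combination on a toy linear decoder; the data, the model, and all names are stand-ins, not the application's or the references' actual training procedure.

```python
# Gradient-descent training against an L2 (least-squares) loss, in the generic
# form the cited Newby and Hadap passages describe. Toy example only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))             # decoder inputs (e.g. latent codes)
Y = X @ rng.normal(size=(4, 4)) + 0.1    # targets the decoder should reproduce
W = np.zeros((4, 4))                     # trainable decoder parameters

for step in range(500):
    residual = X @ W - Y                 # h(x_i) - y_i
    l2 = np.sum(residual ** 2)           # Hadap's LSE: sum of squared differences
    grad = 2 * X.T @ residual            # gradient of the L2 loss w.r.t. W
    W -= 1e-3 * grad                     # alter parameters by a small amount
```

Each pass computes the error and then nudges every trainable parameter against its gradient, which is the loop Newby ¶ 0034 describes as repeating "until the error is minimized."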
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Esser in view of Newby and Kingma as applied above, and further in view of Shillingford et al. (US 20210110831 A1, "Shillingford") and Kursun et al. (US 20210173905 A1, "Kursun").

In regard to claim 21, Esser discloses:

21. The method according to claim 15, wherein the plurality of functions includes at least one function that is configured to provide three factors according to the formula … wherein x(i) is the input of the function, z1 is a first factor and a result of applying a … normalization to the input, … the … normalization further depends on a scale parameter γ and a shift parameter β, …

See Esser, p. 11, section A.1, "A sequence of these three layers builds one invertible block, cf. Fig. 12." Also see p. 12, section A.1, e.g., "Actnorm consists of learnable shift and scale parameters for each channel, which are initialized to provide activations with zero mean and unit variance."

Esser does not expressly disclose: group normalization. However, this is taught by Shillingford. See Shillingford, ¶ 0055, "A group normalization layer … The normalization statistics may comprise a mean μ and/or standard deviation σ or variance σ²." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Shillingford's group normalization with Esser's normalization in order to facilitate distributing computation across multiple processing units by reducing communication between units as suggested by Shillingford (see ¶ 0010).

Esser does not expressly disclose: … z2 is a second factor and an expected value of the group normalization, z3 is a third factor and a standard deviation of the group normalization and … wherein an inverse of the function is given by …

However, this is taught by Kursun and Al-Fattah. See Kursun, ¶ 0092, "As illustrated in FIG. 11, the output, yt, of a first layer, N1, is normalized by subtracting a batch mean, μ, and dividing by the batch standard deviation, σ. In order to maintain optimal weighting in the next layer, N2, the batch normalization multiples the normalized output, ŷ, by a standard deviation parameter, γ, and adds a mean parameter, β." Also see Al-Fattah, ¶ 0030, "In this study, two normalization algorithms were applied: mean/standard deviation … After network execution, de-normalizing of the output follows the reverse procedure: subtraction of the shift factor, followed by division by the scale factor." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the normalization calculations of Kursun and Al-Fattah with Esser's functions in order to provide a smoothing effect for training and remove unwanted side effects that can affect accuracy as suggested by Kursun (see ¶ 0049).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Esser in view of Newby and Kingma as applied above, and further in view of Kim et al. (US 10049323 B2, "Kim").

In regard to claim 22, Esser discloses:

22. The method according to claim 15, wherein the plurality of functions includes at least one function that provides two factors … wherein x(i) is the input of the function, z1 is a first factor, z2 is a second factor and ReLU is a rectified linear unit, … function (F1, F2, F3, F4)

See Esser, p. 4, "Each semantic concept F ∈ {1, …, K}. … To fit this model to data, we utilize the invertibility of T to directly compute and maximize the likelihood of pairs (zᵃ, zᵇ) = (E(xᵃ), E(xᵇ))." Also see p. 12, Table 3, disclosing the use of "LeakyReLU."

Esser does not expressly disclose: factors according to the formula … further wherein the inverse function for the function (F1, F2, F3, F4) is given by …

However, this is taught by Kim. See Kim, Fig. 1 and col. 2, lines 17-29, e.g., "The CReLU is an activation scheme that additionally makes negative activation as well as positive activation of the ReLU just as following Formula. CReLU(x) = (ReLU(x), ReLU(−x)) = (max(0,x), max(0,−x)) … In FIG. 1, the CReLU 120 includes one scale layer 121, the two ReLU layers 122, and a concatenation layer 123." Note that Esser, p. 14, teaches: "We modify the factor z̃_k corresponding to the semantic concept, invert the modified representation back to the latent space of the autoencoder." As depicted in Fig. 1, a CReLU operation involves concatenation. The inverse of a CReLU operation would naturally be to remove the concatenation, and as such, the operation "z1 − z2" is applicable. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Kim's CReLU operation with Esser's network in order to increase detection accuracy while maintaining detection speed as suggested by Kim (see col. 2, lines 23-27).
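
The examiner's claim 22 logic, that the natural inverse of Kim's CReLU pair is the subtraction z1 − z2, is easy to verify numerically. The snippet below only checks that identity; it is not code from Kim, Esser, or the application.

```python
# CReLU(x) = (ReLU(x), ReLU(-x)); subtracting the two factors recovers x exactly.
import numpy as np

def crelu(x):
    z1 = np.maximum(0.0, x)   # ReLU(x): positive part
    z2 = np.maximum(0.0, -x)  # ReLU(-x): negative part, made positive
    return z1, z2

x = np.array([-2.0, -0.5, 0.0, 1.5])
z1, z2 = crelu(x)
assert np.allclose(z1 - z2, x)  # inverse: x = z1 - z2
```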
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brent Hoover, whose telephone number is (303) 297-4403. The examiner can normally be reached Monday - Friday, 9-5 MST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRENT JOHNSTON HOOVER/
Primary Examiner, Art Unit 2127

Prosecution Timeline

Mar 16, 2023
Application Filed
Nov 14, 2023
Preliminary Amendment Filed
Dec 04, 2025
Non-Final Rejection — §101, §103, §112, §DP
Apr 07, 2026
Response Filed (Response after Non-Final Action)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602613
PRIVACY-ENHANCED TRAINING AND DEPLOYMENT OF MACHINE LEARNING MODELS USING CLIENT-SIDE AND SERVER-SIDE DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603147
PREDICTING PROTEIN STRUCTURES USING AUXILIARY FOLDING NETWORKS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585926
ADJUSTING PRECISION AND TOPOLOGY PARAMETERS FOR NEURAL NETWORK TRAINING BASED ON A PERFORMANCE METRIC
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585934
COMPRESSING TOKENS BASED ON POSITIONS FOR TRANSFORMER MODELS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579215
LEARNING ORDINAL REGRESSION MODEL VIA DIVIDE-AND-CONQUER TECHNIQUE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+22.7%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 359 resolved cases by this examiner. Grant probability derived from career allow rate.
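
One hedged reading of how these projections relate: if 99% is the grant probability when an interview is held and the lift is +22.7 points, the implied probability without an interview is about 76.3%, and a weighted blend of the two paths lands near the 83% career average. The interview share below is a hypothetical parameter chosen purely for illustration; the dashboard does not state its method.

```python
# Hypothetical blend showing one way the displayed projections could reconcile.
with_interview = 99.0                      # displayed grant probability with interview (%)
lift = 22.7                                # displayed interview lift (percentage points)
without_interview = with_interview - lift  # implied ~76.3% without an interview
interview_share = 0.30                     # hypothetical fraction of cases interviewed
blended = (interview_share * with_interview
           + (1 - interview_share) * without_interview)
print(round(blended, 1))                   # ~83.1, near the 83% career allow rate
```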
