Prosecution Insights
Last updated: April 19, 2026
Application No. 17/749,905

SYSTEM AND METHOD FOR MACHINE LEARNING ARCHITECTURE WITH INVERTIBLE NEURAL NETWORKS

Final Rejection: §101, §103, §112
Filed: May 20, 2022
Examiner: JONES, CHARLES JEFFREY
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Royal Bank Of Canada
OA Round: 2 (Final)

Grant Probability: 27% (At Risk)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 93%

Examiner Intelligence

Grants only 27% of cases.

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +65.9% (strong; among resolved cases with interview, with vs. without)
Typical Timeline: 4y 2m average prosecution; 27 applications currently pending
Career History: 42 total applications across all art units
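A sketch of the presumed arithmetic behind the interview-lift figure above. The report does not show the per-case interview split, so the inputs here are assumptions for illustration only, taken from the headline numbers:

```python
# Hypothetical reconstruction of the dashboard's interview-lift metric.
# The exact per-case data behind the 65.9% figure is not shown; these
# inputs are assumed from the headline numbers above.
career_allow_rate = 4 / 15          # 4 granted of 15 resolved cases (~26.7%)
rate_with_interview = 0.93          # "With Interview" probability above

# Lift expressed in percentage points, as the dashboard appears to do.
lift_pp = (rate_with_interview - career_allow_rate) * 100
print(f"career allow rate: {career_allow_rate:.1%}")
print(f"interview lift:    +{lift_pp:.1f} pp")
```

This lands near the reported "+66%" headline; the small gap to the printed 65.9% presumably comes from unrounded inputs not shown in the report.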

Statute-Specific Performance

§101: 34.5% (-5.5% vs TC avg)
§103: 29.1% (-10.9% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 15 resolved cases
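Each "vs TC avg" delta appears to be the examiner's per-statute rate minus the Tech Center average estimate; the averages themselves are not printed but can be recovered by simple subtraction (an assumed reading of the table, shown here as arithmetic only):

```python
# Recover the implied Tech Center average from each statute's rate and its
# delta, assuming delta = examiner rate - TC average (not stated explicitly
# in the report).
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (34.5, -5.5),
    "103": (29.1, -10.9),
    "102": (17.7, -22.3),
    "112": (17.7, -22.3),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC avg estimate = {tc_avg:.1f}%")
```

Under that reading, all four statutes imply the same Tech Center average estimate of about 40%, which is consistent with a single baseline being used across the chart.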

Office Action

§101 §103 §112
DETAILED ACTION

This action is responsive to the amendment filed on 07/23/2025. Claims 1-3, 6-12, and 15-20 have been amended. Claims 1-20 are pending in the case.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 112 (Withdrawn)

The current amendments have overcome the 35 U.S.C. 112(b) rejections concerning claims 1-5, 8, 10-14, 17, and 20; therefore, the 35 U.S.C. 112(b) rejections from the previous action have been withdrawn, as the amended claims have provided a remedy.

Claim Rejections - 35 USC § 112

Claims 6, 9, 15, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Dependent claims inherit the deficiencies of the independent claims.

Claims 6 and 15 recite the limitation "select Z to be 0". There is insufficient antecedent basis for this limitation in the claims. For the purposes of compact prosecution, the examiner will interpret "select Z to be 0" as selecting a variable to be 0.

Claims 9 and 18 recite the limitation "latent variable Z". There is insufficient antecedent basis for this limitation in the claims. For the purposes of compact prosecution, the examiner will interpret "latent variable Z" as a variable.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1: The claim recites "encode a plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user converting information using mathematical principles. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "observing a new input and generating a predicted output for the new input" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user choosing a value based on a separate value. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "to encapsulate a posterior for the plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user judging all of the information known about a parameter after observing data. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application. Specifically, the additional elements are:

(a) "at least one processor" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(b) "a memory comprising instructions which, when executed by the processor, configure the processor to" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(c) "positioning gated recurrent units at input and output nodes of an invertible neural network" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h));
(d) "training the INN… with the encoded plurality of inputs and associated outputs" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(e) "including a plurality of affine layers in the INN" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (b), and (d) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(f). Additional elements (c) and (e) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place. See MPEP 2106.05(h).

The additional elements (a)-(e) in Claim 1 do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
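For context on element (e): the "plurality of affine layers" refers to affine coupling layers of the kind used in the NICE/INN literature the §103 section of this action cites. A minimal NumPy sketch (not the application's implementation; the fixed linear maps `W_s` and `W_t` stand in for learned subnetworks) showing why such a layer is invertible by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# One affine coupling layer: split u into halves (u1, u2); u2 is scaled and
# shifted by functions of u1 only, so the transform inverts exactly.
# W_s and W_t are stand-ins for the "fully connected layers per affine
# coefficient function" described in the cited art.
W_s = rng.normal(size=(2, 2)) * 0.1
W_t = rng.normal(size=(2, 2)) * 0.1

def forward(u):
    u1, u2 = u[:2], u[2:]
    v2 = u2 * np.exp(W_s @ u1) + (W_t @ u1)   # affine transform of u2
    return np.concatenate([u1, v2])           # u1 passes through unchanged

def inverse(v):
    v1, v2 = v[:2], v[2:]
    u2 = (v2 - (W_t @ v1)) * np.exp(-(W_s @ v1))
    return np.concatenate([v1, u2])

u = rng.normal(size=4)
assert np.allclose(inverse(forward(u)), u)    # invertible by construction
```

Because `u1` is untouched in the forward pass, the scale and shift can be recomputed exactly during inversion; no learned inverse is needed.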
Regarding Claim 2: The rejection of claim 1 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "producing a point estimate without sampling" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user deciding accuracies and/or values. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application. Specifically, the additional element is: (a) "training the INN includes" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(f). The additional element (a) in Claim 2 does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding Claim 3: The rejection of claim 1 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "include a latent variable Z of dimension sz" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user adding information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "include the plurality of inputs as conditional information to at least one coupling layer" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user choosing information to add that will be transformed and used to calculate an inverse transformation. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 3 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 4: The rejection of claim 3 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "combine the plurality of inputs with Z" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user adding a variable with another set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "apply the combined Z through the INN to determine the relationship between the plurality of inputs and the associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user using a set of information as an input and deciding how the input changes the output. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 4 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 5: The rejection of claim 4 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "the latent variable Z is sampled many times" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user using data from a set multiple times. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "the plurality of inputs are combined with each sample of the latent variable Z" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user adding two variables/sets together. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "each combined Z is applied through the INN" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user using a set of information as an input. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "a forward function and a corresponding inverse function result from the application of each combined Z through the INN, the forward function and the corresponding inverse function representing the relationship between the plurality of inputs and the associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user creating a function based on deciding the correlation of an output from an input. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 5 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
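Claims 3-5 as characterized above involve a latent variable Z of dimension sz, sampled many times and combined with the conditioning input through the network's inverse pass to produce posterior samples. A toy sketch of that sampling loop, with a fixed linear map standing in for the trained inverse (all names and values here are illustrative assumptions, not the application's network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the trained inverse map x = g(y, z): a fixed linear function.
# sz is the latent dimension recited in claim 3.
sz = 2
A = rng.normal(size=(3, sz))
b = rng.normal(size=3)

def g(y, z):
    # Combine the conditioning input y with latent z and map to x.
    return A @ z + b * y

y_new = 0.7                                    # new observation (conditional info)
z_samples = rng.standard_normal((1000, sz))    # Z sampled many times, z ~ N(0, I)

# Applying each combined (y, z) through the inverse yields posterior samples.
xs = np.array([g(y_new, z) for z in z_samples])
print("posterior sample mean:", xs.mean(axis=0))
```

Each draw of z gives one candidate x consistent with the observation, so the collection of outputs approximates the posterior the claims describe as being "encapsulated".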
Regarding Claim 6: The rejection of claim 2 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "select a variable to be 0", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim recites "apply an inverse function", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 6 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 7: The rejection of claim 2 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "determine a maximum a posterior estimate by applying a transformation to a point of maximum density of a base distribution", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim recites "subtract an arithmetic mean of a scaling parameter", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 7 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 8: The rejection of claim 1 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "apply at least one of the encapsulated posterior or the point estimate to the new observation" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user checking the accuracy of a calculation. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 8 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 9: The rejection of claim 1 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "determine the plurality of associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user deciding outputs. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "determine an inverse solution" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user deciding on a solution. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application.
Specifically, the additional elements are:

(a) "receive the plurality of inputs" (amounts to mere extra-solution activity of obtaining and/or gathering data over a network; see MPEP 2106.05(g));
(b) "send the plurality of associated outputs to an encoder" (amounts to mere extra-solution activity of obtaining and/or gathering data over a network; see MPEP 2106.05(g));
(c) "receive a latent variable Z from the encoder" (amounts to mere extra-solution activity of obtaining and/or gathering data over a network; see MPEP 2106.05(g)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (b), and (c) recite receiving and sending inputs/outputs, which is a well-understood, routine, and conventional activity of "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i), using the Internet to gather data; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The additional elements (a), (b), and (c) in Claim 9 do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding Claim 10: The claim recites "encode a plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user converting information using mathematical principles. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "observing a new input and generating a predicted output for the new input" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user choosing a value based on a separate value. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "to encapsulate a posterior for the plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user judging all of the information known about a parameter after observing data. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application. Specifically, the additional elements are:

(a) "positioning gated recurrent units at input and output nodes of an invertible neural network" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h));
(b) "training the INN… with the encoded plurality of inputs and associated outputs" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(c) "including a plurality of affine layers in the INN" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (b) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(f). Additional elements (a) and (c) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place. See MPEP 2106.05(h).

The additional elements (a), (b), and (c) in Claim 10 do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding Claims 11-18: Because the limitations and additional elements of Claims 11-18 are substantially similar to those of Claims 2-9, respectively, Claims 11-18 are rejected under the same §101 analysis.

Regarding Claim 19: The claim recites "encode a plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user converting information using mathematical principles. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "observing a new input and generating a predicted output for the new input" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user choosing a value based on a separate value. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "to encapsulate a posterior for the plurality of inputs and associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user judging all of the information known about a parameter after observing data. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application. Specifically, the additional elements are:

(a) "processor" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(b) "computer readable medium having a non-transitory memory storing a set of instructions" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(c) "positioning gated recurrent units at input and output nodes of an invertible neural network" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h));
(d) "training the INN… with the encoded plurality of inputs and associated outputs" (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer"; see MPEP 2106.05(f));
(e) "including a plurality of affine layers in the INN" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use; see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (b), and (d) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(f). Additional elements (c) and (e) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place. See MPEP 2106.05(h).

The additional elements (a)-(e) in Claim 19 do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
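Claims 6 and 15 (as interpreted above) recite selecting Z to be 0 and applying an inverse function, i.e. taking the point of maximum density of the Gaussian base distribution to obtain a point estimate without any sampling. A toy illustration, again using a fixed linear map as a stand-in for the trained inverse (an assumption; not the application's network):

```python
import numpy as np

# For a Gaussian latent z ~ N(0, I), density is maximal at z = 0, so pushing
# z = 0 through the inverse map yields a single point estimate with no
# sampling. A and b are illustrative stand-ins for a learned inverse map.
A = np.array([[0.5, -0.2],
              [0.1,  0.8],
              [0.3,  0.3]])
b = np.array([1.0, -1.0, 0.5])

def g(y, z):
    return A @ z + b * y

y_new = 0.7
point_estimate = g(y_new, np.zeros(2))   # select Z to be 0
# A @ 0 vanishes, so the estimate equals b * y_new = [0.7, -0.7, 0.35]
print(point_estimate)
```

This is the "point estimate without sampling" of claim 2: a single deterministic pass replaces the Monte Carlo loop over sampled Z values.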
Regarding Claim 20: The rejection of claim 19 is incorporated and further:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "sample a latent variable Z several times" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user using data from a set multiple times. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "combine the plurality of inputs with each sampled Z" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user adding two variables/sets together. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "apply each combined Z through the INN to determine the relationship between the plurality of inputs and the associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user using a set of information as an input and deciding how the input changes the output. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "a forward function and a corresponding inverse function result from the application of each combined Z through an INN, the forward function and the corresponding inverse function representing the relationship between the plurality of inputs and the associated outputs" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user creating a function based on deciding the correlation of an output from an input. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "select Z to be 0", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim recites "apply an inverse function", which is an abstract idea (mathematical calculations; see MPEP 2106.04(a)(2)(I)(C)).

The claim recites "predict the output for the new observation" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user deciding what an output will be for an input. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "apply at least one of the encapsulated posterior or a point estimate to the new observation" which, under the broadest reasonable interpretation, covers performance of the limitation in the mind. The limitation encompasses a user checking the accuracy of a calculation. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 20 does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C.
103 as being obvious over Ardizzone et al.(“Analyzing Inverse Problems with Invertible Neural Networks”, henceforth known as Ardizzone) and Zhou et al. (“Selective Encoding for Abstractive Sentence Summarization”, henceforth known as Zhou) Regarding Claim 1: Ardizzone discloses at least one processor; and a memory comprising instructions which, when executed by the processor(“True INNs can be built using coupling layers, as introduced in the NICE … architectures… NICE-like design as a memory-efficient alternative to residual networks” which discloses the use of INN’s in electronics by establishing NICE as an memory-efficient alternatives) Ardizzone discloses an invertible neural network (INN) to encode a plurality of inputs and associated outputs(Ardizzone, Page 16, Paragraph 1, “as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair.”) Ardizzone discloses training the INN, including a plurality of affine layers in the INN(Ardizzone, Page 19, Paragraph 5, “INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers” where the description of the network describing fully connected layers per affine coefficient is considered plurality of affine layers), with the encoded plurality of inputs and associated outputs to encapsulate a posterior for a-the plurality of inputs and associated outputs(Ardizzone, Page 1, Paragraph 3, “Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x ! y and get the inverse y ! 
x for free by running them backwards at prediction time” where x is an associated input with y) Ardizzone discloses and observing a new input and generating a predicted output for the new input(Ardizzone, Page 2, Figure 1 shows the INN framework is depicted with arrows showing a forward process (x [Wingdings font/0xE0]y), which represents the prediction of output y for a given x and, similarly, the inverse ([y,z] [Wingdings font/0xE0] x)) Ardizzone does not disclose, however Zhou discloses positioning gated recurrent units at input and output nodes of a…neural network(Zhou, Page 4, Figure 2 and Zhou, Page 3, Col. 2, Paragraph 4, “As shown in Figure 2, our model consists of a sentence encoder using the Gated Recurrent Unit (GRU)…and an attention-equipped GRU decoder”, where figure 2 shows an encoder and decoder at the input and output nodes of a neural network and the encoder/decoder are Gated Recurrent Units (GRUs)) References Ardizzone and Zhou are analogous art because they are from the same field of endeavor of using encoder-decoder machine learning models. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Ardizzone and Zhou before him or her, to modify the encoder-decoder model of Ardizzone to include the gated recurrent units of Zhou if y or x is a sequence of states (time series/sequence of latent states). The suggestion/motivation for doing so would have been “For each time step i, the selective gate takes the sentence representation s and BiGRU hidden hi as inputs to compute the gate vector sGatei:”(Zhou, Page 4, Col. 1, Paragraph 3) Regarding Claim 2, the rejection of claim 1 with is incorporated and further: Ardizzone discloses wherein training the INN includes producing a point estimate without sampling(Ardizzone, Page 7, Paragraph 5, “For point estimates ^x, i.e. 
MAP estimates, we compute the deviation from ground-truth values x* in terms of the RMSE over test set observations y*,” where the point estimates are found by an algorithm rather than sampling is considered INN producing a point estimate without sampling(See Also Ardizzone, Page 8, Paragraph 1, “Each posterior uses 4096 samples, or 256 for ABC; all MAP estimates are found using the mean-shift algorithm” where MAP estimates are found using deterministic mean-shift algorithm)) Regarding Claim 3, the rejection of claim 1 with is incorporated and further: Ardizzone discloses include a latent variable Z of dimension sz; (Ardizzone, Page 2, Paragraph 1, “While this would allow for training of a standard regression model, we want to approximate the full posterior probability. To this end, we introduce a latent random variable z ∈ R K drawn from a multi-variate standard normal distribution and reparametrize q(x | y) in terms of a deterministic function g of y and z, represented by a neural network with parameters θ” where z is a latent variable and is used to estimate the posterior) Ardizzone discloses include the plurality of inputs as conditional information to at least one coupling layer (Ardizzone, Page 4, Paragraph 8, “The basic unit of this network is a reversible block consisting of two complementary affine coupling layers. 
Hereby, the block’s input vector u is split into two halves, u1 and u2” where two complementary affine coupling layers are considered coupling layers ) Regarding Claim 4, the rejection of claim 3 with is incorporated and further: Ardizzone discloses sample the latent variable Z (Ardizzone, Page 16, Paragraph 4, “To analyze how the latent space of our INN is structured for this task, we choose a fixed label y∗ and sample z from a dense grid” where sample z from a dense grid is considered to sample the latent variable Z) Ardizzone discloses combine the plurality of inputs with Z and apply the combined Z through the INN to determine the relationship between the plurality of inputs and the associated outputs(Ardizzone, Page 4, Equation 1 defines the inverse function as x =g(y,z;Θ) indicating that the input x is reconstructed by combining the observed output y with the latent variable z and is considered combining the plurality of inputs with Z. Similarly, using z to capture additional information between observed output y and the corresponding input x is considered determining the relationship between the plurality of inputs and associated outputs) Regarding Claim 5, the rejection of claim 4 with is incorporated and further: Ardizzone discloses the latent variable Z is sampled many times; (Ardizzone, Page 16, Paragraph 4, “For each z, we compute x through our inverse network and colorize this point in latent (z) space according to the distance from the closest mode in x-space” where for each is considered many times) Ardizzone discloses the plurality of inputs are combined with each sample of the latent variable Z; (Ardizzone, Page 16, Paragraph 4, “For each z, we compute x through our inverse network and colorize this point in latent (z) space according to the distance from the closest mode in x-space” where using x (the input) with each z (latent variable) is considered combining inputs with each sample latent variable) Ardizzone discloses a forward function 
(Ardizzone, Page 4, Equation 2) and a corresponding inverse function result from the application of each combined Z through the INN (Ardizzone, Page 4, Equations 1 and 2, which show how the INN utilizes the latent variable z to achieve a mapping between inputs and outputs by combining z through the INN), the forward function and the corresponding inverse function representing the relationship between the plurality of inputs and the associated outputs (Ardizzone, Page 4, Equations 1 and 2, where the combination of z through the INN and the invertibility of using the outputs to learn the inputs is considered a relationship between inputs and outputs).

Regarding Claim 6, the rejection of claim 2 is incorporated and further:

Ardizzone discloses select Z to be 0 (Ardizzone, Page 14, Equation 11 describes that the loss of Z = 0) and apply an inverse function (Ardizzone, Page 14, Paragraph 1, "If some bijective function f : x → z transforms a probability density p_X(x) to p_Z(z), then the inverse function f⁻¹ transforms p_Z(z) back to p_X(x)," where the inverse function f⁻¹ transforming p_Z(z) back to p_X(x) is considered applying an inverse function).

Regarding Claim 7, the rejection of claim 2 is incorporated and further:

Ardizzone discloses determine a maximum a posteriori estimate by applying a transformation (Ardizzone, Page 4, Equation 1, where x = g(y, z; Θ) is a deterministic transformation that maps the observation y and the latent variable to an estimate of x and is considered a transformation) to a point of maximum density of a base distribution (Ardizzone, Page 4, Equation 1, where N(z; 0, I_K) is considered a Gaussian distribution whose maximum density occurs at 0) and subtract an arithmetic mean (Ardizzone, Page 5, part of the theorem between Paragraphs 4-5, L_y = 𝔼[(y − f_y(x))²], where the expected value 𝔼 is considered an arithmetic mean, as the expected value is a measure of central tendency or average, and where the subtraction used to create the arithmetic mean is considered subtracting an arithmetic mean) of a scaling parameter (Ardizzone, Page 5, Paragraph 5, "we prove that Lx is guaranteed to be zero when the forward losses Ly and Lz have converged," where L_y scales L_x and is considered a scaling parameter).

Regarding Claim 8, the rejection of claim 1 is incorporated and further:

Ardizzone discloses apply at least one of the encapsulated posterior (Ardizzone, Page 2, Paragraph 1, "Thus, the INN represents the desired posterior p(x | y) by a deterministic function x = g(y, z) that transforms ("pushes") the known distribution p(z) to x-space, conditional on y," where the network applying an estimated posterior to map from the latent variables back to the input space is considered applying an encapsulated posterior) or a point estimate to the new observation (Ardizzone, Page 4, Equation 1, where x = g(y, z; Θ) being used to infer x from a new observation y is considered applying a point estimate to a new observation).

Regarding Claim 9, the rejection of claim 1 is incorporated and further:

Ardizzone discloses receive the plurality of inputs and determine the plurality of associated outputs (Ardizzone, Page 1, Paragraph 3, "We can train them on the well-understood forward process x → y and get the inverse y → x for free by running them backwards at prediction time," where x is considered an input and y an output associated with x); send the plurality of associated outputs to an encoder (Ardizzone, Page 2, Paragraph 1, "we introduce additional latent output variables z, which capture the information about x that is not contained in y," where sending the inputs to capture information in x and not in y is considered sending the associated outputs to an encoder); receive the latent variable Z from the encoder (Ardizzone, Page 2, Paragraph 1, "Thus, our INN learns to associate hidden parameter values x with unique pairs [y, z] of measurements and latent variables," which is considered receiving the latent variable from the encoder); and determine an inverse solution (Ardizzone, Page 2, Paragraph 1, "Forward training optimizes the mapping [y, z] = f(x) and implicitly determines its inverse x = f⁻¹(y, z) = g(y, z)," where implicitly determining the inverse x = f⁻¹(y, z) = g(y, z) is considered determining an inverse solution).

Regarding Claim 10:

Ardizzone discloses an invertible neural network (INN) to encode a plurality of inputs and associated outputs (Ardizzone, Page 16, Paragraph 1, "as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair").

Ardizzone discloses training the INN, including a plurality of affine layers in the INN (Ardizzone, Page 19, Paragraph 5, "INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers," where the description of fully connected layers per affine coefficient function is considered a plurality of affine layers), with the encoded plurality of inputs and associated outputs to encapsulate a posterior for the plurality of inputs and associated outputs (Ardizzone, Page 1, Paragraph 3, "Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x → y and get the inverse y → x for free by running them backwards at prediction time," where x is an input associated with y).

Ardizzone discloses observing a new input and generating a predicted output for the new input (Ardizzone, Page 2, Figure 1, which depicts the INN framework with arrows showing a forward process (x → y), representing the prediction of output y for a given x, and, similarly, the inverse ([y, z] → x)).

Ardizzone does not disclose, however, Zhou discloses positioning gated recurrent units at input and output nodes of a…neural network (Zhou, Page 4, Figure 2 and Zhou, Page 3, Col. 2, Paragraph 4, "As shown in Figure 2, our model consists of a sentence encoder using the Gated Recurrent Unit (GRU)…and an attention-equipped GRU decoder," where Figure 2 shows an encoder and decoder at the input and output nodes of a neural network and the encoder/decoder are Gated Recurrent Units (GRUs)).

Ardizzone and Zhou are analogous art because they are from the same field of endeavor of encoder-decoder machine learning models. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Ardizzone and Zhou before him or her, to modify the encoder-decoder model of Ardizzone to include the gated recurrent units of Zhou if y or x is a sequence of states (a time series or sequence of latent states). The suggestion/motivation for doing so would have been "For each time step i, the selective gate takes the sentence representation s and BiGRU hidden hi as inputs to compute the gate vector sGatei:" (Zhou, Page 4, Col. 1, Paragraph 3).

Regarding Claims 11-18: The rejection of Claim 10 is incorporated in Claims 11-18 and, further, Claims 11-18 are rejected under the same rationale as set forth in the rejections of Claims 2-9, respectively, due to their substantially similar limitations.
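For readers unfamiliar with the technology at issue, the "reversible block consisting of two complementary affine coupling layers" quoted from Ardizzone in the claim 3 mapping can be sketched as follows. This is a generic RealNVP-style coupling block under illustrative naming, not code from the reference; `s1`, `t1`, `s2`, `t2` stand in for the learned scale/translation subnetworks.

```python
import numpy as np

def coupling_forward(u, s1, t1, s2, t2):
    """One affine coupling block (illustrative sketch).

    The input vector u is split into two halves, u1 and u2; each
    half is transformed by an affine map whose scale/translation
    coefficients depend only on the *other* half, which is what
    makes the block invertible by construction.
    """
    u1, u2 = np.split(u, 2)
    v1 = u1 * np.exp(s2(u2)) + t2(u2)
    v2 = u2 * np.exp(s1(v1)) + t1(v1)
    return np.concatenate([v1, v2])

def coupling_inverse(v, s1, t1, s2, t2):
    """Exact inverse of coupling_forward: undo the two affine
    maps in reverse order, without inverting the subnetworks."""
    v1, v2 = np.split(v, 2)
    u2 = (v2 - t1(v1)) * np.exp(-s1(v1))
    u1 = (v1 - t2(u2)) * np.exp(-s2(u2))
    return np.concatenate([u1, u2])
```

Because each half is updated using only the other half, the inverse pass is exact and cheap, which is why a forward function and a corresponding inverse function (the Equations 1 and 2 cited above) come "for free" in this architecture.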
Regarding Claim 19:

Ardizzone discloses a computer readable medium having a non-transitory memory storing a set of instructions ("True INNs can be built using coupling layers, as introduced in the NICE … architectures… NICE-like design as a memory-efficient alternative to residual networks," which discloses the use of INNs in electronics by establishing NICE as a memory-efficient alternative).

Ardizzone discloses an invertible neural network (INN) to encode a plurality of inputs and associated outputs (Ardizzone, Page 16, Paragraph 1, "as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair").

Ardizzone discloses train the INN, including a plurality of affine layers in the INN (Ardizzone, Page 19, Paragraph 5, "INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers," where the description of fully connected layers per affine coefficient function is considered a plurality of affine layers), with the encoded plurality of inputs and associated outputs to encapsulate a posterior for the plurality of inputs and associated outputs (Ardizzone, Page 1, Paragraph 3, "Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x → y and get the inverse y → x for free by running them backwards at prediction time," where x is an input associated with y).

Ardizzone discloses observing a new input and generating a predicted output for the new input (Ardizzone, Page 2, Figure 1, which depicts the INN framework with arrows showing a forward process (x → y), representing the prediction of output y for a given x, and, similarly, the inverse ([y, z] → x)).

Ardizzone does not disclose, however, Zhou discloses position gated recurrent units at input and output nodes of a…neural network (Zhou, Page 4, Figure 2 and Zhou, Page 3, Col. 2, Paragraph 4, "As shown in Figure 2, our model consists of a sentence encoder using the Gated Recurrent Unit (GRU)…and an attention-equipped GRU decoder," where Figure 2 shows an encoder and decoder at the input and output nodes of a neural network and the encoder/decoder are Gated Recurrent Units (GRUs)).

Ardizzone and Zhou are analogous art because they are from the same field of endeavor of encoder-decoder machine learning models. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Ardizzone and Zhou before him or her, to modify the encoder-decoder model of Ardizzone to include the gated recurrent units of Zhou if y or x is a sequence of states (a time series or sequence of latent states). The suggestion/motivation for doing so would have been "For each time step i, the selective gate takes the sentence representation s and BiGRU hidden hi as inputs to compute the gate vector sGatei:" (Zhou, Page 4, Col. 1, Paragraph 3).

Regarding Claim 20, the rejection of claim 19 is incorporated and further:

Ardizzone discloses sample a latent variable Z several times (Ardizzone, Page 16, Paragraph 4, "To analyze how the latent space of our INN is structured for this task, we choose a fixed label y∗ and sample z from a dense grid," where sampling z from a dense grid is considered sampling the latent variable Z).

Ardizzone discloses combine the plurality of inputs with each sampled Z and apply each combined Z through an INN to determine the relationship between the plurality of inputs and the associated outputs (Ardizzone, Page 4, Equation 1 defines the inverse function as x = g(y, z; Θ), indicating that the input x is reconstructed by combining the observed output y with the latent variable z, which is considered combining the plurality of inputs with Z. Similarly, using z to capture additional information between the observed output y and the corresponding input x is considered determining the relationship between the plurality of inputs and the associated outputs).

Ardizzone discloses a forward function (Ardizzone, Page 4, Equation 2) and a corresponding inverse function result from the application of each combined Z through the INN (Ardizzone, Page 4, Equations 1 and 2, which show how the INN utilizes the latent variable z to achieve a mapping between inputs and outputs by combining z through the INN), the forward function and the corresponding inverse function representing the relationship between the plurality of inputs and the associated outputs (Ardizzone, Page 4, Equations 1 and 2, where the combination of z through the INN and the invertibility of using the outputs to learn the inputs is considered a relationship between inputs and outputs).

Ardizzone discloses select Z to be 0 (Ardizzone, Page 14, Equation 11 asserts that the loss of Z = 0) and apply an inverse function (Ardizzone, Page 14, Paragraph 1, "If some bijective function f : x → z transforms a probability density p_X(x) to p_Z(z), then the inverse function f⁻¹ transforms p_Z(z) back to p_X(x)," where the inverse function f⁻¹ transforming p_Z(z) back to p_X(x) is considered applying an inverse function).

Ardizzone discloses apply at least one of the encapsulated posterior (Ardizzone, Page 2, Paragraph 1, "Thus, the INN represents the desired posterior p(x | y) by a deterministic function x = g(y, z) that transforms ("pushes") the known distribution p(z) to x-space, conditional on y," where the network applying an estimated posterior to map from the latent variables back to the input space is considered applying an encapsulated posterior) or a point estimate to the new observation (Ardizzone, Page 7, Paragraph 5, "For point estimates x̂, i.e. MAP estimates, we compute the deviation from ground-truth values x* in terms of the RMSE over test set observations y*," where the point estimates are found by an algorithm rather than by sampling, which is considered the INN producing a point estimate without sampling; see also Ardizzone, Page 8, Paragraph 1, "Each posterior uses 4096 samples, or 256 for ABC; all MAP estimates are found using the mean-shift algorithm," where the MAP estimates are found using the deterministic mean-shift algorithm).

Response to Arguments

Applicant's arguments filed 07/23/2025 have been fully considered, but they are not persuasive. An explanation and breakdown follow.

102/103: Applicant appears to argue that amended language, specifically "positioning gated recurrent units at input and output nodes of an invertible neural network (INN) to encode a plurality of inputs and associated outputs" from the amended claims, is not disclosed in Ardizzone. Examiner agrees that the amended language is not found in Ardizzone; however, the updated rejection does not rely on Ardizzone for the cited amended limitations. New prior art Zhou has been added that discloses having gated recurrent units at the input and output nodes, as set forth in the §103 section above.

101: Applicant appears to argue on page 8 that there is no abstract idea in claim 1. Examiner respectfully disagrees, as "encode a plurality of inputs and associated outputs," "observing a new input and generating a predicted output for the new input," and "encapsulating a posterior for the plurality of inputs and associated outputs" are mapped as abstract ideas, and a mental/abstract idea that requires a computer may still recite a mental process (please see MPEP 2106.04(a)(2).III.C).
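To make the claim 4-8 and claim 20 mappings above concrete, the procedure Ardizzone is cited for — sample z from a standard normal many times, push each [y, z] pair through the inverse network, and select z = 0 for a point estimate — can be sketched as below. Here `g_inverse` is a toy stand-in for a trained INN's inverse pass, not code from the reference.

```python
import numpy as np

def sample_posterior(g_inverse, y, dim_z, n_samples=4096, seed=0):
    """Approximate the posterior p(x | y) by sampling the latent
    variable z ~ N(0, I) many times and mapping each [y, z] pair
    back through the inverse network x = g(y, z)."""
    rng = np.random.default_rng(seed)
    zs = rng.standard_normal((n_samples, dim_z))
    return np.stack([g_inverse(y, z) for z in zs])

def point_estimate(g_inverse, y, dim_z):
    """Selecting z = 0 (the point of maximum density of the
    standard-normal base distribution) yields one deterministic
    estimate of x with no sampling at all."""
    return g_inverse(y, np.zeros(dim_z))
```

With a toy inverse such as `g_inverse = lambda y, z: y + 0.1 * z`, the sampled x values scatter around y while the z = 0 call returns a single estimate, mirroring the "point estimate without sampling" reading applied in the rejection.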
Applicant appears to argue on page 8 that the amended claims are similar to Example 47 in the July 2024 Subject Matter Eligibility Examples because both Example 47 and the current claims recite a GRU placed at the beginning and end of the neural network. Examiner respectfully disagrees, as each application must be viewed on its own merits. Example 47's claim 1 is cited as not reciting an abstract idea and, as noted above, the amended claim 1 recites multiple mental/abstract ideas. Further, Example 47's neurons are cited as hardware components with structural information and synaptic circuits that together form an ANN. Examiner does not find the amended claims to have similarly defined structural information comparable to Example 47's architecture.

Applicant appears to argue on pages 8-9 that the claimed architecture and interaction of components integrate into a practical application. Examiner respectfully disagrees: under Step 2A, Examiner has identified a judicial exception, and under Steps 2A and 2B, Examiner has identified the additional elements, together and as a whole, as corresponding to limitations that are not indicative of a practical application or significantly more. The claimed architecture of the INN and gated recurrent units is recited at such a high level of generality that it amounts to using a generic computer component to implement a process. Claims that require a computer may still recite a mental process (please see MPEP 2106.04(a)(2).III.C). Further, the claims as presented do not result in or highlight an improvement in neural networks, GRUs, or hardware processors. Examiner notes MPEP 2106.05(a), which provides the requirements for how an improvement to the functioning of a computer, or to any other technology or technical field, is evaluated.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after

Prosecution Timeline

May 20, 2022
Application Filed
Jan 08, 2025
Non-Final Rejection — §101, §103, §112
Jul 23, 2025
Response Filed
Oct 30, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582959
DATA GENERATION DEVICE AND METHOD, AND LEARNING DEVICE AND METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12380333
METHOD OF CONSTRUCTING NETWORK MODEL FOR DEEP LEARNING, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Aug 05, 2025


Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
93%
With Interview (+65.9%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
