Prosecution Insights
Last updated: April 19, 2026
Application No. 17/852,450

Running Bidirectional Recurrent Neural Networks in Hardware
Non-Final OA: §102, §103, §112

Filed: Jun 29, 2022
Examiner: VAUGHN, RYAN C
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 3y 9m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 62% (145 granted / 235 resolved; +6.7% vs TC avg)
Interview Lift: +19.4% (strong), across resolved cases with interview vs. without
Typical Timeline: 3y 9m avg prosecution; 45 currently pending
Career History: 280 total applications across all art units
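As a sanity check, the headline figures above reconcile arithmetically. A minimal Python sketch (using only the figures shown on this page; the function names are illustrative, not part of any real analytics tool):

```python
# Reconciling the Examiner Intelligence figures shown above.
# All inputs come from this page; helper names are illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift, in percentage points, for cases with an examiner interview."""
    return rate_with - rate_without

career = allow_rate(granted=145, resolved=235)  # ~61.7%, displayed as 62%
lift = interview_lift(rate_with=81.0, rate_without=career)

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: +{lift:.1f} points")
```

From the rounded figures this gives about +19.3 points; the +19.4% shown on the page presumably comes from unrounded with- and without-interview rates.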

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Deltas are vs. the Tech Center average estimate • Based on career data from 235 resolved cases
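Each "vs TC avg" delta above is simply the examiner's per-statute rate minus a Tech Center baseline; working backwards from the displayed numbers, all four deltas imply the same baseline. A quick sketch (figures from this page; variable names are illustrative):

```python
# Recovering the implied Tech Center baselines from the per-statute
# rates and "vs TC avg" deltas shown above.

rates = {"101": 23.9, "103": 40.1, "102": 7.6, "112": 21.9}
deltas = {"101": -16.1, "103": +0.1, "102": -32.4, "112": -18.1}

# rate = tc_avg + delta  =>  tc_avg = rate - delta
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute backs out to the same 40.0% TC baseline

# The examiner's largest shortfalls vs. the Tech Center:
weakest = sorted(deltas, key=deltas.get)[:2]
print(weakest)  # ['102', '112'] -- the statutes lagging the TC average most
```

Backing the baseline out of each rate/delta pair is a useful consistency check when a dashboard displays only rounded deltas.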

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on June 29, 2022; November 11, 2022; August 22, 2025; and September 19, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings are objected to because reference characters 602, 917, and 1100 appear in the drawings but not the specification. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification

Examiner objects to the specification for containing various grammatical informalities. Examiner has attached a marked-up copy of the specification indicating where errors have occurred. To the extent that the markings are not self-explanatory and are not corrected, Examiner will enumerate the remaining objections in a subsequent Office Action.

Claim Objections

Claim 15 is objected to because of the following informalities: “on corresponding forward and backward state” should be “on corresponding forward and backward states”. Claim 16 is objected to for dependency on claim 15. Claim 19 is objected to because of the following informalities: “at the hardware” should be “at the hardware accelerator”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “transformation unit” in claim 17 and “control logic” in claims 17 and 19. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 17-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim limitations “transformation unit” and “control logic” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions and to clearly link the structure, material, or acts to the functions. Therefore, it is unclear whether Applicant had possession of the claimed invention at the time of filing. See analysis under 35 USC § 112 infra for further detail. The following is a quotation of 35 U.S.C. 
112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 17-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim limitations “transformation unit” and “control logic” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions and to clearly link the structure, material, or acts to the functions. Regarding the “transformation unit,” the specification at paragraphs 122-128 and Figure 8, among others, describe the operations of the transformation unit. However, the description does little more than repeat the claim language reciting the generation of the forward and backward RNNs and unrolling them without providing an algorithm for how those operations are accomplished. 
Regarding the “control logic,” at most paragraph 116 indicates that the control logic may comprise software, firmware, or a dedicated processor, but does not meaningfully detail how the control logic performs the claimed function of “implement[ing] the derivative neural network at the hardware accelerator so as to perform the BRNN on the sequence of inputs.” Indeed, that same paragraph attempts to justify the lack of disclosure of an algorithm sufficient for performing the entire claimed functions by stating that the use of control logic is “known in the art”. But Applicant cannot rely on what it takes to be common knowledge in the art to provide the structure, material, or acts for performing the claimed functions; rather, it must positively identify what that structure or material or what those acts are. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may: (a) Amend the claims so that the claim limitations will no longer be interpreted as limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed functions, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the functions recited in the claims, without introducing any new matter (35 U.S.C. 132(a)). 
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the functions so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed functions, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed functions and clearly links or associates the structure, material, or acts to the claimed functions, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed functions. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

All claims dependent on a claim rejected hereunder are also rejected for being dependent on a rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4-7, and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vivekraja et al. (US 12400137) (“Vivekraja”).

Regarding claim 1, Vivekraja discloses “[a] method of implementing in hardware a bidirectional recurrent neural network (BRNN) for operation on a sequence of inputs, each step of the BRNN being for operation on (a) an input of the sequence, (b) corresponding backward state generated in respect of a subsequent input of the sequence, and (c) corresponding forward state generated in respect of a preceding input of the sequence (the input [input of sequence] of the bidirectional RNN provided to the left-to-right layer is equal to the overall input; left-to-right layer receives the input value Input [0] [preceding input] and an initial state and produces the output value LR Output [0] and the state value LR State [0] [forward state] based on the input and initial state; also, the right-to-left layer receives the input value Input [0] [subsequent input] and the state RL State [1] and produces the output value RL Output [N – 1] and the state RL State [0] [backward state] – Vivekraja, col. 6, ll. 5-16), the method comprising: receiving a representation of the BRNN (bidirectional RNN layer has additional transformation layers for modifying the input and output data of a left-to-right layer [i.e., a representation of the BRNN is received] – Vivekraja, col. 7, ll.
16-28); transforming the representation of the BRNN into a derivative neural network equivalent to the BRNN over the sequence of inputs (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) [forward + backward RNNs = derivative NN] – Vivekraja, col. 7, ll. 16-28), the derivative neural network comprising: a forward recurrent neural network (RNN) for operation on the forward state over the inputs of the sequence (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) – Vivekraja, col. 7, ll. 16-28), and a backward recurrent neural network (RNN) for operation on the backward state over the inputs of the sequence (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) – Vivekraja, col. 7, ll. 16-28), the forward and backward RNNs being unrolled over the inputs of the sequence (bidirectional RNN layer may be unrolled into multiple RNN cells that each correspond to the same one or more operations – Vivekraja, col. 7, ll. 16-28); and implementing the derivative neural network in hardware so as to perform the BRNN on the sequence of inputs (embodiments overcome the inability of data-flow centric processors [hardware] to execute bidirectional RNNs by providing techniques for modifying the tensors fed into and/or produced by the bidirectional RNNs at various stages/layers within the bidirectional RNNs [i.e., the BRNNs, including its derivative forward and backward unrolled versions, are implemented in the hardware] – Vivekraja, col. 3, ll. 
7-25).”

Claim 20 is a non-transitory computer-readable medium claim corresponding to method claim 1 and is rejected for the same reasons as given in the rejection of that claim.

Regarding claim 2, Vivekraja discloses that “each step of the derivative neural network is for operation on a different input of the sequence, and … the sequence of inputs comprises a predefined plurality of inputs (Vivekraja Fig. 3 shows that each of the forward and backward RNN iterations operates on one of inputs Input [0] to Input [N – 1] and that there are N inputs [i.e., a predefined plurality]).”

Regarding claim 4, Vivekraja discloses that “for each of the sequence of inputs, the steps of the forward and backward RNNs for operation over that input are in combination equivalent to the step of the BRNN for operation on that input (Vivekraja Fig. 3 and col. 5, l. 56-col. 6, l. 44 disclose that the BRNN 300 operates on input 310 and generates output 320, and when unrolled and decomposed into forward and backward RNNs, that input is decomposed into a sequence Input [0] to Input [N – 1] and each input of the sequence is processed by a different iteration of the BRNN cell, with the result that the same final output 320 is produced [i.e., the steps of the original and decomposed NNs yield equivalent results]).”

Regarding claim 5, Vivekraja discloses that “the transforming the representation of the BRNN into a derivative neural network comprises: forming the forward RNN by grouping together operations of the BRNN performed in dependence on the forward state (Vivekraja Fig. 3 shows that the operations of generating LR Outputs [0]-[N – 1] are performed by the forward RNN [i.e., grouped as operations to be performed thereby] in dependence on LR states [0] – [N – 2] [forward states]); and forming the backward RNN by grouping together operations of the BRNN performed in dependence on the backward state (Vivekraja Fig.
3 shows that the operations of generating RL Outputs [0]-[N – 1] are performed by the reverse RNN [i.e., grouped as operations to be performed thereby] in dependence on RL states [N - 1] – [1] [backward states]).”

Regarding claim 6, Vivekraja discloses that “the forward and backward operations at each step of the BRNN are independent, each step of the forward RNN is for operation on an input of the sequence and its corresponding forward state, and each step of the backward RNN being for operation on an input of the sequence and its corresponding backward state (Vivekraja Fig. 3 shows that the BRNN is broken down into forward and reverse RNNs, that the first iteration of the forward RNN cell takes an initial LR state [forward state] and an input Input [0] [input of sequence] and produces LR State [0], the second iteration takes LR State [0] and Input [1] as input and produces LR State [1], etc., and that conversely, the first iteration of the reverse RNN cell takes an initial RL state [backward state] and an input Input [N - 1] [input of sequence] and produces RL State [N - 1], the second iteration takes RL State [N - 1] and Input [N - 2] as input and produces RL State [N - 2], etc.).”

Regarding claim 7, Vivekraja discloses that “the implementing the derivative neural network comprises implementing the forward and backward RNNs for concurrent operation at the hardware so as to perform the forward and backward RNNs in parallel (acceleration engine can be a neural network accelerator that may be able to perform large scale, parallel computations of a neural network more efficiently than when the computations are performed by a CPU – Vivekraja, col. 11, ll. 42-58; col. 6, ll.
5-16 disclose that for a first iteration, the left-to-right layer produces the value LR Output [0] and state LR State [0] and that, in the same iteration, the right-to-left layer produces the output value RL Output [N – 1] and the state RL State [0] [i.e., the forward and backward operations occur concurrently]).” Regarding claim 13, Vivekraja discloses that “the implementing the derivative neural network in hardware comprises initialising the derivative neural network with initial forward and backward input state values for each pair of forward and backward RNNs (Vivekraja Fig. 3 discloses that the forward network of the unrolled and decomposed BRNN is initialized with an initial LR state and that the reverse network is initialized with an initial RL state).” Regarding claim 14, Vivekraja discloses that “the implementing the derivative neural network in hardware comprises allocating forward and backward indices to each input of the input sequence such that the forward RNN references its inputs using the forward index and the backward RNN references its inputs using the backward index, the forward and backward indices being arranged such that a first input of the sequence according to the forward index is the last input of the sequence according to the backward index, and a first input of the sequence according to the backward index is the last input of the sequence according to the forward index (Vivekraja Fig. 
3 shows that each forward and backward sequence is indexed from 0 to N – 1, that the backward RNN references its inputs in the order N – 1, N – 2, …, 0 and that the forward RNN references its inputs in the order 0, 1, …, N – 1 [i.e., N – 1 is the last index of the forward sequence and the first of the backward sequence, and 0 is the first index of the forward sequence and the last of the backward sequence]).”

Regarding claim 15, Vivekraja discloses that “the BRNN comprises a plurality of stacked layers each representing a bidirectional recurrent neural network (Vivekraja col. 3, ll. 7-25 disclose that the BRNN contains multiple [stacked] layers; Fig. 3 discloses that each layer is decomposed into forward and backward layers that collectively perform BRNN functions for that layer), and the transforming the representation of the BRNN is performed in respect of each of the layers of the BRNN so as to generate a pair of forward and backward RNNs for each layer of the BRNN, each pair of forward and backward RNNs operating in dependence on corresponding forward and backward state[s] of that pair (Vivekraja Fig. 3 discloses that each layer of the BRNN is decomposed into forward and backward layers that operate in dependence on forward (LR) states and backward (RL) states, respectively).”

Regarding claim 16, Vivekraja discloses that “the transforming the BRNN further comprises configuring the derivative neural network such that, for each of the sequence of inputs, the outputs of the uppermost pair of forward and backward RNNs in respect of that input are combined so as to generate a combined output which is equivalent to the output of the BRNN in respect of that input (Vivekraja Fig. 3 and col. 6, ll. 35-44 disclose that outputs of the left-to-right layer and those of the right-to-left layer are combined by, for example, concatenating LR Output [0] with RL Output [N – 1], LR Output [1] with RL Output [N – 2], etc.
[including LR Output [N - 1] with RL Output [0], i.e., the outputs of the uppermost iterations of each BRNN cell], and the combination of outputs forms the overall output [that is equivalent to the unrolled bidirectional RNN layer]).”

Regarding claim 17, Vivekraja discloses “[a] data processing system for implementing a bidirectional recurrent neural network (BRNN) for operation on a sequence of inputs, each step of the BRNN being for operation on (a) an input of the sequence, (b) corresponding backward state generated in respect of a subsequent input of the sequence, and (c) corresponding forward state generated in respect of a preceding input of the sequence (the input [input of sequence] of the bidirectional RNN provided to the left-to-right layer is equal to the overall input; left-to-right layer receives the input value Input [0] [preceding input] and an initial state and produces the output value LR Output [0] and the state value LR State [0] [forward state] based on the input and initial state; also, the right-to-left layer receives the input value Input [0] [subsequent input] and the state RL State [1] and produces the output value RL Output [N – 1] and the state RL State [0] [backward state] – Vivekraja, col. 6, ll. 5-16), the system comprising: a transformation unit configured to receive a representation of the BRNN and transform the representation of the BRNN into a derivative neural network (bidirectional RNN layer has additional transformation layers for modifying the input and output data of a left-to-right layer [i.e., a representation of the BRNN is received]; bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) [i.e., the BRNN can be broken down into forward + backward RNNs, i.e., a derivative NN] – Vivekraja, col. 7, ll.
16-28) comprising: a forward recurrent neural network (RNN) for operation on the forward state over the inputs of the sequence (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) – Vivekraja, col. 7, ll. 16-28), and a backward recurrent neural network (RNN) for operation on the backward state over the inputs of the sequence (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) – Vivekraja, col. 7, ll. 16-28), the forward and backward RNNs being unrolled over the inputs of the sequence (bidirectional RNN layer may be unrolled into multiple RNN cells that each correspond to the same one or more operations – Vivekraja, col. 7, ll. 16-28), the derivative neural network being equivalent to the BRNN for each of the sequence of inputs (bidirectional RNN layer consists of two unidirectional RNN layers, one operating from left to right (the forward RNN) and the other operating from right to left (the reverse RNN) [i.e., the BRNN can be broken down into equivalent forward + backward RNNs] – Vivekraja, col. 7, ll. 16-28); a hardware accelerator for processing neural networks (data-flow centric processor may be a data-flow centric accelerator –Vivekraja, col. 9, ll. 8-20); and control logic configured to implement the derivative neural network at the hardware accelerator so as to perform the BRNN on the sequence of inputs (embodiments overcome the inability of data-flow centric processors [hardware] to execute bidirectional RNNs by providing techniques for modifying the tensors fed into and/or produced by the bidirectional RNNs at various stages/layers within the bidirectional RNNs [i.e., the BRNNs, including its derivative forward and backward unrolled versions, are implemented in the hardware] – Vivekraja, col. 3, ll. 
7-25; data-flow centric processor may be a data-flow centric accelerator – id. at col. 9, ll. 8-20).” Regarding claim 18, Vivekraja discloses that “the hardware accelerator and the control logic are incapable of executing the received representation of the BRNN (due to the arrangement of the bidirectional RNN, zero padding the inputs fed into the unidirectional RNNs will cause the bidirectional RNN to render an erroneous output; embodiments overcome the inability [incapability] of data-flow centric processors [hardware] to execute bidirectional RNNs [as received and without further processing] by providing techniques for modifying the tensors fed into and/or produced by the bidirectional RNNs at various stages/layers within the bidirectional RNNs – Vivekraja, col. 3, ll. 2-25; data-flow centric processor may be a data-flow centric accelerator – id. at col. 9, ll. 8-20).” Regarding claim 19, Vivekraja discloses that “the forward and backward operations at each step of the BRNN are independent, each step of the forward RNN is for operation on an input of the sequence and its corresponding forward state, and each step of the backward RNN being for operation on an input of the sequence and its corresponding backward state (Vivekraja Fig. 
3 shows that the BRNN is broken down into forward and reverse RNNs, that the first iteration of the forward RNN cell takes an initial LR state [forward state] and an input Input [0] [input of sequence] and produces LR State [0], the second iteration takes LR State [0] and Input [1] as input and produces LR State [1], etc., and that conversely, the first iteration of the reverse RNN cell takes an initial RL state [backward state] and an input Input [N - 1] [input of sequence] and produces RL State [N - 1], the second iteration takes RL State [N - 1] and Input [N - 2] as input and produces RL State [N - 2], etc.), wherein the control logic is configured to implement the derivative neural network by implementing the forward and backward RNNs for concurrent operation at the hardware [accelerator] so as to perform the forward and backward RNNs in parallel (acceleration engine can be a neural network accelerator that may be able to perform large scale, parallel computations of a neural network more efficiently than when the computations are performed by a CPU – Vivekraja, col. 11, ll. 42-58; col. 6, ll. 5-16 disclose that for a first iteration, the left-to-right layer produces the value LR Output [0] and state LR State [0] and that, in the same iteration, the right-to-left layer produces the output value RL Output [N – 1] and the state RL State [0] [i.e., the forward and backward operations occur concurrently]).”

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Vivekraja in view of Wei et al. (US 11769035) (“Wei”). Regarding claim 3, Vivekraja appears not to disclose explicitly the further limitations of the claim. 
However, Wei discloses that “the transforming comprises either: unrolling the forward and backward RNNs over the predefined plurality of inputs prior to forming the derivative neural network in hardware (operations include determining, based on information related to network structure, the input data sequence, and the computing resource capacity information, an execution pattern including a rolled or unrolled execution pattern; the RNN is then executed according to the determined execution pattern – Wei, col. 8, l. 54 -col. 9, l. 20 and Fig. 5 [i.e., the determination of whether to unroll and the unrolling occur prior to execution, i.e., instantiating the network in hardware]); or unrolling the BRNN over the predefined plurality of inputs prior to forming the forward and backward RNNs.” Wei and the instant application both relate to bidirectional RNNs and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vivekraja to unroll the network prior to instantiating it in hardware, as disclosed by Wei, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would improve the hardware resource utilization by ensuring that the decision to unroll is appropriate to the hardware. See Wei, col. 2, l. 54-col. 3, l. 27. Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Vivekraja in view of Stanton et al. (US 20210035551) (“Stanton”). Regarding claim 9, the rejection of claim 1 is incorporated. Vivekraja further discloses that “the implementing the derivative neural network comprises causing the … layer to process a plurality of inputs of the sequence of inputs in parallel at the hardware (acceleration engine can be a neural network accelerator that may be able to perform large scale, parallel computations of a neural network more efficiently than when the computations are performed by a CPU – Vivekraja, col. 11, ll. 42-58; col. 
6, ll. 5-16 disclose that for a first iteration, the left-to-right layer produces the value LR Output [0] and state LR State [0] and that, in the same iteration, the right-to-left layer produces the output value RL Output [N – 1] and the state RL State [0] [i.e., the forward and backward operations occur in parallel]).” Vivekraja appears not to disclose explicitly the further limitations of the claim. However, Stanton discloses that “the transforming the representation of the BRNN into a derivative neural network comprises: identifying non-causal operations which are for performance without dependence on forward or backward state (one or more convolutional filters in a CBHG neural network are non-causal convolutional filters, i.e., convolutional filters that, at a given time step T, can convolve with surrounding inputs in both directions (e.g., T – 1, T – 2 and T + 1, T + 2, etc.) [i.e., they can be performed with both forward and backward states and are not dependent solely on either one or the other] – Stanton, paragraph 55), and forming a non-causal layer of the derivative neural network by grouping together at least some of the non-causal operations (one or more convolutional filters in a CBHG neural network are non-causal convolutional filters, i.e., convolutional filters that, at a given time step T, can convolve with surrounding inputs in both directions (e.g., T – 1, T – 2 and T + 1, T + 2, etc.) [the grouping of non-causal filters in a given layer creates a non-causal layer] – Stanton, paragraph 55) ….” Stanton and the instant application both relate to bidirectional RNNs and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vivekraja to form non-causal layers for non-causal operations of the network, as disclosed by Stanton, and an ordinary artisan could reasonably expect to have done so successfully. 
Doing so would relax the constraints on inputs to the network by allowing it to receive future data as inputs, thereby increasing the flexibility of the network. See Stanton, paragraph 55. Regarding claim 10, the rejection of claim 9 is incorporated. Vivekraja further discloses that “the forward and backward operations at each step of the BRNN are independent, each step of the forward RNN is for operation on an input of the sequence and its corresponding forward state, and each step of the backward RNN being for operation on an input of the sequence and its corresponding backward state (Vivekraja Fig. 3 shows that the BRNN is broken down into forward and reverse RNNs, that the first iteration of the forward RNN cell takes an initial LR state [forward state] and an input Input [0] [input of sequence] and produces LR State [0], the second iteration takes LR State [0] and Input [1] as input and produces LR State [1], etc., and that conversely, the first iteration of the reverse RNN cell takes an initial RL state [backward state] and an input Input [N - 1] [input of sequence] and produces RL State [N - 1], the second iteration takes RL State [N - 1] and Input [N - 2] as input and produces LR State [N - 2], etc.) ….” Vivekraja appears not to disclose explicitly the further limitations of the claim. However, Stanton discloses that “the identified non-causal operations are of the forward and backward RNNs (Stanton paragraph 55 and Fig. 
2 show that the non-causal convolutional filters of the 1D convolutional subnetwork and the bidirectional recurrent neural network are both part of a CBHG neural network, i.e., the non-causal operations are of the RNNs by virtue of being part of the same system as the RNNs).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vivekraja to perform non-causal operations of the RNNs, as disclosed by Stanton, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would relax the constraints on inputs to the network by allowing it to receive future data as inputs, thereby increasing the flexibility of the network. See Stanton, paragraph 55. Regarding claim 11, Vivekraja, as modified by Stanton, discloses that “the grouping together comprises combining the at least some non-causal operations for performance as a single convolution operation over the plurality of inputs of the sequence of inputs (one or more convolutional filters in a CBHG neural network are non-causal convolutional filters, i.e., convolutional filters that, at a given time step T, can convolve with surrounding inputs in both directions (e.g., T – 1, T – 2 and T + 1, T + 2, etc.) [the grouping of non-causal filters in a given layer creates a non-causal layer performing a single convolution operation] – Stanton, paragraph 55).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vivekraja to perform multiple non-causal operations at once, as disclosed by Stanton, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would relax the constraints on inputs to the network by allowing it to receive future data as inputs, thereby increasing the flexibility of the network. See Stanton, paragraph 55. 
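The forward/backward decomposition that the rejections attribute to Vivekraja Fig. 3 (claims 10 and 19) can be made concrete in a few lines. The sketch below is illustrative only: neither the Office action nor the cited references provide source code, and every name in it (rnn_cell, bidirectional_rnn, the tanh cell, the parameter layout) is a hypothetical stand-in, not the claimed implementation.

```python
import numpy as np

def rnn_cell(x, state, W_x, W_h, b):
    """One RNN step: combine the current input with the previous state."""
    return np.tanh(x @ W_x + state @ W_h + b)

def bidirectional_rnn(inputs, fwd_params, bwd_params, init_fwd, init_bwd):
    """Run a BRNN as two independent unidirectional passes.

    Mirrors the decomposition described in the rejection: the forward (LR)
    pass walks Input[0..N-1] while the backward (RL) pass walks
    Input[N-1..0]. Each pass keeps its own state, so iteration t of both
    passes can execute concurrently on parallel hardware.
    """
    n = len(inputs)
    s_f, s_b = init_fwd, init_bwd
    lr_states, rl_states = [], []
    for t in range(n):
        s_f = rnn_cell(inputs[t], s_f, *fwd_params)          # LR State[t]
        s_b = rnn_cell(inputs[n - 1 - t], s_b, *bwd_params)  # RL State[N-1-t]
        lr_states.append(s_f)
        rl_states.append(s_b)
    rl_states.reverse()  # align backward states with input positions
    # Per-position output concatenates the aligned forward/backward states.
    return [np.concatenate([f, b]) for f, b in zip(lr_states, rl_states)]
```

Because the two loops share no state, the sketch also illustrates why zero padding fed to either unidirectional pass would corrupt the concatenated output, the incapability the claim 18 citation relies on.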
Allowable Subject Matter

Claims 8 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571) 272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125

1. It is unclear how it can be the case both that the BRNN contains multiple layers and that each layer of the BRNN itself constitutes a BRNN. To ensure consistency, Examiner will construe the "representing" language as meaning that each layer performs functions of a BRNN (as opposed to functions of other types of neural networks).
Prosecution Timeline

Jun 29, 2022
Application Filed
Feb 19, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602448
PROGRESSIVE NEURAL ORDINARY DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12602610
CLASSIFICATION BASED ON IMBALANCED DATASET
2y 5m to grant Granted Apr 14, 2026
Patent 12561583
Systems and Methods for Machine Learning in Hyperbolic Space
2y 5m to grant Granted Feb 24, 2026
Patent 12541703
MULTITASKING SCHEME FOR QUANTUM COMPUTERS
2y 5m to grant Granted Feb 03, 2026
Patent 12511526
METHOD FOR PREDICTING A MOLECULAR STRUCTURE
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
81%
With Interview (+19.4%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
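The headline figures above follow from simple arithmetic on the examiner's career data. The page does not publish its formula, so the additive interview-lift model below is an assumption made for illustration; it reproduces the displayed 62% and 81%:

```python
# Figures taken from the dashboard above; the additive lift model is assumed.
granted, resolved = 145, 235
allow_rate = granted / resolved               # career allow rate ~ 0.617
interview_lift = 0.194                        # observed lift with interview
with_interview = allow_rate + interview_lift  # ~ 0.811

print(f"{allow_rate:.0%}")       # prints "62%"
print(f"{with_interview:.0%}")   # prints "81%"
```

A real model would likely condition on art unit and statute mix rather than add a flat lift, so treat this purely as a check that the displayed numbers are internally consistent.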
