Prosecution Insights
Last updated: April 19, 2026
Application No. 17/852,964

NUMBER FORMAT SELECTION FOR BIDIRECTIONAL RECURRENT NEURAL NETWORKS

Final Rejection — §103

Filed: Jun 29, 2022
Examiner: MOUNDI, ISHAN NMN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 2 (Final)

Grant Probability: 12% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 12% (2 granted / 16 resolved; -42.5% vs TC avg)
Interview Lift: +33.3% across resolved cases with interview
Avg Prosecution: 4y 6m (typical timeline)
Total Applications: 57 across all art units (41 currently pending)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Claims 1, 18, and 20 have been amended. Claims 1-20 remain pending in the application. The amendment filed 11/28/2025 is sufficient to overcome the 35 U.S.C. 101 rejections of claims 1-20. The previous rejections have been withdrawn.

Response to Arguments

Argument 1: regarding the 101 rejections, applicant argues that the claims integrate the abstract ideas into the practical application of applying a number format selection algorithm to a bidirectional recurrent neural network (BRNN) by deriving a common number format, which results in increased accuracy, efficiency, and reduction in size of a BRNN. Examiner agrees, and the 35 U.S.C. 101 rejections have been withdrawn.

Argument 2: regarding the prior art rejections, applicant argues that Peyser does not teach operating the test neural network on the sequence of input tensors and collecting statistics for provision to a number format selection algorithm, because P0029 of Peyser recites providing numeric data to an end-to-end model as input instead of an end-to-end model being used to collect numeric data. Examiner respectfully disagrees. After reviewing P0029 of Peyser, examiner does not see numeric data being provided as input to the end-to-end model. Furthermore, P0030 of Peyser recites the model collecting acoustic frames as input and analyzing the input to recognize numeric data. As recited in the original rejection, P0005 and P0010 explicitly recite an end-to-end decoder model being used to obtain numeric data. Thus examiner concludes that Peyser teaches operating the test neural network on the sequence of input tensors and collecting statistics for provision to a number format selection algorithm (in view of page 5 of the specification of the instant application, the number format selection algorithm may be an end-to-end model; an end-to-end model may collect sequences of numeric data, ultimately to determine a numeric representation, P0005, P0010).

Applicant also argues that there is no motivation to combine the teachings of Le Roux and Peyser. Examiner respectfully disagrees. Le Roux and Peyser are both directed towards using machine learning to analyze acoustic data. The teachings of Peyser, specifically analyzing acoustic data for the purposes of collecting numeric data, are relevant to the teachings of Le Roux because Le Roux recites analyzing acoustic frames to generate corresponding transcription outputs (see Le Roux P0072).

Applicant also argues that the cited portion of Liu does not teach "applying a number format selection algorithm to the statistics so as to derive a common number format for a plurality of instances of one or more selected tensors of the test neural network". Applicant argues that the cited portion of Liu is not directed towards applying a number format selection algorithm to statistics. Examiner respectfully disagrees because Liu explicitly recites applying backpropagation to numeric representations of data (see Liu C3:L22-25). Applicant acknowledges that the number format selection algorithm taught in Liu may be intended to be applied to the numeric data taught in Peyser, but argues that there is no motivation to combine the references in this manner.
Examiner respectfully disagrees because both Peyser and Liu are combined in view of Le Roux and, as recited in the rejection below, one would have been motivated to make such a combination of back-propagation to quantize weight values and adjust their numeric format (see Liu C3:L22-25, C4:L14-17) and backpropagating derivatives of a loss function to train a dropout neural network (see Le Roux P0116, P0089).

Applicant also argues that the motivation above is not applicable because merely reciting back-propagation in both Le Roux and Liu is not enough of a motivation to combine the references. Examiner respectfully disagrees because back-propagation merely being recited in both references is not the entire motivation. Both references recite the use of back-propagation in a neural network used to analyze acoustic and audio data (see Liu C2:L50-61, C3:L5-17 and Le Roux P0005, P0045).

Applicant also argues that none of the cited art teaches "using the derived common number format to configure a hardware implementation of the BRNN, the one or more selected tensors of the BRNN being represented in the derived common number format". Examiner respectfully disagrees because Liu teaches this limitation ("The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in compression hardware… the storage medium can be integral to the compression device. The compression device and the storage medium can reside in an ASIC. The ASIC can reside in a device configured to train neural network models", C19:L6-8, C19:L18-22; under the broadest reasonable interpretation, tensors are interpreted to be multi-dimensional vectors, with a one-dimensional vector being a rank-1 tensor, and input and output signals for a neural network to be implemented in hardware may be vectors, C1:L8-10, C1:L23-25).

The full prior art rejections are outlined below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Le Roux et al. (Pub. No. US 20210319784 A1), hereafter Le Roux, in view of Peyser et al. (Pub. No. US 20200349922 A1), hereafter Peyser, and Liu et al. (US 10229356 B1), hereafter Liu.
Regarding claims 1, 18, and 20: Le Roux teaches receiving a representation of the BRNN (an encoder module receives a recurrent neural network which may be a bidirectional RNN, P0089); implementing the representation of the BRNN over a sequence of input tensors as a test neural network, each step of the test neural network being for operation on (a) an input tensor of the sequence (the encoder model takes input signals and generates outputs based on data corresponding to their time steps, P0075; under the broadest reasonable interpretation, tensors are interpreted to be multi-dimensional vectors, with a one-dimensional vector being a rank-1 tensor, and input signals may be vectors, P0101), (b) a corresponding backward state tensor generated in respect of a subsequent input tensor of the sequence (the backward recurrent neural network generates vectors with respect to input vectors, P0091), and (c) a corresponding forward state tensor generated in respect of a preceding input tensor of the sequence (the forward recurrent neural network generates vectors with respect to input vectors, P0090), the test neural network comprising: a forward recurrent neural network (RNN) for operation on the forward state tensors over the input tensors of the sequence, and a backward recurrent neural network (RNN) for operation on the backward state tensors over the input tensors of the sequence (the forward and backward recurrent neural networks operate on forward and backward vectors respectively based on their input vectors, P0089-P0091).

Le Roux does not appear to explicitly teach "operating the test neural network on the sequence of input tensors and collecting statistics for provision to a number format selection algorithm". Peyser teaches this limitation (in view of page 5 of the specification of the instant application, the number format selection algorithm may be an end-to-end model; an end-to-end model may collect sequences of numeric data, ultimately to determine a numeric representation, P0005, P0010).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Le Roux and Peyser before them, to include Peyser's specific teaching of collecting data for an end-to-end model to determine a numeric representation in Le Roux's method of detecting adversarial attacks. One would have been motivated to make such a combination of collecting numeric data to determine a numeric representation with an end-to-end model which may be part of a bidirectional RNN (see Peyser P0010, P0039) and backpropagating derivatives of a loss function to train a dropout neural network which operates in the same system as the bidirectional RNN (see Le Roux P0116, P0089).

Le Roux in view of Peyser does not appear to explicitly teach "applying a number format selection algorithm to the statistics so as to derive a common number format for a plurality of instances of one or more selected tensors of the test neural network; and using the derived common number format to configure a hardware implementation of the BRNN, the one or more selected tensors of the BRNN being represented in the derived common number format".
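For orientation: the limitations quoted above describe running the network on sample data to gather per-tensor statistics, which are later fed to a format selection step. Below is a minimal sketch of the statistics-gathering side, assuming NumPy and simple tanh recurrent cells; every name, shape, and the (min, max) choice of statistic is a hypothetical illustration, not drawn from the application or the cited references.

    import numpy as np

    def run_test_brnn(inputs, cell_fwd, cell_bwd, h0_fwd, h0_bwd):
        # Unroll a bidirectional RNN over a sequence of input tensors and
        # record per-instance statistics for a number format selection step.
        fwd_states, bwd_states = [], []
        h = h0_fwd
        # Forward RNN: each step operates on an input tensor and the forward
        # state generated in respect of the preceding input in the sequence.
        for x in inputs:
            h = cell_fwd(x, h)
            fwd_states.append(h)
        h = h0_bwd
        # Backward RNN: each step operates on an input tensor and the backward
        # state generated in respect of the subsequent input in the sequence.
        for x in reversed(inputs):
            h = cell_bwd(x, h)
            bwd_states.append(h)
        bwd_states.reverse()
        # One (min, max) record per instance of each selected tensor.
        stats = [(float(t.min()), float(t.max())) for t in fwd_states + bwd_states]
        return fwd_states, bwd_states, stats

    # Hypothetical usage with random weights and a 5-step sequence:
    W, U = np.random.randn(8, 8), np.random.randn(8, 8)
    cell = lambda x, h: np.tanh(x @ W + h @ U)
    seq = [np.random.randn(4, 8) for _ in range(5)]
    _, _, stats = run_test_brnn(seq, cell, cell, np.zeros((4, 8)), np.zeros((4, 8)))

Statistics gathered this way would feed the format selection step sketched after the claim 15 discussion below.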
Liu teaches applying a number format selection algorithm to the statistics so as to derive a common number format for a plurality of instances of one or more selected tensors of the test neural network (back-propagation may be used to improve the accuracy of the model by increasing the precision of the numeric representations for the weights, C3:L22-25; "the quantization may be a combination of these techniques such as adjusting the number of decimals or numeric format used to represent weights followed by range fixing", C4:L14-17); and using the derived common number format to configure a hardware implementation of the BRNN, the one or more selected tensors of the BRNN being represented in the derived common number format ("The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in compression hardware… the storage medium can be integral to the compression device. The compression device and the storage medium can reside in an ASIC. The ASIC can reside in a device configured to train neural network models", C19:L6-8, C19:L18-22; under the broadest reasonable interpretation, tensors are interpreted to be multi-dimensional vectors, with a one-dimensional vector being a rank-1 tensor, and input and output signals for a neural network to be implemented in hardware may be vectors, C1:L8-10, C1:L23-25).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Le Roux, Peyser, and Liu before them, to include Liu's specific teaching of using back-propagation to quantize weight values and adjust their numeric format in Le Roux's method of detecting adversarial attacks. One would have been motivated to make such a combination of back-propagation to quantize weight values and adjust their numeric format (see Liu C3:L22-25, C4:L14-17) and backpropagating derivatives of a loss function to train a dropout neural network (see Le Roux P0116, P0089).

Regarding claim 2: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches wherein the BRNN is a parallel BRNN or a sequential BRNN (the bidirectional RNN is sequential, P0089).

Regarding claim 3: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches wherein the test neural network is configured such that the forward and backward RNNs operate independently on each input tensor, each step of the forward RNN being for operation on an input tensor of the sequence and its corresponding forward state tensor, and each step of the backward RNN being for operation on an input tensor of the sequence and its corresponding backward state tensor (the forward and backward RNNs operate independently, with the forward RNN focused on input vectors corresponding to forward sequences and the backward RNN focused on input vectors corresponding to backward sequences, P0089-P0091).

Regarding claim 4: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches wherein the test neural network comprises a plurality of steps, each step being for operation on a different input tensor of the sequence (sequences of data and vectors are operated on in time steps, P0073, P0075).

Regarding claim 5: Le Roux in view of Peyser and Liu teaches the limitations of claim 4 as outlined above.
Peyser further teaches wherein the applying the format selection algorithm comprises applying the number format selection algorithm to the statistics captured over all of the plurality of steps, the common number format being output by the number format selection algorithm (the E2E (end-to-end) decoder model includes an encoder which reads a sequence of d-dimensional feature vectors to generate higher-order feature representations at each time step, P0035; "The attention/decoder portion is configured process non-trivial tags from the tagger portion to obtain a numeric representation for the numeric sequence of the utterance in the written domain", P0015).

Regarding claim 6: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches wherein the forward RNN is configured to generate a set of forward output tensors and the backward RNN is configured to generate a set of backward output tensors (the forward RNN is configured to generate forward output vectors and the backward RNN is configured to generate backward vectors, P0089-P0091), and the one or more selected tensors includes both the forward output tensor and the backward output tensor (forward and backward hidden vectors are concatenated to form a vector sequence, P0092-P0093).

Regarding claim 7: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Liu further teaches wherein the common number format is a block-configurable number format defined by one or more configurable parameters (numeric representations may be adjusted by tuning weights of a model for rounding precision or adjusting the number of decimals, C4:L4-17).

Regarding claim 8: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Liu further teaches wherein the number format selection algorithm is configured to identify a block-configurable number format of a predefined type of block-configurable number format ("Whether referencing the forward pass or the back-propagation, another way to improve the accuracy of the model is to increase the precision of the numeric representations for the weights.", C3:L22-25).

Regarding claim 9: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Liu further teaches wherein applying the number format selection algorithm comprises: independently identifying a number format for each instance of the one or more selected tensors in the test neural network ("When quantizing a model, the floating point weight values are quantized into weight values which require less memory to represent, such as fixed point or integers… Each quantized weight value may represent a step or bucket into which weight values of a given range would be placed.", C3:L44-47, C3:L58-60); and combining the number formats for the plurality of instances of the one or more selected tensors so as to derive the common number format for the plurality of instances of the one or more selected tensors ("the quantization may be a combination of these techniques such as adjusting the number of decimals or numeric format used to represent weights followed by range fixing", C4:L14-17).

Regarding claim 10: Le Roux in view of Peyser and Liu teaches the limitations of claim 9 as outlined above.
Liu further teaches wherein the number format selection algorithm is configured to identify a block-configurable number format defined by one or more configurable parameters for each instance of the one or more selected tensors ("Whether referencing the forward pass or the back-propagation, another way to improve the accuracy of the model is to increase the precision of the numeric representations for the weights.", C3:L22-25).

Regarding claim 11: Le Roux in view of Peyser and Liu teaches the limitations of claim 10 as outlined above. Liu further teaches wherein the combining comprises independently combining each of the one or more configurable parameters of the block-configurable number formats identified for each instance of the one or more selected tensors so as to define the one or more configurable parameters for the common number format ("the quantization may be a combination of these techniques such as adjusting the number of decimals or numeric format used to represent weights followed by range fixing", C4:L14-17).

Regarding claim 12: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Liu further teaches wherein the operating the test neural network is performed with each instance of the one or more selected tensors in a floating point number format ("Floating point numbers can be used to convey more numerical information than fixed point numbers because floating point numbers to not fix the number of decimal places available for a number. Models which are trained using floating point numbers can therefore represent more states than a fixed point based model", C3:L25-30).

Regarding claim 13: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches grouping together operations of the BRNN performed in dependence on forward state generated in respect of a preceding input of the sequence so as to form the forward RNN (in the forward RNN, vector computations are dependent on forward time series inputs, P0090); grouping together operations of the BRNN performed in dependence on backward state generated in respect of a subsequent input of the sequence so as to form the backward RNN (in the backward RNN, vector computations are dependent on backward time series inputs, P0091); and unrolling the forward and backward RNNs over the sequence of input tensors (the forward and backward RNNs are represented as a sequence of vectors corresponding to a time step in the input sequence, P0089-P0091).

Regarding claim 14: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches the forward recurrent neural network (RNN) for operation on forward state generated, for each input, in respect of a preceding input of the sequence (the forward RNN operates on forward time-series input data, P0090); and the backward recurrent neural network (RNN) for operation on backward state generated, for each input, in respect of a subsequent input of the sequence (the backward RNN operates on backward time-series input data, P0091).

Regarding claim 15: Le Roux in view of Peyser and Liu teaches the limitations of claim 14 as outlined above. Le Roux further teaches wherein the number of inputs in the sequence of inputs is different from the number of input tensors in the sequence of input tensors (the value for the input gates in each of the forward and backward RNNs is calculated with the value of the input sequence vector, meaning the values for each are different, P0090-P0091).
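As a companion to the earlier sketch, the two-step selection recited in claims 9-11 above (independently identify a block-configurable format per tensor instance, then combine the per-instance formats into a common format) might reduce to something like the following for a simple shared-exponent fixed point format. The mantissa width, the take-the-maximum combination rule, and all names are illustrative assumptions, not the applicant's or Liu's actual algorithm.

    import math

    def format_for_instance(t_min, t_max, mantissa_bits=8):
        # Identify a block-configurable format for one tensor instance: here
        # the single configurable parameter is a shared exponent chosen so
        # the observed range fits in signed mantissa_bits-wide integers.
        max_abs = max(abs(t_min), abs(t_max), 2.0 ** -126)  # avoid log2(0)
        return math.ceil(math.log2(max_abs)) - (mantissa_bits - 1)

    def common_format(stats, mantissa_bits=8):
        # Combine the per-instance formats: for a shared exponent, taking
        # the maximum yields one common format covering every instance.
        return max(format_for_instance(lo, hi, mantissa_bits) for lo, hi in stats)

    # Hypothetical usage with the statistics collected by the earlier sketch:
    # exponent = common_format(stats)  # all instances then share 2**exponent scaling

Consistent with claim 12, the test network itself would run in floating point while these formats are derived; the derived common format would then configure the hardware implementation.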
Regarding claim 16: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches wherein the sequence of input tensors comprise exemplary input values selected to represent a typical or expected range of input values to the BRNN when implemented in hardware for operation on the sequence of inputs ("embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of… hardware,", P0146).

Regarding claim 17: Le Roux in view of Peyser and Liu teaches the limitations of claim 1 as outlined above. Le Roux further teaches the forward recurrent neural network (RNN) for operation on forward state generated, for each input, in respect of a preceding input of the sequence (the forward RNN performs computations on forward time series input data, P0090), and the backward recurrent neural network (RNN) for operation on backward state generated, for each input, in respect of a subsequent input of the sequence (the backward RNN performs computations on backward time series input data, P0091). Liu further teaches performing the derivative neural network in hardware on the sequence of inputs using the common number format selected as set forth in claim 1 (the backpropagation and quantization of values for a common numeric representation may be performed on a neural network implemented in hardware, C16:L65-67, C3:L22-25, C4:L14-17).

Regarding claim 19: Le Roux in view of Peyser and Liu teaches the limitations of claim 18 as outlined above. Le Roux further teaches further comprising a hardware accelerator for processing neural networks, wherein the control logic is further configured to cause the BRNN to be performed in hardware ("embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of… hardware,", P0146). Peyser further teaches …by implementing the test neural network at the hardware accelerator using the common number format for the one or more selected tensors (implementation may include the use of an accelerator, P0068, and data processing hardware, P0006).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI, whose telephone number is (703) 756-1547. The examiner can normally be reached 8:30 A.M. - P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.N.M./ Examiner, Art Unit 2141
/MATTHEW ELL/ Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Jun 29, 2022 — Application Filed
Aug 22, 2025 — Non-Final Rejection (§103)
Nov 28, 2025 — Response Filed
Feb 05, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12561970 — METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the single most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 12%
With Interview: 46% (+33.3% lift)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability is derived from the career allow rate (2 granted of 16 resolved, about 12.5%, displayed as 12%); the with-interview figure appears to add the +33.3-point interview lift, giving about 46%.
